Which version are you using, if you're using GPT at all? Newer versions are appearing everywhere very quickly.
I primarily use ChatGPT Plus with GPT-4, which is $20/month. I also use bard.google.com, which is free. Note that Bard can draw on current information from the web, so it has access to the latest research and information publicly available online. ChatGPT's training data, by contrast, only runs up to 2021.
However, it appears that ChatGPT is somewhat more sophisticated and advanced in many areas, although neither I nor most people can really say, because drawing any firm conclusion would require an immense amount of broadly targeted testing.
Anybody can download an open-source language model (OpenAI's own models aren't downloadable, but comparable open-source ones are) and run it against their own databases. If you had a broker connection to a good MLS, as I do, you could in time download most of the data, including pictures, and go to work doing your own analysis. However, I don't think the results would be that great. An appraiser with years of working in a specific neighborhood, with access to the same MLS data and good analysis tools like MARS (multivariate adaptive regression splines), is way ahead.
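For what that kind of analysis might look like: below is a minimal sketch, assuming MLS sales exported to a CSV and using py-earth, an open-source MARS implementation. The column names are hypothetical, and this is a starting point, not a production valuation model.

```python
# A minimal sketch of a MARS-style analysis of exported MLS sales data.
# Assumes the py-earth package; all column names here are illustrative.
import pandas as pd
from pyearth import Earth  # open-source MARS implementation

# Load exported MLS sales (hypothetical file and columns)
sales = pd.read_csv("mls_export.csv")
X = sales[["gla", "lot_size", "year_built", "bedrooms", "baths"]]
y = sales["sale_price"]

# Fit a MARS model: piecewise-linear basis functions with automatic
# knot selection, similar in spirit to the MARS tools appraisers use
model = Earth(max_degree=2)
model.fit(X, y)

print(model.summary())          # selected basis functions and pruning trace
print(model.predict(X.head()))  # predicted prices for the first few sales
```

Even with a sketch like this running, the appraiser's neighborhood knowledge is what tells you which variables and which comps actually matter.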
1. These ChatGPT-style programs scan existing text looking for time-sensitive or causal patterns. For example, given a research paper, the model translates the text into numerical vectors and symbols and tries to infer causal relationships. It never really understands the underlying concepts in any depth, and its "understanding" is very prone to error and confusion. It is not a good logic machine. It may learn one thing here and a contradictory thing there; it simply assigns probabilities to both. Relation X might get a 40% probability of being true and relation Y 30%. If asked for output that requires choosing between the two when creating an explanation, it will likely choose X 40% of the time and Y 30% of the time (see the first sketch after this list).
2. Thus when you ask it to regenerate a response multiple times in a row, it will often give you different responses, because it starts from different "random seeds." It is selecting among the probabilities of patterns it has mined from data and presenting a resulting pattern that it has very little real understanding of. It is hit-and-miss. It is sophisticated guessing. (People who do well on tests are often just good guessers.) The same sketch below illustrates this.
3. What is lacking in these programs is a true understanding of the structures and causal relationships in the world: the ability to test for logic, e.g. contradictions; to check their output against the laws of science and math; to recognize the contradictions within their own arguments; and, even more, the ability to correct those contradictions. The second sketch below shows the kind of explicit check that is missing.
4. ChatGPT suffers from the lack of a good, tight Knowledge Base. Note that it was the hope of AI researchers in the 80s to create extensive Knowledge Bases and Expert Systems. The reality was that this was far more difficult than expected: there aren't enough sufficiently smart people in the world to build such error-free systems for all the kinds of tasks they could be put to. So the hope has shifted to systems that can think for themselves and discover truth without human intervention. That may be further from reality than many currently hope; it's hard to say. These systems are good for getting people started in a new direction, to kickstart them, but they are a long way from handling real-world tasks, especially those that require feet on the ground, eyes, ears, and flexible interaction with the real world.
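To make points 1 and 2 concrete, here is a toy sketch of probability-weighted guessing. This is emphatically not how a transformer actually works internally (real models sample token by token from a learned distribution), but it illustrates why different seeds yield different, sometimes contradictory, regenerations. The relations and weights are invented.

```python
# Toy illustration of points 1 and 2: the model holds contradictory
# "relations" with learned probabilities and samples among them, so
# different seeds produce different answers to the same prompt.
import random

# Hypothetical mined relations and the probabilities assigned to them
relations = ["X causes Z", "Y causes Z", "Z has no single cause"]
weights   = [0.40, 0.30, 0.30]

def generate_explanation(seed):
    random.seed(seed)  # a different "random seed" per regeneration
    return random.choices(relations, weights=weights, k=1)[0]

# Regenerating the same prompt with different seeds yields different,
# mutually contradictory explanations, roughly in proportion to weight
for seed in range(5):
    print(seed, generate_explanation(seed))
```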
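And to make points 3 and 4 concrete: below is a minimal forward-chaining expert system of the 80s sort, in plain Python. The rules and facts are invented for illustration, and a real expert system would be far richer, but the point is that the contradiction check is explicit, which is exactly what the statistical approach lacks.

```python
# A toy 80s-style expert system (point 4) with an explicit
# contradiction check (point 3). Rules and facts are invented.

# Each rule: (set of premises, conclusion)
rules = [
    ({"mammal"}, "warm_blooded"),
    ({"bird"}, "warm_blooded"),
    ({"reptile"}, "not warm_blooded"),
]

def forward_chain(facts):
    # Repeatedly fire rules whose premises hold until nothing new is derived
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def contradictions(facts):
    # A fact and its explicit negation cannot both hold
    return {f for f in facts if f"not {f}" in facts}

facts = forward_chain({"mammal", "reptile"})  # deliberately inconsistent input
print(facts)
print("contradictions:", contradictions(facts))  # flags 'warm_blooded'
```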
One telling experience I had, as already mentioned, was trying to get ChatGPT to create a mathematical proof. I finally coaxed it into generating the first 4 steps correctly, but the 5th wasn't correct. When I asked it to please correct the 5th step, it regenerated the first 4 steps as well, this time with new errors and different terminology. What it did made no sense. You could tell it was starting from scratch each time and regenerating everything according to whatever probabilities it thought were most prevalent. In other words, it couldn't begin to understand the actual logic of the argument; it was just guessing based on the probabilities it had assigned from scanning outside material.
- Oh, and by the way: if you get too nit-picky in trying to tell it exactly what to do, it will often just ignore you, because it really doesn't allow for that level of control.