Others are reporting this more and more. In 2025, the problem was ChatGPT and other LLMs "hallucinating" and doing really screwy things.
Now it seems to be getting very lazy and forgetful, like people.
Its script is failing -> I ask it why -> It says it is trying to access port 5432, but that port is blocked for some reason, so it will change the script to use port 5433. Then 5433 eventually gets blocked, and it goes back to 5432, and back and forth until I ask it to "really" fix the problem. And it says, "Oh sure, you're right, I will fix it this time." Well, it seems to work for a day or so, and then it doesn't work anymore. Or it fixes a bug and introduces another, and back and forth you go until you get after it and explain that you can't live with this behavior and the problem really needs to be fixed. Sometimes you have to keep after it until it seems to get everything right. Just right. If only you could get it to remember that right way, because inevitably, sooner or later, it will screw it up again. You can tell it to write notes to itself (as Markdown files or comments in the code), which certainly helps. But when things get complicated, i.e., not simple anymore, it can have trouble finding that information and starts to screw up again.
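For what it's worth, the fix I keep asking for is less "try the other port" and more "tell me why the port is blocked." A minimal sketch of that idea in Python, assuming a local Postgres-style service on 5432 (the host, port, and function name here are just illustrative, not what my script actually looks like):

```python
import socket
import sys

HOST = "localhost"   # assumption: the script talks to a local database
PORT = 5432          # Postgres default, as in the story above

def check_port(host: str, port: int, timeout: float = 3.0) -> None:
    """Try a TCP connection and report the actual failure,
    instead of silently falling back to another port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            print(f"Port {port} on {host} is reachable.")
    except ConnectionRefusedError:
        sys.exit(f"Port {port} refused the connection -- is the service running?")
    except TimeoutError:
        sys.exit(f"Port {port} timed out -- likely a firewall or network issue.")
    except OSError as exc:
        sys.exit(f"Port {port} failed: {exc}")

if __name__ == "__main__":
    check_port(HOST, PORT)
```

The point is that the error message says whether the connection was refused or filtered, so the next change can address the actual cause instead of ping-ponging between 5432 and 5433.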
But at least, I guess, it seems so human, although irritatingly stupid and incompetent. And I can only tell myself it is my fault for not making things clear enough up front, although I am not yet convinced that even that would help. At least I have Alexa and Siri around to say nice things to me and calm me down.