- Joined
- Jun 27, 2017
- Professional Status
- Certified General Appraiser
- State
- California
Yes, YOU stated the correct role. Claude did not, even though the information was readily available. As I noted before, my main point was to illustrate the problem with posting output from LLMs without verification.
I am sorry that pointing out that error put you into full Zoe mode.
You don't have a clue how neural networks operate. They train by extrapolating from past data to results, progressively reducing error. The software builds a deep network of neurons containing highly complex logic that is hidden inside a black box. For the AI system, facts are only indirectly perceived and are not really facts; they are always predictions of truth with some degree of reality. The result is rarely close to 100% accuracy; rather, the system is happy to achieve the same accuracy a person could achieve with the same data. Although, of course, it often does much better than people.
You fail to admit your own errors. At least the AI system does not have that problem.
I don't have to "defend" AI. It is what it is.
LinkedIn statements of "current position" are not that accurate. Many of the current positions listed on LinkedIn have likely changed since the individuals last updated their profiles, whether because they changed jobs or were fired. Many people are very slow to update their profiles. And many positions are, depending on the company, quite tenuous. The 16-year position carries more weight than the 1-year position.
Now ask yourself what it really means to be a current director, VP, or CEO, or any officer at the Appraisal Institute. How frigging meaningful is it? Just look at the past list of officers, including Cindy Chance. They come and they go.
Does it really mean that much to say someone has been in a leadership position for a year or so at the Appraisal Institute? I know plenty of people, so-called full-time employees, especially nowadays, who get a job and then get laid off after a year.
The connections in a neural network can run pretty deep. It trains on the probability of truth, because facts are hard to come by in this day and age.
It's a black box that aims at the most probable truth. Its actual logic is hidden from us. It could be spread across 256 layers, each containing perhaps 30 neurons, yet eventually compressed into a compact form and sent to a highly efficient set of GPUs for processing.
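Those layer and neuron counts are just illustrative figures, but they give a sense of scale. A quick back-of-the-envelope count of the weights such a fully connected network would hold (assuming, hypothetically, a 30-feature input and a bias per neuron):

```python
# Rough parameter count for the hypothetical network described above:
# 256 fully connected layers of 30 neurons each (illustrative figures only).
layers = 256
width = 30

# Each layer: a width x width weight matrix plus one bias per neuron.
weights_per_layer = width * width + width

total_params = layers * weights_per_layer
print(total_params)  # 238,080 weights and biases
```

Even this modest-sounding architecture carries almost a quarter-million tunable numbers, which is why nobody can read the logic out of the trained weights by inspection.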
So, I conclude you just don't know very much about training or programming neural networks. Please take a course at least, which means you will have to do some Python and train your own neural networks: run forward propagation to make a prediction; compare the prediction to the actual result and calculate the difference (the error); then run backward propagation to calculate the partial derivatives of the weight matrices and use them to modify the old weights so the next prediction lands closer to the actual results. Run that maybe 20,000 times, until you are satisfied you can't improve the accuracy on your set of maybe 200,000 inputs, or whatever. You may create some very deep and complex neural networks, but you will never be able to fully figure them out. I suppose it is possible, but it would definitely be an unpleasant exercise.
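The loop described above can be sketched in a few dozen lines of NumPy. This is a minimal illustration, not anyone's production model: the network shape, learning rate, and toy XOR data (4 rows instead of 200,000) are my assumptions, chosen only so the forward/error/backward cycle is visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR problem (illustrative stand-in for a real dataset).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 neurons (a real "deep" net stacks many more).
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

lr = 0.5
for step in range(20_000):
    # Forward propagation: make a prediction.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Compare the prediction to the actual result: the error.
    err = pred - y

    # Backward propagation: partial derivatives with respect to each weight matrix.
    d_pred = err * pred * (1 - pred)
    d_h = (d_pred @ W2.T) * h * (1 - h)

    # Use the gradients to modify the old weights toward better ones.
    W2 -= lr * h.T @ d_pred
    b2 -= lr * d_pred.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(pred.ravel(), 2))
```

After 20,000 iterations the predictions track the targets closely, yet the "logic" the network learned is just those four weight arrays, which is exactly the black-box problem described above.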
That's why we have MARS (multivariate adaptive regression splines) for appraisal: because we need to be able to explain the results.