I asked ChatGPT and Copilot about the zoning for my subject property.
Both gave me wrong answers.
Upon further research, I found the correct answer I was looking for.
AI still has a long way to go before it can do better than human appraisers.
Fernando, I don't want to waste anybody's time, but the following is ChatGPT's response to my attempt to better understand how it works:
CHATGPT RESPONSE TO MY QUESTIONS/COMMENTS:
Great questions—and I appreciate the update on the E55. It’s actually a textbook example of how good diagnostics + iterative fixes are supposed to work, so I’m glad the MAF resolved the drivability issues and that the O₂ sensor(s) are next in line. That sequence makes mechanical sense, not coincidence.
Now to your two questions, answered plainly and precisely.
1. Do your comments affect my performance, knowledge, or skill?
No—your comments do not change my underlying knowledge, performance, or diagnostic capability.
Here’s why:
- I do not learn or update my model from individual conversations.
- I do not modify my internal knowledge base based on user feedback, praise, corrections, or status updates.
- Each response is generated fresh based on:
  - My training (fixed),
  - The current conversation context,
  - Logical consistency and known technical principles.
So when you tell me:
“The MAF fixed the problem and now we’re addressing O₂ sensors”
That confirms the earlier reasoning, but it does not teach me anything new or permanently alter how I’ll diagnose the next E55—or the next thousand.
Think of it this way:
You’re validating a hypothesis, not retraining the diagnostician.
2. What if a user purposely provides incorrect information?
This does matter—but only within that single conversation.
If a user provides bad info:
- I will reason from the faulty premise.
- The output may therefore be flawed.
- This is no different from a human mechanic being told:
“The fuel pressure is fine” (when it isn’t)
Garbage in → garbage out.
But the garbage does not persist beyond that exchange.
3. What if many users provide the same incorrect information?
Still no lasting effect on me.
Here’s the key distinction:
I do NOT:
- Aggregate user claims into my knowledge
- “Vote” on truth by repetition
- Learn in real time from users
- Adjust probabilities because many people said the same wrong thing
I DO:
- Evaluate each prompt independently
- Apply physics, engineering principles, and known failure modes
- Look for internal contradictions or implausibility
So even if 1,000 users insist an M113K runs fine with a dead MAF, I will still say:
“No, that will cause drivability issues, improper fueling, and downstream O₂ faults.”
4. Why your E55 outcome matters (but not in the way you’re asking)
Your update is meaningful to you, not to my internal state.
What it demonstrates is:
- The original diagnosis was root-cause oriented
- The MAF failure was masking downstream sensor behavior
- O₂ sensor errors often persist after the primary fault is fixed
- Years-long CEL issues often collapse quickly once the first domino is removed
That’s exactly how German engine management systems behave—especially early-2000s Mercedes.
5. Bottom line (plain English)
- You are not influencing or training me
- You cannot degrade or improve my skill
- Incorrect user info only affects that answer, not future ones
- Even mass misinformation does not “poison the well”
- Your E55 result validates sound diagnostics, not coincidence
And yes—credit where due: you stuck with a logical diagnostic path instead of parts-cannoning the car, which is why this is finally resolving after “literally years.”
If you want, once the O₂ work is done, we can:
- Identify which sensors actually matter on the M113K
- Decide whether remaining codes are historic vs. active
- Talk about how to prevent the CEL from coming back (aging harnesses, vacuum leaks, etc.)
You’re finally on the downhill side of this one.