• Welcome to AppraisersForum.com, the premier online community for the discussion of real estate appraisal.

20 Minute AI Appraisals Are Coming

How many F/F reports do you think have been submitted that indicated that the current use isn't the HBU? Maybe somewhere between none and much less than 1%, making it a virtual non-issue. Even if the report says "no" on the HBU question, chances are the lender will waive it away.
I can remember only one improved parcel where the existing SFR was not its HBU. It was zoned two-family with no grandfathering or exceptions, period. One side of a street of SFRs was zoned this way. No rhyme or reason, except that when the city rezoned the busy street one block over to two-family, it included the improved SFR lots that backed up to it as well. Not a single person at the zoning department thought it was sane, but that's how it stood, since it had been done before most of them worked there.

The bad part was that most of the SFR lots couldn't support a two-family due to setback, lot coverage, and parking requirements. My subject was one of those lots. I submitted the report stating that the obvious HBU was the existing use "as is," but that it couldn't be, because of current zoning and the lack of grandfathering or any exceptions. The current use could continue, but if destroyed, the subject couldn't be reconstructed without a zoning variance. I figured the deal was dead, until about six weeks later I got a request for a 1004D update. The owners had gone before the zoning board and been granted a variance. Fastest I had ever seen a variance granted in this jurisdiction; 3-6 months was the norm.
 
they are all zoned single family...good luck looking through the city ordinances in 30 seconds :rof:
 
Not just the local rules, but also looking at the property itself.

I always measure and diagram the apparent setbacks and take note of embankments and terraces and elevation changes and such. I don't know if "ADU potential" will become a line item adjustment in the future, but it's possible. It mostly depends on how the market acts. Not on whether the lenders are savvy enough to ask the question.
 
Another example is when the structure straddles the lot line between two parcels or (potentially) encroaches into the side setbacks. On paper it looks like two lots, but the strict local jurisdiction - or the neighbors - might object to their being split up.

I'm not suggesting any of this will happen, but it could. What I dread is the prospect that an AI can be trained to notice things about a property that most appraisers would either miss or simply blow off as unimportant. Not necessarily now, but at some point in the not-that-distant future.
 
  • Like
Reactions: TC

Scientists discover major differences in how humans and AI 'think' — and the implications could be significant

News
By Drew Turney published April 1, 2025
Study finds that AI fundamentally lacks the human capability to make creative mental connections, raising warning signs for how we deploy AI tools.





AI models struggle to form analogies about complex subjects the way humans can, meaning their use in real-world decision-making could be risky. (Image credit: imaginima/Getty Images)

We know that artificial intelligence (AI) can't think the same way as a person, but new research has revealed how this difference might affect AI's decision-making, leading to real-world ramifications humans might be unprepared for.


The study, published Feb. 2025 in the journal Transactions on Machine Learning Research, examined how well large language models (LLMs) can form analogies.


The researchers found that in both simple letter-string analogies and digital matrix problems — where the task was to complete a matrix by identifying the missing digit — humans performed well but AI performance declined sharply.

While testing the robustness of humans and AI models on story-based analogy problems, the study found the models were susceptible to answer-order effects — differences in responses due to the order of treatments in an experiment — and may have also been more likely to paraphrase.

Altogether, the study concluded that AI models lack robust "zero-shot" learning abilities, in which a learner observes samples from classes that weren't present during training and predicts which class new samples belong to.


Related: Punishing AI doesn't stop it from lying and cheating — it just makes it hide better, study shows

Co-author of the study Martha Lewis, assistant professor of neurosymbolic AI at the University of Amsterdam, gave an example of how AI can't perform analogical reasoning as well as humans in letter string problems.

"Letter string analogies have the form of 'if abcd goes to abce, what does ijkl go to?' Most humans will answer 'ijkm', and [AI] tends to give this response too," Lewis told Live Science. "But another problem might be 'if abbcd goes to abcd, what does ijkkl go to? Humans will tend to answer 'ijkl' – the pattern is to remove the repeated element. But GPT-4 tends to get problems [like these] wrong."
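The two rules Lewis describes can be made concrete with a short sketch. This is purely illustrative (not code from the study); the function names are hypothetical, and the point is only that the "human" answers follow from two simple abstractions: increment the last letter, or remove the repeated element.

```python
# Illustrative sketch of the two letter-string analogy rules from the
# quoted example. Function names are made up for this example.

def increment_last(s: str) -> str:
    """Apply the 'abcd -> abce' rule: bump the final letter by one."""
    return s[:-1] + chr(ord(s[-1]) + 1)

def remove_repeat(s: str) -> str:
    """Apply the 'abbcd -> abcd' rule: drop the first doubled letter."""
    for i in range(len(s) - 1):
        if s[i] == s[i + 1]:
            return s[:i] + s[i + 1:]
    return s  # no repeated element found

print(increment_last("ijkl"))  # ijkm - the answer most humans (and GPT) give
print(remove_repeat("ijkkl"))  # ijkl - the abstraction GPT-4 tends to miss
```

Both rules are trivial to state once abstracted; the study's point is that humans infer the rule from a single example, while the models pattern-match and stumble on the less common variant.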

Why it matters that AI can't think like humans

Lewis said that while we can abstract from specific patterns to more general rules, LLMs don't have that capability. "They're good at identifying and matching patterns, but not at generalizing from those patterns."

Most AI applications rely to some extent on volume — the more training data is available, the more patterns are identified. But Lewis stressed pattern-matching and abstraction aren't the same thing. "It's less about what's in the data, and more about how data is used," she added.

RELATED STORIES
—'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without even knowing it

—If any AI became 'misaligned' then the system would hide it just long enough to cause harm — controlling it is a fallacy

—ChatGPT isn’t 'hallucinating' — it's just churning out BS
To give a sense of the implications, AI is increasingly used in the legal sphere for research, case law analysis and sentencing recommendations. But with a lower ability to make analogies, it may fail to recognize how legal precedents apply to slightly different cases when they arise.

Given that this lack of robustness could affect real-world outcomes, the study argued that AI systems should be carefully evaluated not just for accuracy but also for the robustness of their cognitive capabilities.



  1. BW
    Bob Whitcombe, 19 days ago
    It is still very early in AI development, and any such article that purports an AI can "think" at this point in time needs to be carefully sedated and put back in its room. The algorithms and models are "simple" for AI, and of necessity abstractions of a process we really don't understand in human "thinking", much less of what enables human consciousness.









 
Copyright © 2000-, AppraisersForum.com, All Rights Reserved
AppraisersForum.com is proudly hosted by the folks at AppraiserSites.com