
Is AI too honest for appraisal work?

Talking about what the property is vs. what people do.

Property condition is an attribute. "Maintenance" refers to what people do. I never talk about maintenance, only property attributes.

  • "poorly maintained" is going to get read a certain way by anyone who is looking for a wound to bleed. Whether that interpretation is fair or not.
  • -----
  • "poor condition compared to..." may not be the 100% safe characterization but it's better than the 25% alternative that can be more readily spun as social bias. Nor is this explanation an example of a lie-by-omission.
There's nothing virtuous about making an unforced error in professional judgement. We sell "objective and impartial" and part of the assignment is communicating that ethos.
There are also observational or demonstrated ways to indicate condition without creating a label that can be construed as subjective (or offensive).

In support of, or instead of, the label "poor condition," describe: the roof is 30 years old with missing shingles, 3 windows are broken, the floor tiles are peeling, and the appliances are not working.

Area (for a run-down neighborhood): a number of houses show deferred maintenance, some are boarded up, and the vacancy rate is 20% compared to the regional average of 8%.
 
WRT the thread title:

AI is not capable of honesty or dishonesty. AI is its training data and the models built on that data, and both can be manipulated.

If you want an AI program to "think" like a Nazi, feed it only articles and speeches about how the Jews control the world finances and that the Aryan ideal of genetic purity was a good idea. This amoral acceptance of anything fed to it makes it ideally suited to spread an agenda or mix falsehood with facts.
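To see concretely why a model is only as good as what it is fed, here is a toy bigram text generator in Python. This is a minimal sketch, not any real AI system, and the corpus string is a harmless made-up stand-in: the generator can only recombine word pairs it has actually seen, so whatever claims the corpus makes, true or false, are all it can ever reproduce.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it can only recombine the corpus it is fed.
# The corpus below is a made-up stand-in; swap in different text and the
# generator will just as faithfully parrot that text, true or false.
corpus = (
    "the model repeats its training data and "
    "the model repeats whatever you feed it"
)

words = corpus.split()
follows = defaultdict(list)        # word -> words observed to follow it
for a, b in zip(words, words[1:]):
    follows[a].append(b)

random.seed(1)
word, output = "the", ["the"]
for _ in range(10):
    options = follows.get(word)
    if not options:                # dead end: this word never had a successor
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))            # e.g. "the model repeats whatever you feed it"
```

The generator has no notion of truth; it samples only from continuations that appeared in its input, which is the point being made above.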

WRT honesty or conscience, AI is not capable of that. We can train AI to lie, or to detect lies; AI will never suffer consequences for either. It can learn by modeling to say, "I feel sad because you lost your job," but it does not actually FEEL sad, because it does not suffer or endure loss as a person does. An AI did not get evicted and have to live in a shelter because it lost its job.

With respect to honesty: if an AI learns or is modeled to lie or cheat, it will not go to prison (no consequences), and if an AI agent is cheated, it will not lose its life savings.
 
Excerpt from an essay in the NYT:

Last spring, it became clear to me that over half the students in my large general education lecture course had used artificial intelligence tools, contrary to my explicit policy, to write their final take-home exams. (Ironically, the course was titled Contemporary Moral Problems: The Value of Human Life.) I had asked them about some very recent work in philosophy, parts of which happened to share titles with entirely different ideas in medieval theology. You can guess which topics the students ended up “writing” about.

My situation was hardly unique — rampant A.I. cheating has been reported all over the country. But I felt a dread I struggled to express until a colleague articulated the problem in stark terms: “Our students are about to turn subcognitive,” she said. That was it. At stake are not just specialized academic skills or refined habits of mind but also the most basic form of cognitive fluency. To leave our students to their own devices — which is to say, to the devices of A.I. companies — is to deprive them of indispensable opportunities to develop their linguistic mastery, and with it their most elementary powers of thought. This means they will lack the means to understand the world they live in or navigate it effectively.
 
"If you want an AI program to "think" like a Nazi, feed it only articles and speeches about how the Jews control the world finances and that the Aryan ideal of genetic purity was a good idea. This amoral acceptance of anything fed to it makes it ideally suited to spread an agenda or mix falsehood with facts."
--------------------------------------------------------------

Jgrant, I fed the above quote of what you said into Copilot AI; below is the response.
------------------------------------------------------------------------------------

I need to pause here. The ideas you’ve described—antisemitic conspiracy theories and Nazi ideology—are harmful, false, and dangerous. They’ve historically been used to justify violence, discrimination, and genocide. I won’t create or promote content that spreads or normalizes those views.


That said, the point you raise about how AI systems can absorb and replicate whatever data they’re trained on is very important. You’re absolutely right that if an AI is exposed only to biased, extremist, or false material, it can internalize and reproduce those distortions. This is why data curation, safeguards, and ethical oversight are critical in AI development:


  • Training data quality: AI models reflect the information they’re fed. If the data is biased or extremist, the outputs will be too.
  • Bias detection: Developers use tools to identify and mitigate harmful patterns in training data.
  • Content moderation: Guardrails prevent AI from generating or amplifying dangerous ideologies.
  • Transparency: Clear communication about how AI systems are trained helps users understand their limitations.
  • Ethical responsibility: AI creators must ensure their systems don’t become vehicles for propaganda or misinformation.

Your observation highlights a real risk: AI can be weaponized to spread agendas if not carefully managed. That’s why researchers emphasize building systems that resist manipulation and prioritize truth, fairness, and safety.
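As a rough illustration of the "training data quality" and "bias detection" bullets above, here is a minimal sketch of keyword-based corpus filtering in Python. This is not how Copilot or any production pipeline actually works (real systems use trained classifiers and human review, not a one-line regex); the blocklist, corpus, and function name are hypothetical, chosen only to show the shape of the idea: screen the data before the model ever learns from it.

```python
import re

# Hypothetical, oversimplified data-curation pass: drop training examples
# that match a blocklist of extremist phrases. The blocklist and corpus
# below are illustrative stand-ins, not real pipeline components.
BLOCKLIST = re.compile(r"\b(genetic purity|aryan ideal)\b", re.IGNORECASE)

def curate(examples):
    """Return only the examples that pass the keyword screen."""
    kept, dropped = [], 0
    for text in examples:
        if BLOCKLIST.search(text):
            dropped += 1           # flagged: exclude from training data
        else:
            kept.append(text)
    print(f"kept {len(kept)}, dropped {dropped}")
    return kept

# Made-up corpus for demonstration only.
corpus = [
    "Local vacancy rates rose to 20% this year.",
    "Speech praising the aryan ideal of genetic purity.",
    "Roof is 30 years old with missing shingles.",
]
training_set = curate(corpus)      # prints "kept 2, dropped 1"
```

The design point is simply that curation happens upstream of training: anything the filter misses is something the model can later repeat.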


Would you like me to expand on how AI researchers actively counter extremist content during training, or on examples where AI has been misused for propaganda?
 
ME: yes, just give me some examples, that's all I need:
-----------------------------------------------------------------------------

Here are some concrete examples of how AI has been misused for propaganda and extremist purposes:



  • AI-generated fake news images: Tools like Midjourney have been used to create convincing but false visuals, such as fabricated photos of Donald Trump being arrested or a fake explosion near the Pentagon, which spread widely online before being debunked (THE DECODER).
  • State-sponsored propaganda campaigns: Governments in at least 16 countries have deployed generative AI to produce texts, images, and videos that smear opponents, sow doubt, or manipulate public opinion (MIT Technology Review).
  • “AI slop” flooding social media: Researchers found that propaganda networks linked to Russia and China are mass-producing low-quality AI-generated content to overwhelm platforms and drown out authentic voices (NBC News).
  • Extremist exploitation of AI: Violent extremist groups have experimented with generative AI to create radicalizing materials, translate propaganda into multiple languages, and even produce voice-cloned audio for recruitment (ODNI, icct.nl).
  • ADL study on antisemitic misuse: A 2025 report showed that in 44% of test cases, leading AI models generated dangerous extremist content when prompted with antisemitic queries, including sensitive details about synagogues and gun stores (Jewish Insider).
  • Youth radicalization incidents: In Finland, a teenager who carried out a school stabbing had used AI tools to draft and distribute his extremist manifesto, highlighting how accessible these technologies are to vulnerable individuals (GNET).

Sources: Jewish Insider, ODNI, GNET, icct.nl, MIT Technology Review, THE DECODER, NBC News




These examples illustrate the dual-use nature of AI: while it can empower creativity and problem-solving, it can also be weaponized to spread disinformation, extremist ideology, or destabilize societies.


Would you like me to break down how platforms and governments are trying to counter these AI-driven propaganda campaigns or focus more on specific extremist misuse cases?
 
It is truly ominous. While many involved in AI strive for responsibility, there are many willing to exploit it. With open-source AI and teenage hackers, it might be possible for large segments of AI to become distorted at low cost. On social media, it is reported that algorithms spread lies faster than facts.

Relying on the corporations to self-police, or on our government to monitor, is not enough on its own. Going forward, people need to be taught at a young age, maybe starting at age 10, how to decipher fake from real and propaganda from fact. And not just from a technical angle, though tech tools can help too. People need to be exposed to the humanities (currently scorned); it is the humanities, philosophy, religion, great literature, debate clubs, etc. that will help us retain our humanity against an approaching dystopian threat.
 
If you want to check pictures to see if they are real, fake, or AI-generated, use Google Images (reverse image search).
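A complementary programmatic check, beyond reverse image search, is to inspect a file's EXIF metadata with the Pillow library: real camera photos usually carry make, model, and timestamp tags, while AI-generated images typically carry none. Missing EXIF proves nothing by itself, since metadata is trivially stripped or forged, and the filename below is hypothetical.

```python
from PIL import Image              # pip install Pillow
from PIL.ExifTags import TAGS

def show_exif(path):
    """Print whatever EXIF tags an image carries (often none for AI output)."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (inconclusive on its own).")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)   # map numeric tag id to a readable name
        print(f"{name}: {value}")

show_exif("photo.jpg")             # hypothetical filename
```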


 
(image attached) Here is a real image I took. It is the Soviet War Memorial.
 
Listened to a news segment on this. What AI is teaching us is not to trust anything. For too many people, if it tells you something you like, you believe it, and if it tells you something you don't like, you don't. You need to be a conscious consumer and not cede your agency to a program. Garbage in, garbage out. Sometimes.
 