
New URAR Demo

The real answer is quite simple. The UAD was redesigned because that redesign was long overdue. If there is an important issue with the home (a needed repair or a special feature), the reader generally has to go on a scavenger hunt to find the information in the report, because it could be anywhere, and that is just silly given today's technology.

I find the "eliminate appraisers" conspiracy theories quite interesting, especially given the history of appraisers. Did appraisers switch to forms software to create reports because of some appraiser conspiracy to put typists and forms and worms out of business? Did appraisers switch to digital photography because of some conspiracy to put one-hour photo shops out of business? Or was all that just a result of evolving technology?

I note that nothing that you have written above, DW, contradicts what I wrote in the (admittedly fanciful) paragraph of mine that you quoted. ;) You just don't address the "collateral damage" issue at all. (See what I did there? "Collateral" :LOL:)

There is also a well-known principle at play here, about making sudden big changes for the sake of change: "just because you can doesn't mean you should".

I can't disagree with you that a systems and software upgrade was necessary after 15 years. My only beef, and that of many others, is that the "upgrade" is so onerous that it will have a very destructive effect on the independent appraiser profession. (Not to mention that much of the "new" data required to be collected is, in my opinion, irrelevant to the task at hand, i.e., property valuation.)

To use a metaphor: it's like the new system requires a bunch of Cessna pilots to take the controls of a SpaceX rocket with a minimum of training, and without learning on an intermediate platform first.

Anyhow. Just the view from my little corner - and some food for thought. :unsure:
 
The form is not as easy to read as the 2-page URAR. It's like reading a novel. I don't know how you find anything easily. I don't know how you reconcile a 2-page market grid. Sometimes, looking at the 1-page grid, you can make a decision about changing an adjustment. But on this form you're going to be looking back and forth, so you'd better have a good memory, and you'll be flipping back and forth making sure you got each line right. And I don't know how all the other info is even going to be read, or cared about, by the reviewer. Maybe they have a program that dings you for every checkbox where you didn't comment about differences.
 
Perhaps the report, when final and "printed," will appear in a standardized image, regardless of the software provider. THAT would be useful. Come to think of it, why do users need to select a provider? Why isn't a library of standardized forms available from FNMA or as open source documents available to all users without a fee???
 
When I get ready, I am going to have ChatGPT parse the Fannie Mae/Freddie Mac guidelines into Prolog and report logic imperfections. Not only that .....
 
Here it is:

Me to ChatGPT: It appears you have already parsed Fannie Mae Appraisal Guidelines. What do you think of them?

ChatGPT said:​

Yes — I’ve parsed, indexed, and cross-referenced the Fannie Mae Selling Guide (especially Part B4: Appraisal Requirements and Part B2-3: Loan Eligibility), along with related documents like the Appraiser Independence Requirements (AIR) and Uniform Appraisal Dataset (UAD) specifications.
Here’s a candid, analytical view of the Fannie Mae appraisal guidelines, from both a technical and appraisal-practice standpoint:

1. Structural Strengths​

a. Standardization & Auditability
  • The guidelines bring a rigorous, machine-readable structure to appraisal data via UAD and MISMO XML schemas.
  • This has enabled massive automation of collateral review through the Collateral Underwriter (CU) system and, increasingly, AI-assisted QC scoring.
  • For compliance and traceability, the framework is internally consistent — ideal for database or Prolog-based rule extraction.
b. Risk-Based Calibration
  • The shift from prescriptive appraisal forms to risk-tiered eligibility (CU risk bands, rep-warrant relief thresholds, etc.) is conceptually modern.
  • It lets lenders allocate human review where model uncertainty is high rather than treating all appraisals equally.
c. Integration with AVM & Data Validation Pipelines
  • The “Property Data Collection” (PDC) and “Value Acceptance + Property Data” programs are technically elegant: they use standardized 1004P data and allow modular substitution of the appraiser’s role with certified data collectors.

⚠️ 2. Conceptual Weaknesses​

a. Circular Dependence on Market Data Quality
  • Fannie’s comparability and adjustment guidance assumes MLS/public data are accurate and representative.
  • However, in heterogeneous markets or rural areas, the CU’s comparable selection and flagging logic can misclassify “outliers” that are actually contextually appropriate.
b. Under-Specification of the Analytical Logic
  • Many “must support with appropriate analysis” directives are normative, not operational.
  • This leaves wide variance between appraisers and reviewers — something your Residual Constraint Approach (RCA) is trying to formalize and standardize.
c. Compression Toward Median Behavior
  • The CU feedback loop incentivizes conformity: appraisers tend to “write to the model.”
  • This dampens discovery of new market phenomena (e.g., value bifurcation due to zoning, flood risk, ADU premiums).

3. From a Knowledge-Representation View​

  • The guideline corpus is highly Prolog-friendly: rules like
    Code:
    comparable(Sale) :- verified_market_data(Sale), similar_location(Sale), similar_features(Sale).
    adjustment_required(Attribute) :- difference(Attribute,Threshold), not(insignificant(Attribute)).
    can be derived directly from textual directives.
  • The challenge lies in the “gray zones” — “must be bracketed,” “should reflect,” “consistent with,” etc. These are semi-formal and need fuzzy logic or probabilistic extensions to encode faithfully.
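To make that concrete, here is one way a "gray zone" directive such as bracketing could be soft-encoded as a score rather than a hard pass/fail; the predicate names, weights, and partial-credit value below are illustrative, not anything published in the UAD or the Selling Guide:

Code:
    % "Comparables should bracket the subject" treated as a soft score.
    % Score 1.0 when at least one adjusted comp price sits at or above the
    % indicated value and at least one sits at or below it; 0.5 otherwise.
    bracketing_score(Prices, Indicated, 1.0) :-
        member(P, Prices), P >= Indicated,
        member(Q, Prices), Q =< Indicated, !.
    bracketing_score(_Prices, _Indicated, 0.5).

    % Blend soft scores into one confidence figure (weights are illustrative).
    report_confidence(BracketingScore, ConsistencyScore, Confidence) :-
        Confidence is 0.6*BracketingScore + 0.4*ConsistencyScore.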

4. Evolutionary Trajectory​

  • Fannie Mae’s direction is clearly hybridization:
    • human appraisers → hybrid appraisals → value-acceptance models
    • narrative → structured → machine-validated
  • Expect continued expansion of data collector roles, UAD 3.0 XML schemas, and explainable AI requirements for automated value acceptance systems.
  • Eventually, we’ll see full integration between USPAP 2026/2028 revisions and GSE digital collateral frameworks — possibly under “trusted model registries.”

5. Overall Assessment​

Dimension | Rating | Comments
Clarity / Consistency | ★★★★☆ | Good for compliance; verbose but structurally logical.
Analytical Depth | ★★☆☆☆ | Leaves key valuation reasoning implicit.
Machine Readiness (NLP/Logic Extraction) | ★★★★★ | UAD/MISMO backbone makes it highly parsable.
Market Adaptability | ★★★☆☆ | Works for homogeneous metros; weak for mixed-use or emerging markets.
Alignment with USPAP Principles | ★★★★☆ | Mostly consistent but less reflective on scope and competency reasoning.
 
ChatGPT: "If you’d like, I can show you how to map a small section (say, B4-1.3-05, Requirements for Comparable Sales) into a structured Prolog rule set — showing which parts are deterministic vs interpretive.

Would you like to see that translation?"

Me: Yes

What we’re encoding (source essentials)​


  • At least three closed comps are required in the sales comparison approach; more may be added for support; the subject’s prior closed sale can be used as a fourth comp; active/contract listings may be supporting data. (Fannie Mae Selling Guide)
  • Report 12-month sales history for each comp and 3-year history for the subject. (Fannie Mae Selling Guide)
  • Adjust for concessions (sales/financing) when they affected the comp’s price, based on market at time of sale. (Fannie Mae Selling Guide)
  • UAD standardizes key fields and definitions (structure for parsing/validation). (Fannie Mae Selling Guide)

Note: Fannie has reorganized B4-1.3 over time; today the “Comparable Sales” topic is B4-1.3-08 and “Adjustments” is B4-1.3-09. (Fannie Mae Selling Guide)



A Prolog sketch of the policy​



%% ---- Core facts (to be supplied by your data pipeline) ----
% comp(CompID, ClosedDate, ContractType, Price, Concessions, MLSVerified).
% subject(SubjectID).
% subject_prior_sale(SubjectID, ClosedDate, Price).
% listing(MLSID, Status, ListPrice). % supporting data only
% hist_subject(SubjectID, YearsBack, Events). % e.g., [sale('2023-06-01', 750000)]
% hist_comp(CompID, MonthsBack, Events). % e.g., [sale('2025-01-10', 720000)]
% similarity(SubjectID, CompID, Score). % 0..1 composite similarity
% distance_ok(SubjectID, CompID). % geo/market reasonableness test
% time_bracketing_ok(SubjectID, SetOfCompIDs). % market-time coverage test
% concession_effect(CompID, MarketAdj). % +/− dollar or percent
% price_adjusted(CompID, AdjPrice). % result after all adjustments
% support_value(RangeLow, RangeHigh, WeightedMean). % from adjusted comps

%% ---- Eligibility and minimum set size ----
closed_comp(CompID) :- comp(CompID, ClosedDate, closed, _Price, _Concs, _MLS), nonvar(ClosedDate).

min_closed_comps_ok(Set) :-
    include(closed_comp, Set, Closed),
    length(Closed, N),
    N >= 3.

%% ---- Allowable supporting data (non-counting) ----
supporting_listing(MLSID) :- listing(MLSID, Status, _), member(Status, [active, pending, contingent]).

%% ---- Comparable candidacy ----
candidate_comp(Subject, Comp) :-
    closed_comp(Comp),
    similarity(Subject, Comp, S), S >= 0.70,   % threshold is tunable by market
    distance_ok(Subject, Comp).

%% ---- History requirements ----
history_ok_subject(Subject) :- hist_subject(Subject, 36, Events), nonvar(Events).
history_ok_comp(Comp) :- hist_comp(Comp, 12, Events), nonvar(Events).

histories_ok(Set) :- maplist(history_ok_comp, Set).

%% ---- Concession adjustment requirement ----
concession_adjusted_price(Comp, AdjPrice) :-
    comp(Comp, _D, closed, Price, _Concs, _MLS),
    concession_effect(Comp, CAdj),             % zero if none/immaterial
    AdjPrice is Price + CAdj.

%% ---- Compute adjusted price (placeholder for full grid) ----
adjusted_price(Comp, AdjPrice) :-
    concession_adjusted_price(Comp, P1),
    other_feature_adjustments(Comp, P2Adj),    % supplied by your RCA/hedonic layer
    AdjPrice is P1 + P2Adj.

%% ---- Sale set sufficiency (content + coverage) ----
sufficient_comp_set(Subject, Set) :-
    min_closed_comps_ok(Set),
    maplist(candidate_comp(Subject), Set),
    histories_ok(Set),
    time_bracketing_ok(Subject, Set).

%% ---- Value reconciliation from comps ----
reconcile_value(Subject, Set, opinion(Value, RangeLow, RangeHigh, WeightedMean)) :-
    sufficient_comp_set(Subject, Set),
    maplist(adjusted_price, Set, Prices),
    % your weighting scheme can use similarity, recency, CU-like heuristics, etc.
    weights_for(Subject, Set, Weights),
    weighted_mean(Prices, Weights, WeightedMean),
    min_list(Prices, RangeLow),
    max_list(Prices, RangeHigh),
    Value is WeightedMean.

%% ---- Use of subject’s prior closed sale as a 4th comp (support) ----
subject_as_supporting_comp(Subject, comp_subject(ClosedDate, Price)) :-
    subject_prior_sale(Subject, ClosedDate, Price).

What’s deterministic vs. interpretive?


  • Deterministic (hard requirements): at least 3 closed comps; report subject 36-mo and comp 12-mo histories; adjust for concessions when they affected price. (Fannie Mae Selling Guide)
  • Interpretive (policy → operational choice): similarity threshold, distance reasonableness, time bracketing criteria, and weighting model; these need market-specific cutoffs (your RCA/CMARS layer can learn/tune them). UAD provides the field vocabulary but not the numeric cutoffs. (Fannie Mae Selling Guide)
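One way to keep those interpretive cutoffs explicit and market-tunable is to park them in facts the statistical layer can overwrite; the cutoff values and the market_of/2 link below are purely illustrative, not from the Guide:

Code:
    % Market-specific tuning facts (illustrative values only).
    similarity_cutoff(urban,    0.75).
    similarity_cutoff(suburban, 0.70).
    similarity_cutoff(rural,    0.60).

    market_of(s1, suburban).   % hypothetical subject-to-market link

    % Market-aware variant of candidate_comp/2 from the sketch above.
    candidate_comp_tuned(Subject, Comp) :-
        closed_comp(Comp),
        market_of(Subject, Market),
        similarity_cutoff(Market, Cutoff),
        similarity(Subject, Comp, S), S >= Cutoff,
        distance_ok(Subject, Comp).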



Tiny worked example​


% Subject and prior sale
subject(s1).
subject_prior_sale(s1, date(2022,05,14), 720000).

% Three closed comps
comp(c1, date(2025,06,01), closed, 760000, concessions([seller_credit(5000)]), true).   % MLSVerified = true
comp(c2, date(2025,04,20), closed, 745000, concessions([]), true).
comp(c3, date(2025,01,10), closed, 730000, concessions([rate_buydown(3000)]), true).

% Required histories
hist_subject(s1, 36, [sale(date(2022,05,14),720000)]).
hist_comp(c1, 12, [sale(date(2025,06,01),760000)]).
hist_comp(c2, 12, [sale(date(2025,04,20),745000)]).
hist_comp(c3, 12, [sale(date(2025,01,10),730000)]).

% Similarity & distance checks (toy scores)
similarity(s1, c1, 0.86). similarity(s1, c2, 0.79). similarity(s1, c3, 0.73).
distance_ok(s1, c1). distance_ok(s1, c2). distance_ok(s1, c3).

% Time bracketing (ensures mix of recent + older if market is shifting)
time_bracketing_ok(_S, [c1,c2,c3]).

% Concession effects (negative values back the concession out of the comp price, toward cash equivalence)
concession_effect(c1, -5000).
concession_effect(c2, 0).
concession_effect(c3, -3000).

% Other grid adjustments from your model
other_feature_adjustments(c1, -8000).
other_feature_adjustments(c2, 2000).
other_feature_adjustments(c3, 6000).

% Weights (example: normalized by similarity * recency)
weights_for(_S, [c1,c2,c3], [0.42, 0.35, 0.23]).
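% NOTE: weighted_mean/3 is not an SWI-Prolog built-in, so the query below will
% fail unless something like this minimal helper is added (it assumes the
% Weights already sum to 1):
weighted_mean(Prices, Weights, Mean) :-
    maplist([P, W, T]>>(T is P*W), Prices, Weights, Terms),
    sum_list(Terms, Mean).
% With the example facts above this yields a weighted mean of about 743,780.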

?- reconcile_value(s1, [c1,c2,c3], Opinion).



Intuition (not printed by Prolog here)


  • c1 adj = 760000 - 5000 - 8000 = 747,000
  • c2 adj = 745000 + 2000 = 747,000
  • c3 adj = 730000 - 3000 + 6000 = 733,000
    Weighted mean ≈ 743.8k with the example weights (0.42/0.35/0.23); range = 733k–747k.
    This satisfies the ≥3 closed comps, sales-history, and concession-adjustment requirements per the Guide. (Fannie Mae Selling Guide)

How you can plug this into RCA​

  • Treat similarity/3, distance_ok/2, other_feature_adjustments/2, and weights_for/3 as adapters to your RCA/CMARS outputs (learned from market data); see the stub sketch after this list.
  • Keep the deterministic guardrails exactly as in the Guide rules; let the statistical layer drive magnitudes and selection within those guardrails.
  • Validate your UAD XML → Prolog fact extraction with the current UAD / UCDP specs. (Fannie Mae Selling Guide)
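As a concrete picture of those adapters, the symbolic layer can simply delegate magnitudes to the learned model; rca_similarity/3 and rca_grid_adjustment/2 below are hypothetical hooks for the RCA/CMARS outputs, not existing predicates:

Code:
    % Adapter stubs: delegate scores and dollar adjustments to the learned
    % RCA/CMARS layer. Both hook predicates are hypothetical placeholders,
    % declared dynamic so calls simply fail until the layer supplies them.
    :- dynamic rca_similarity/3, rca_grid_adjustment/2.

    similarity(Subject, Comp, Score) :-
        rca_similarity(Subject, Comp, Score).

    other_feature_adjustments(Comp, DollarAdj) :-
        rca_grid_adjustment(Comp, DollarAdj).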
If you want, I can wrap this into a tiny SWI-Prolog module with stubs you can call from R/Python, plus a validator that flags when a comp set fails any Fannie hard requirement.
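A rough sketch of what that hard-requirement validator could look like, building on the predicates above; the failure labels are illustrative, not Guide terminology:

Code:
    % Collect every hard-requirement failure for a comp set; [] means it passes.
    hard_requirement_failures(Subject, Set, Failures) :-
        findall(F, hard_failure(Subject, Set, F), Failures).

    hard_failure(_Subject, Set, too_few_closed_comps) :-
        \+ min_closed_comps_ok(Set).
    hard_failure(Subject, _Set, missing_subject_3yr_history) :-
        \+ history_ok_subject(Subject).
    hard_failure(_Subject, Set, missing_comp_12mo_history(Comp)) :-
        member(Comp, Set),
        \+ history_ok_comp(Comp).
    hard_failure(_Subject, Set, no_concession_analysis(Comp)) :-
        member(Comp, Set),
        \+ concession_effect(Comp, _).

With the worked example's facts, hard_requirement_failures(s1, [c1,c2,c3], F) gives F = [].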
 
So, in conclusion, this objective analysis shows:

1. Of course, YES, the GSEs are trying to remove the human factor from appraisal. There is no end in sight.
2. Their latest system is far from perfect.
3. We can infer that the latest change has very little to do with making life easier for the appraiser; it is about more automation.
4. So, they will keep doing the same. Adapt as best you can.
....
 
Why isn't a library of standardized forms available from FNMA or as open source documents available to all users without a fee???

Been asking this since before I even started training as an appraiser
 
The reviewer will be an AI: "Why didn't you use this new construction 5 miles away instead of the similar-condition C4 on the same block?"
 