It does not bother me at all when people criticize UAD 3.6. But if people want to criticize, they should understand the thing they are criticizing. The UAD is just a data spec. It does not dictate any specific user interface that must be used to get to that data spec. Anyone who has tested the various software solutions out there can easily see that the companies have taken widely varied approaches to the user interface. And automated adjustments/ratings have nothing to do with the UAD. That is just a fact, not a defense or "damage control."
It's crap. Really.
I asked Claude: To what extent do you see weaknesses in UAD 3.6 in terms of thoroughness, in terms of making a significant dent in the quality of the appraisal process for residential property?
Claude: "This is a great question, and one where your decades of experience with both the appraisal process and the data engineering side gives you a uniquely informed perspective. Here's my honest assessment of where UAD 3.6 falls short of its ambitions:
The fundamental tension is that UAD 3.6 is primarily a data standardization effort, not a methodology reform effort. It improves the consistency of how data is reported without deeply addressing how conclusions are reached. That's a significant limitation.
Some specific weaknesses I see:
Adjustment support remains shallow. UAD 3.6 collects more granular property characteristics, but it doesn't require or even strongly encourage rigorous quantitative support for adjustments. An appraiser can still pull adjustment amounts from "experience" or rules of thumb with minimal accountability. The spec standardizes the inputs and outputs but largely treats the analytical middle as a black box. This is precisely the gap your RCA methodology addresses: treating adjustments as derived quantities from statistical models rather than subjective estimates.
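To make the contrast concrete, here is a minimal sketch (not the author's RCA code, and written in Python rather than R for illustration) of treating an adjustment as a regression coefficient rather than a rule of thumb. The sales data below is invented.

```python
# Illustrative sketch: a GLA adjustment derived from comparable sales
# by ordinary least squares, instead of pulled from "experience".
# All figures are made-up example data.
import numpy as np

# Comparable sales: gross living area (sq ft) and sale price ($)
gla   = np.array([1400, 1550, 1700, 1850, 2000, 2150], dtype=float)
price = np.array([265000, 281000, 301000, 318000, 336000, 355000], dtype=float)

# Fit price = b0 + b1 * gla
X = np.column_stack([np.ones_like(gla), gla])
b0, b1 = np.linalg.lstsq(X, price, rcond=None)[0]

# b1 is the market-derived dollars-per-square-foot adjustment,
# with a documented, reproducible basis.
print(f"GLA adjustment: ${b1:.0f} per sq ft")
```

The same idea scales to multiple regression (or MARS, as in the author's earth-based workflow), where each coefficient is an adjustment candidate with a standard error attached.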
Comparability analysis is underspecified. The criteria for comparable selection remain largely qualitative. UAD 3.6 doesn't define quantitative thresholds for what makes a comp "comparable" in terms of distance, time, or feature similarity. Two appraisers can select radically different comp sets for the same subject and both comply fully.
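The kind of explicit screen the spec lacks is easy to state in code. This is a hedged sketch of one possible set of quantitative comp criteria; the thresholds are illustrative, not from UAD 3.6 or any GSE guideline.

```python
# Hypothetical quantitative comp screen: distance, sale recency,
# and size similarity made explicit instead of left to judgment.
from dataclasses import dataclass
from datetime import date

@dataclass
class Sale:
    sale_date: date
    distance_miles: float   # distance from the subject property
    gla: float              # gross living area, sq ft

def is_comparable(subject_gla: float, comp: Sale, as_of: date,
                  max_miles: float = 1.0, max_age_days: int = 365,
                  max_gla_pct: float = 0.20) -> bool:
    """True if the sale passes the distance/time/size thresholds."""
    age = (as_of - comp.sale_date).days
    gla_diff = abs(comp.gla - subject_gla) / subject_gla
    return (comp.distance_miles <= max_miles
            and 0 <= age <= max_age_days
            and gla_diff <= max_gla_pct)

# A sale 0.4 mi away, ~3 months old, within 10% of subject GLA
comp = Sale(date(2025, 3, 1), 0.4, 1980)
print(is_comparable(1800, comp, date(2025, 6, 1)))  # True
```

With criteria written down like this, two appraisers given the same data would at least start from the same candidate comp pool.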
Spatial and temporal dimensions are primitive. There's no real framework for modeling market trends over time or spatial value gradients across neighborhoods. Location is captured but not in a way that supports the kind of spatial smoothing or clustering you're doing with PostGIS and rgeoda. Neighborhood delineation remains largely subjective.
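The spatial side (PostGIS, rgeoda) is beyond a short example, but the temporal side can be sketched: fit a market trend from sales and index each comp to the effective date. Again a Python illustration with invented data, not the author's workflow.

```python
# Illustrative market-conditions (time) adjustment: fit a monthly
# log-price trend, then index a comp sale to the effective date.
# Data and trend rate are invented for the example.
import numpy as np

# Months before the effective date at which each comp sold, and price
months_ago = np.array([6, 5, 4, 3, 2, 1], dtype=float)
prices     = np.array([300000, 301500, 303000, 304500, 306100, 307600],
                      dtype=float)

# Fit log(price) = a - r * months_ago, so r is the monthly growth rate
A = np.column_stack([np.ones_like(months_ago), -months_ago])
a, r = np.linalg.lstsq(A, np.log(prices), rcond=None)[0]

# Index a comp that sold 4 months ago forward to the effective date
adjusted = prices[2] * np.exp(r * months_ago[2])
print(f"monthly rate ~{r * 100:.2f}%, time-adjusted price ${adjusted:,.0f}")
```

Nothing in UAD 3.6 requires (or even structurally accommodates) this kind of derived time adjustment; it lives entirely outside the spec.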
Condition and quality ratings (C1–C6, Q1–Q6) are still ordinal buckets that compress a lot of continuous variation into coarse categories. Two C3 properties can differ meaningfully, and the spec doesn't capture that nuance well.
The spec doesn't address model transparency or reproducibility. There's no requirement that an appraisal's analytical process be reproducible by another competent appraiser given the same data — which is arguably a core scientific standard that USPAP gestures toward but UAD 3.6 doesn't operationalize.
Highest and best use analysis remains formulaic. The four-test framework is captured as structured fields, but the actual economic analysis behind it is still largely narrative and uncheckable.
What UAD 3.6 does do well is reduce the "garbage in" problem — standardized enumerations, better validation, cleaner data pipelines. That's genuinely valuable for downstream analytics and for aggregate quality monitoring by the GSEs. But it's solving the data hygiene problem, not the analytical rigor problem.
The deeper issue is arguably structural: the GSEs need appraisals to flow through a standardized pipeline at scale, which pushes toward simplification. Meanwhile, defensible valuation — especially the kind that holds up in litigation — demands the sort of methodological depth that resists easy standardization. Your approach of encoding the compliance layer in Prolog while keeping the statistical methodology in R and earth/MARS is a pragmatic way to bridge that gap, but it's a solution that lives outside what UAD 3.6 itself provides."
The GSEs have always had their heads stuck in the swamp, the mud, and of course the sand.
[Note: For some reason, I was asked to join a Texas-based appraiser forum, supposedly to discuss statistics. Maybe that will come. But in last week's meeting, they took off on a tangent about UAD 3.6. Good God! I am out of there!! Well, I couldn't really decipher what they were or are up to. Maybe they are just laying down a battlefield front. Who knows? One must be cautious! I think there are too many "players" in the typical GSE business opportunities. The usual. The usual. No breakthroughs. Just more of the same ol', same ol' BS.]