ChatGPT: "If you’d like, I can show you how to map a small section (say,
B4-1.3-05, Requirements for Comparable Sales) into a structured
Prolog rule set — showing which parts are deterministic vs interpretive.
Would you like to see that translation?"
Me: Yes
What we’re encoding (source essentials)
- At least three closed comps are required in the sales comparison approach; more may be added for support; the subject's prior closed sale can be used as a fourth comp; active and under-contract listings may serve as supporting data. (Fannie Mae Selling Guide)
- Report a 12-month sales history for each comp and a 3-year history for the subject. (Fannie Mae Selling Guide)
- Adjust for sales/financing concessions when they affected the comp's price, based on the market at the time of sale. (Fannie Mae Selling Guide)
- UAD standardizes key fields and definitions, providing structure for parsing and validation. (Fannie Mae Selling Guide)
Note: Fannie Mae has reorganized B4-1.3 over time; today the "Comparable Sales" topic is B4-1.3-08 and "Adjustments" is B4-1.3-09. (Fannie Mae Selling Guide)
A Prolog sketch of the policy
%% ---- Core facts (to be supplied by your data pipeline) ----
% comp(CompID, ClosedDate, ContractType, Price, Concessions, MLSVerified).
% subject(SubjectID).
% subject_prior_sale(SubjectID, ClosedDate, Price).
% listing(MLSID, Status, ListPrice). % supporting data only
% hist_subject(SubjectID, YearsBack, Events). % e.g., [sale('2023-06-01', 750000)]
% hist_comp(CompID, MonthsBack, Events). % e.g., [sale('2025-01-10', 720000)]
% similarity(SubjectID, CompID, Score). % 0..1 composite similarity
% distance_ok(SubjectID, CompID). % geo/market reasonableness test
% time_bracketing_ok(SubjectID, SetOfCompIDs). % market-time coverage test
% concession_effect(CompID, MarketAdj). % +/− dollar or percent
% price_adjusted(CompID, AdjPrice). % result after all adjustments
% support_value(RangeLow, RangeHigh, WeightedMean). % from adjusted comps
%% ---- Eligibility and minimum set size ----
closed_comp(CompID) :-
    comp(CompID, ClosedDate, closed, _Price, _Concs, _MLS),
    nonvar(ClosedDate).                 % require a bound closing date
min_closed_comps_ok(Set) :-
    include(closed_comp, Set, Closed),  % keep only closed sales
    length(Closed, N),
    N >= 3.                             % Guide minimum: three closed comps
%% ---- Allowable supporting data (non-counting) ----
supporting_listing(MLSID) :- listing(MLSID, Status, _), member(Status, [active, pending, contingent]).
%% ---- Comparable candidacy ----
candidate_comp(Subject, Comp) :-
    closed_comp(Comp),
    similarity(Subject, Comp, S),
    S >= 0.70,                          % threshold is tunable by market
    distance_ok(Subject, Comp).
%% ---- History requirements ----
history_ok_subject(Subject) :- hist_subject(Subject, 36, _Events).  % 3-year subject history reported
history_ok_comp(Comp) :- hist_comp(Comp, 12, _Events).              % 12-month comp history reported
histories_ok(Set) :- maplist(history_ok_comp, Set).
%% ---- Concession adjustment requirement ----
concession_adjusted_price(Comp, AdjPrice) :-
    comp(Comp, _Date, closed, Price, _Concs, _MLS),
    concession_effect(Comp, CAdj),      % zero if none/immaterial
    AdjPrice is Price + CAdj.
%% ---- Compute adjusted price (placeholder for full grid) ----
adjusted_price(Comp, AdjPrice) :-
    concession_adjusted_price(Comp, P1),
    other_feature_adjustments(Comp, P2Adj),  % supplied by your RCA/hedonic layer
    AdjPrice is P1 + P2Adj.
%% ---- Sale set sufficiency (content + coverage) ----
sufficient_comp_set(Subject, Set) :-
    min_closed_comps_ok(Set),
    maplist(candidate_comp(Subject), Set),
    history_ok_subject(Subject),        % 3-year subject history requirement
    histories_ok(Set),
    time_bracketing_ok(Subject, Set).
%% ---- Value reconciliation from comps ----
reconcile_value(Subject, Set, opinion(Value, RangeLow, RangeHigh, WeightedMean)) :-
    sufficient_comp_set(Subject, Set),
    maplist(adjusted_price, Set, Prices),
    % your weighting scheme can use similarity, recency, CU-like heuristics, etc.
    weights_for(Subject, Set, Weights),
    weighted_mean(Prices, Weights, WeightedMean),
    min_list(Prices, RangeLow),
    max_list(Prices, RangeHigh),
    Value = WeightedMean.

% Weighted mean; divides by the weight sum so weights need not be normalized.
weighted_mean(Values, Weights, Mean) :-
    maplist([V,W,P]>>(P is V*W), Values, Weights, Products),
    sum_list(Products, SP),
    sum_list(Weights, SW),
    Mean is SP / SW.
%% ---- Use of subject’s prior closed sale as a 4th comp (support) ----
subject_as_supporting_comp(Subject, comp_subject(ClosedDate, Price)) :-
    subject_prior_sale(Subject, ClosedDate, Price).
What’s deterministic vs. interpretive?
- Deterministic (hard requirements): at least three closed comps; report the subject's 36-month and each comp's 12-month history; adjust for concessions when they affected price. (Fannie Mae Selling Guide)
- Interpretive (policy → operational choice): the similarity threshold, distance reasonableness, time-bracketing criteria, and weighting model all need market-specific cutoffs (your RCA/CMARS layer can learn or tune them). UAD provides the field vocabulary but not the numeric cutoffs. (Fannie Mae Selling Guide)
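The deterministic guardrails above can also be checked outside Prolog. Here is a minimal Python sketch with a hypothetical Comp record; the field names are illustrative, not Fannie Mae's or UAD's schema:

```python
from dataclasses import dataclass

@dataclass
class Comp:
    comp_id: str
    status: str              # "closed", "active", "pending", ...
    price: int
    concession_effect: int   # 0 if none/immaterial
    months_history: int      # months of reported sales history

def hard_requirement_failures(comps, subject_history_months):
    """Flag violations of the deterministic Guide requirements only;
    interpretive choices (similarity, distance, weights) live elsewhere."""
    failures = []
    closed = [c for c in comps if c.status == "closed"]
    if len(closed) < 3:
        failures.append("fewer than 3 closed comps")
    if subject_history_months < 36:
        failures.append("subject sales history covers < 36 months")
    failures.extend(f"{c.comp_id}: history < 12 months"
                    for c in closed if c.months_history < 12)
    return failures
```

A comp set either passes all hard checks (empty list) or returns a human-readable list of violations; the interpretive layer never needs to run if this gate fails.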
Tiny worked example
% Subject and prior sale
subject(s1).
subject_prior_sale(s1, date(2022,05,14), 720000).
% Three closed comps
comp(c1, date(2025,06,01), closed, 760000, concessions([seller_credit(5000)]), true).  % final arg: MLSVerified flag
comp(c2, date(2025,04,20), closed, 745000, concessions([]), true).
comp(c3, date(2025,01,10), closed, 730000, concessions([rate_buydown(3000)]), true).
% Required histories
hist_subject(s1, 36, [sale(date(2022,05,14),720000)]).
hist_comp(c1, 12, [sale(date(2025,06,01),760000)]).
hist_comp(c2, 12, [sale(date(2025,04,20),745000)]).
hist_comp(c3, 12, [sale(date(2025,01,10),730000)]).
% Similarity & distance checks (toy scores)
similarity(s1, c1, 0.86). similarity(s1, c2, 0.79). similarity(s1, c3, 0.73).
distance_ok(s1, c1). distance_ok(s1, c2). distance_ok(s1, c3).
% Time bracketing (ensures mix of recent + older if market is shifting)
time_bracketing_ok(_S, [c1,c2,c3]).
% Concession effects (negative adjustments strip the concession's value, yielding a cash-equivalent price)
concession_effect(c1, -5000).
concession_effect(c2, 0).
concession_effect(c3, -3000).
% Other grid adjustments from your model
other_feature_adjustments(c1, -8000).
other_feature_adjustments(c2, 2000).
other_feature_adjustments(c3, 6000).
% Weights (example: normalized by similarity * recency)
weights_for(_S, [c1,c2,c3], [0.42, 0.35, 0.23]).
?- reconcile_value(s1, [c1,c2,c3], Opinion).
Intuition (not printed by Prolog here)
- c1 adj = 760,000 − 5,000 − 8,000 = 747,000
- c2 adj = 745,000 + 2,000 = 747,000
- c3 adj = 730,000 − 3,000 + 6,000 = 733,000
Weighted mean = 0.42 × 747,000 + 0.35 × 747,000 + 0.23 × 733,000 = 743,780; range = 733,000–747,000.
This satisfies the ≥3-closed-comps, history, and concession-adjustment requirements in the Guide. (Fannie Mae Selling Guide)
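A quick Python check of the arithmetic above, restating only the example numbers (nothing Guide-specific):

```python
# Adjusted price per comp: sale price + concession effect + other grid adjustments
prices = {
    "c1": 760_000 - 5_000 - 8_000,   # 747,000
    "c2": 745_000 + 0 + 2_000,       # 747,000
    "c3": 730_000 - 3_000 + 6_000,   # 733,000
}
weights = {"c1": 0.42, "c2": 0.35, "c3": 0.23}

weighted_mean = sum(prices[c] * weights[c] for c in prices)  # 743,780
low, high = min(prices.values()), max(prices.values())       # 733,000 .. 747,000
```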
How you can plug this into RCA
- Treat similarity/3, distance_ok/2, other_feature_adjustments/2, and weights_for/3 as adapters to your RCA/CMARS outputs (learned from market data).
- Keep the deterministic guardrails exactly as in the Guide rules; let the statistical layer drive magnitudes and selection within those guardrails.
- Validate your UAD XML → Prolog fact extraction against the current UAD/UCDP specifications. (Fannie Mae Selling Guide)
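One way the adapter boundary might look from the statistical side. The SimilarityModel/WeightModel protocols and the prolog_facts helper below are hypothetical names, sketching how learned-model outputs could be rendered as facts for the rule set to consume:

```python
from typing import Protocol, Sequence

class SimilarityModel(Protocol):
    """Learned similarity scorer backing similarity/3."""
    def score(self, subject_id: str, comp_id: str) -> float: ...

class WeightModel(Protocol):
    """Learned weighting backing weights_for/3."""
    def weights(self, subject_id: str, comp_ids: Sequence[str]) -> Sequence[float]: ...

def prolog_facts(subject_id: str, comp_ids: Sequence[str],
                 sim: SimilarityModel, wm: WeightModel) -> list:
    """Render model outputs as Prolog facts the deterministic rules can load."""
    facts = [f"similarity({subject_id}, {c}, {sim.score(subject_id, c):.2f})."
             for c in comp_ids]
    ws = wm.weights(subject_id, comp_ids)
    facts.append(f"weights_for(_, [{', '.join(comp_ids)}], "
                 f"[{', '.join(f'{w:.2f}' for w in ws)}]).")
    return facts
```

The Prolog side stays fixed (the guardrails); swapping in a better similarity or weighting model only changes the generated facts, not the rules.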
If you want, I can wrap this into a tiny SWI-Prolog module with stubs you can call from R/Python, plus a validator that flags when a comp set fails any Fannie hard requirement.