Data qualification is where it's at, regardless of which valuation model you use. The 300-unit subdivision scenario with few variations among the individual data points simplifies that task to the point that any valuation model will work. It's so simple that even a monkey with a dartboard would work.
Conversely, using raw, unqualified data complicates the task, and using larger quantities of it is only incrementally better.
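Just to put rough numbers on that (everything below is hypothetical), here's a quick sketch: when the sales are that uniform, a bare median of recent sales and a simple price-per-square-foot calc land within a fraction of a percent of each other, so the choice of model barely matters.

```python
# Hypothetical subdivision sales: (sale_price, gross_living_area)
# for a handful of nearly identical homes.
subdivision_sales = [
    (412_000, 1_650),
    (415_500, 1_660),
    (409_900, 1_640),
    (418_000, 1_675),
    (414_250, 1_655),
]
subject_gla = 1_650

# Model 1: median sale price, no adjustments at all.
prices = sorted(p for p, _ in subdivision_sales)
median_value = prices[len(prices) // 2]

# Model 2: average price per square foot applied to the subject.
avg_ppsf = sum(p / gla for p, gla in subdivision_sales) / len(subdivision_sales)
ppsf_value = avg_ppsf * subject_gla

print(f"Median-of-sales indication:  ${median_value:,.0f}")
print(f"Price-per-sq-ft indication:  ${ppsf_value:,.0f}")
# With data this uniform, both indications come out within roughly half
# a percent of each other.
```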
We already have a ton of experience using an appraisal valuation model (which is what the Fannie grids are), and that model requires the appraiser to spend a considerable amount of time and effort to qualify the data they're using. If you broke down the "appraisal development" time by activity, the analyses/comparisons would probably amount to less than 5% of the total; the other 95% is split between data identification and qualification. THAT's where most of the appraising occurs, not in the last 5 minutes when we're refining the value range indicated by the most direct comparables we're presenting. Even if we made zero adjustments to those comparables, we could still emulate what buyers and sellers are doing in real life, because that's essentially how they operate.
So really, what the quants are trying to do is to refine the adding machine part of the analysis, which IRL is usually the fastest and simplest step in our process.
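Here's a minimal sketch of that adding-machine step on a Fannie-style grid, with made-up comps and adjustment figures; the hard part (deciding which sales qualify as comps and what the adjustments should be) is assumed to have already been done.

```python
# Hypothetical comps: sale price plus line-item adjustments
# (positive = comp is inferior to the subject, negative = superior).
comps = [
    {"sale_price": 435_000, "adjustments": {"gla": -6_000, "garage": 5_000, "condition": 0}},
    {"sale_price": 420_000, "adjustments": {"gla": 4_000, "garage": 0, "condition": 7_500}},
    {"sale_price": 441_000, "adjustments": {"gla": -8_000, "garage": 0, "condition": -5_000}},
]

# The "adding machine" step: total the adjustments and apply them.
adjusted = [c["sale_price"] + sum(c["adjustments"].values()) for c in comps]

print("Adjusted indications:", [f"${v:,.0f}" for v in adjusted])
print(f"Indicated range: ${min(adjusted):,.0f} - ${max(adjusted):,.0f}")
# The arithmetic takes seconds; all the appraising that made these three
# sales usable as comps happened before this point.
```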
It doesn't matter how much time/effort you spend refining the manner in which you develop and apply line-item adjustments, because all that happens with a less-than-optimal combination of adjustments is that the appraiser ends up doing more qualitative analysis, not less, in their final reconciliation.
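A sketch with hypothetical figures shows what that looks like in practice: a rougher adjustment set doesn't change the answer so much as it widens the adjusted range, which pushes more of the work into qualitative weighting at reconciliation.

```python
# Same three hypothetical sale prices run through two adjustment sets:
# one calibrated to the comps' actual differences, one generic/rough.
sale_prices = [435_000, 420_000, 441_000]
calibrated = [-1_000, 11_500, -13_000]
rough = [5_000, 2_000, -2_000]

def adjusted_range(prices, adjustments):
    adjusted = [p + a for p, a in zip(prices, adjustments)]
    return min(adjusted), max(adjusted), max(adjusted) - min(adjusted)

for label, adj in [("calibrated", calibrated), ("rough", rough)]:
    lo, hi, width = adjusted_range(sale_prices, adj)
    print(f"{label:>10}: ${lo:,.0f} - ${hi:,.0f}  (spread ${width:,.0f})")
# The wider spread from the rough set doesn't break the appraisal; it just
# means the final reconciliation leans more on qualitative judgment about
# which comps deserve the most weight.
```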
ZAIO had a plan where the appraiser nominally qualified all their data prior to using any of it in their AVM. The fundamental concept is certainly valid, but the manner in which they were doing it was inefficient. If they had limited their pre-qualification protocols to the properties being listed in the MLS, they wouldn't have wasted so much time/effort.
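For illustration only (this is not ZAIO's actual workflow, and the record fields are hypothetical), pre-qualification limited to MLS-listed sales might look something like this: only records that can be matched to an MLS listing and confirmed as arm's-length get handed to the AVM.

```python
# Hypothetical county/MLS records. Only sales that can be verified against
# an MLS listing and confirmed as arm's-length pass pre-qualification.
raw_records = [
    {"apn": "123-456-01", "sale_price": 455_000, "mls_id": "SR24-10181", "arms_length": True},
    {"apn": "123-456-02", "sale_price": 310_000, "mls_id": None,         "arms_length": True},   # off-market transfer
    {"apn": "123-456-03", "sale_price": 462_500, "mls_id": "SR24-10544", "arms_length": False},  # intra-family sale
    {"apn": "123-456-04", "sale_price": 449_900, "mls_id": "SR24-10602", "arms_length": True},
]

qualified = [r for r in raw_records if r["mls_id"] is not None and r["arms_length"]]

print(f"{len(qualified)} of {len(raw_records)} records pass pre-qualification")
# Only the pre-qualified sales would be fed to the AVM; everything else
# stays in the pile of raw, unverified data.
```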