Yes. I said that already. Experience in itself is certainly not viable as the foundation for a fact-driven analysis. (But nothing replaces experience either, whether we are talking about life-altering surgery or professional analysis opinions from somebody who has "been there and done that" for years.)
Experience "should" also recognize corrupt data, which abounds today. You've not been a Tennessee appraiser in decades, so you've not had reason to review how questionable some of the data is that spews from our MLS. The same data used in groupings to support many adjustments and is sent to the Billow and Fluff sites used by everybody, including the GSE's. Data that's often distorted by information we see all the time individually. GLA is often wrong - Concessions are not reported - Last minute buyer agent commissions added to the price at the 11th hour - Again, these things can be caught individually, but not in bulk data and it distorts many things.
"Corrupt Data?" You have to have a pretty deep understanding of the data and statistics, including plenty of experience in building models from domain data to really understand what you are talking about.
1. First, what exactly do you mean by "Corrupt Data?"
a) Do you mean data that someone has deliberately modified to effect a certain outcome? That can happen, of course. We might expect that from data provided by a real estate agent.
b) Or, do you mean data that is in error due to some clerical mistake?
c) Or data that is in error through systemic failures in communication (updating) between different government agencies (e.g. owner/builder changes to construction plans not communicated to the official recorder)?
d) Or some combination of the above, or some other source of error or data inconsistency?
2. Are these errors you refer to random, systemic, or intentional changes?
3. The point is that random errors in large amounts of data usually cancel each other out and do not really impact the regression or statistical averages. We can address this issue by increasing, if possible, the amount of data analyzed: going further back in time, expanding the geographical scope, or looking at additional MLS sources if they exist. (A small simulation following this list illustrates the point.)
4. Systemic errors can be at least partially neutralized by understanding and removing them from the final set of comps (we probably won't have the resources to clean them from all the data), or by discovering whether they cancel each other out (as is sometimes the case). My example is homeowners requesting changes to builder plans, such as removing second-floor area to create vaulted ceilings or extending the living area (e.g., adding overhangs), where those changes never get updated by the official recorder. One reduces GLA and the other increases it. Gaining that kind of experience requires having worked in a given market area for a significant period of time while conducting real statistical analysis.
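To make point 3 concrete, here is a minimal sketch in Python using entirely synthetic sales; the $180-per-square-foot market, sample size, and error magnitudes are illustrative assumptions, not real MLS figures. It shows zero-mean random errors leaving the fitted price per square foot essentially unchanged in a large sample, while a one-sided systematic error does not cancel:

```python
# Illustrative only: synthetic sales, not real MLS data. Shows point 3 above:
# zero-mean random errors in recorded prices barely move a large-sample
# price-vs-GLA regression, whereas a one-sided (systematic) error would.
import numpy as np

rng = np.random.default_rng(0)
n = 2000                                                  # hypothetical sale count

gla = rng.uniform(1200, 3500, n)                          # living area, sq ft
price = 150_000 + 180 * gla + rng.normal(0, 25_000, n)    # true market: ~$180/sq ft

# Random clerical noise in the recorded prices (some high, some low).
price_random_err = price + rng.normal(0, 15_000, n)

# Systematic error: unreported concessions overstate every price by ~3%.
price_systematic_err = price * 1.03

slope_clean, _ = np.polyfit(gla, price, 1)
slope_random, _ = np.polyfit(gla, price_random_err, 1)
slope_system, _ = np.polyfit(gla, price_systematic_err, 1)

print(f"$/sq ft, clean data:        {slope_clean:6.1f}")
print(f"$/sq ft, random errors:     {slope_random:6.1f}")   # stays close
print(f"$/sq ft, systematic errors: {slope_system:6.1f}")   # shifted ~3%
```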
So, in conclusion, "I" would say that experience should include performing accurate statistical analysis using MARS - and getting good R2 values from cross-validation, using separate training, validation, and possibly test data sets to avoid overfitting. The feedback from model building is where you acquire a real understanding of the vast majority of data errors.
Any appraiser who does not understand overfitting can be assumed to lack knowledge of advanced statistics (MARS) and, therefore, probably lacks complete competence as an appraiser for the given market area.
CAVEAT: My experience is limited to California.