Welcome to AppraisersForum.com, the premier online community for the discussion of real estate appraisal.

How Many Have Figured Out

Status
Not open for further replies.
Price transactions in real estate markets involve buyers, sellers, agents, and appraisers, or some combination of these parties. This accounts for the price dispersion observed among transactions of model matches.

These parties to the transaction are the biggest variable of all and constitute a market imperfection. It is assumed in all transactions that there is a competitive market, with buyer and seller equally motivated and equally knowledgeable of all the facts in the market.

Participants in real estate markets often have incomplete information about the attributes of the purchase, and decisions to buy and sell must often be made based on this partial knowledge. Real estate markets are not homogeneous; they are heterogeneous. Transactions are decentralized, and market prices are the outcome of pairwise negotiations. Completed transactions are not reported in a timely fashion, so models lack complete current data and carry incomplete information on parametric as well as non-parametric variables.

R2 is not the be-all and end-all of value determination.

There is something to be said for human judgement and something to be said for statistical tools and computers. A computer can integrate dozens and dozens of factors in a way that no person can; yet it cannot walk into a house, look around, and take in the entire surroundings, the aesthetics, the quality, walk from room to room and blend everything into one whole, blending in the outside environment and neighborhood.

Yet, of the many influences on price, several are really beyond human capability, such as:
1. The influence of interest rates, with time lags. Interest rates have a big impact on what a person can pay, and those time lags have to be accounted for.
2. The integration of a number of factors that go into predicting where interest rates are headed.
3. Innumerable market conditions.
4. Good estimates of how prices have changed in the past year or two to make accurate adjustments.
5. Good estimates of adjustments for many of the quantitative variables such as GLA.
6. Good estimates of adjustments between comps for qualitative variables, such as whether the home has a water well or is connected to the city water supply, or whether it has a view of the ocean, marina, harbor, .....
7. Interactions between variables, confounding and collinear relationships.
8. Other.

Yes, I'm familiar with and acknowledge the "noise" issues caused by lack of competence and honesty among sellers and buyers. But my experience is that most of the variance in prices that cannot be explained by standard variables seems to be pretty well explained by those variables subject to subjective judgement:

The noise issues cancel each other out in regression. That is to say, they are not related to other factors; they don't affect the equations. The noise gets captured in the residuals of the initial model. So the residuals are measures of the value contribution of the variables not captured by the initial model, i.e., the subjective stuff like quality of construction and condition, plus noise.

If you can somehow measure the "noise", fine, eliminate it. If you can't, it gets folded into the score for the quality of a home. Most of this problem can be eliminated by removing probate, short, REO sales and the like. You might even eliminate sales from certain agents you know are crooks, because that data is in the MLS. You can clean the data to no end. What you will find, however, at least in the market up here, is that real issues far outweigh the noise in determining sales price. We can largely disregard the noise and assume it is going to cost us about 1% in accuracy, +/-.
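A minimal sketch of the claim that omitted subjective factors, plus noise, land in the residuals of the initial model. All numbers here are invented for illustration; the variable names (GLA, a 1-5 quality score) are assumptions, not anyone's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sales: price driven by GLA plus a subjective quality effect
# (not in the model) plus idiosyncratic transaction "noise".
n = 200
gla = rng.uniform(1000, 3000, n)       # gross living area, sq ft
quality = rng.integers(1, 6, n)        # unmodeled 1-5 quality score
noise = rng.normal(0, 5000, n)         # transaction noise
price = 50_000 + 120 * gla + 15_000 * quality + noise

# Initial model uses GLA only; quality and noise fall into the residuals.
X = np.column_stack([np.ones(n), gla])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
residuals = price - X @ beta

# The residuals correlate strongly with the omitted quality variable,
# while the pure noise averages out (mean residual is essentially zero).
corr = np.corrcoef(residuals, quality)[0, 1]
print(f"corr(residuals, quality) = {corr:.2f}")
print(f"mean residual            = {residuals.mean():.6f}")
```

The point of the sketch: the systematic part of the residual (quality) is recoverable, while the noise contributes only scatter.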
 
Fundamentally, appraisers, among others, suffer from tunnel vision. A guy picks up a rifle to do some shooting practice. He misses the target every time. Someone else can say, "You're sure not very accurate with a rifle." That kind of implies most other people would be more accurate shooting the same target, and we might presume the target is 50 yards away or thereabouts. If the target is 1,000 yards away, you might say, "Your accuracy at 1,000 yards is not very good." In the latter case, there really is no such implication; not many people can shoot that accurately at 1,000 yards. Of course, that depends on the size of the target. So you can qualify the whole thing: "When it comes to shooting a 2x2-foot target at 1,000 yards, your accuracy is terrible." So, in this case, what constitutes accuracy? Certainly you can answer that question yourself. So, that boat thing is kind of like a target the size of a 2x2-foot target 1,000 yards off. Whatever value you come up with, within a given exposure time, if it means anything at all, you wouldn't want to bet your life on it.

That is to say: "My opinion of value on those two boats is $5,000 with an exposure time of 100 years." You could bet your life on it for sure. But the statement is meaningless. So now the question is: what constitutes "meaningless"? Someone in the crowd yells: "When it is from an appraiser who thinks buyers make their decisions the same way appraisers do!" And this is particularly poignant when you consider that most appraisers, at least those in California, are too poor to buy a house and have never been through the process. .... Sigh

And now that brings up the next question: What percentage of residential appraisers in California OWN homes or condos? Now that is a good exercise in measurement. How would you go about figuring this out? I would suggest using statistics to save yourself some time. How many observations do you need for a good estimate? Say you decide all you need is 30 random observations. You decide to choose 30 Certified Residential Appraisers at random and figure out if they own homes or condos. Is there a reliable way to do that?
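For the sample-size question, one standard back-of-the-envelope check is the margin of error for a proportion estimate. This is generic survey math, not specific to appraisers; the 95% z-value and the worst-case p = 0.5 are conventional assumptions:

```python
import math

# Rough margin of error for a proportion from a simple random sample,
# at 95% confidence, using the worst case p = 0.5.
def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

# n = 30 is the hypothetical sample size floated in the post above.
for n in (30, 100, 400):
    print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")
```

With only 30 random appraisers, the estimate carries a margin of error near +/- 18 percentage points; getting it down to roughly +/- 5 points takes about 400 observations.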

My rough guess, for what it is worth, is that in California somewhere over 50% (probably much higher) of commercial appraisers who have been in the business for over 10 years own homes or condos, and 20-30% of certified residential appraisers in the business for over 15 years own homes or condos. It would be interesting to see how far off I am.
Why in the world would you quote my inquiry about the accuracy of an opinion, and then write all that? (Which does not address the inquiry)

Long range precision shooting is a hobby - so I am happy to discuss all the variables that go into successful engagement at 1,000+ yards. I just don’t see how that has anything to do with the accuracy of an opinion. Cheers
 
Why in the world would you quote my inquiry about the accuracy of an opinion, and then write all that? (Which does not address the inquiry)

Long range precision shooting is a hobby - so I am happy to discuss all the variables that go into successful engagement at 1,000+ yards. I just don’t see how that has anything to do with the accuracy of an opinion. Cheers

You asked "What constitutes an accurate opinion"; that is what I addressed, with the same accuracy as your question.
 
You asked "What constitutes an accurate opinion"; that is what I addressed, with the same accuracy as your question.
With respect, you failed to address my inquiry at all, and seem to have missed (or ignored, by choice) the key point. Rather, you took a detour to unrelated (though interesting) topics. Now, had you addressed my opinion as to whether or not I could successfully engage a target at 1,000 yards, that would have relevance.

Impacts on targets are data points whose accuracy and precision can both be measured. Value, on the other hand, is an economic principle. It is always an opinion and never a fact. So, discussion of the “accuracy” of a value opinion is somewhat meaningless. That is why USPAP intentionally uses credibility, not accuracy, as the measuring stick.

Years ago, when the ink on my diploma was still fresh, and my head was full of all those courses that led to my math degree, I too thought in terms of the “accuracy” of my valuations. Thankfully, I overcame that mindset.
 
A computer can integrate dozens and dozens of factors in a way that no person can;

And yet a computer is dependent on humans not to put garbage in (non-vetted data). Why is that?

The noise issues cancel each other out in regression.

The assumption is that all other variable not under test are equally represented in all variables under test which is not true. The difference between the observed value of the dependent variable and the predicted value is called the residual. Residuals are not all equal, typically, but the sum of all residuals equals zero. Unfortunately, a high R2 value does not guarantee that the model fits the data well. Use of a model that does not fit the data well cannot provide good answers to the underlying engineering or scientific questions under investigation.

Numerical methods for model validation, such as the R2 statistic, are also useful, but usually to a lesser degree than graphical methods. Graphical methods have an advantage over numerical methods for model validation because they readily illustrate a broad range of complex aspects of the relationship between the model and the data. Numerical methods for model validation tend to be narrowly focused on a particular aspect of the relationship between the model and the data and often try to compress that information into a single descriptive number or test result.
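As an aside, the "high R2 does not guarantee good fit" point is easy to demonstrate: a straight line fitted to strongly curved data can post an R2 above 0.95 while the residual plot shows a systematic pattern. A toy sketch, with made-up data:

```python
import numpy as np

# Strongly curved data: y grows quadratically with x.
x = np.linspace(1, 10, 50)
y = x ** 2

# A straight-line fit still posts a high R2...
coef = np.polyfit(x, y, 1)
pred = np.polyval(coef, x)
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R2 = {r2:.3f}")

# ...but the residuals form a systematic U-shape: positive at both ends,
# negative in the middle -- exactly the pattern a residual plot reveals
# instantly and a single R2 number hides.
resid = y - pred
print(resid[0] > 0, resid[25] < 0, resid[-1] > 0)
```

This is the graphical-versus-numerical point in miniature: the single number looks fine; the residual pattern shows the model form is wrong.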
 
And yet a computer is dependent on humans to not put garbage in (non-vetted data), why?



The assumption is that all other variable not under test are equally represented in all variables under test which is not true.
I think you have to rephrase that. It doesn't make any sense.

The difference between the observed value of the dependent variable and the predicted value is called the residual. Residuals are not all equal, typically, but the sum of all residuals equals zero.
Yep.

Unfortunately, a high R2 value does not guarantee that the model fits the data well.
Hmmm. I'm guessing you read something like this somewhere and are repeating it. It probably comes from a discussion of parametric regression on a population represented by sample data and is talking about whether the model produced from the sample accurately represents the model for the population in an unbiased way.

We are in the non-parametric world. And it is very possible that your sample is the same as your population. With non-parametric regression, a model is created that explains the variation in your data, or your population. On each run it has a random starting point and juggles things around until it can no longer make a noticeable improvement to the model. The R2 value is the percentage of variance the model accounts for in the data. So, if R2 is 100%, then it accounts for all variance; you have a perfect fit. So, in our world of non-parametric statistics/regression, the R2 is a valid measure of how well the model fits the data. If the set of sales being analyzed is the population, there is no question of bias. It is what it is.
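The "R2 as the share of variance explained in the data at hand" idea can be illustrated with a generic non-parametric smoother. This is a toy k-nearest-neighbor sketch on invented data, not the poster's actual method (which is not shown in this thread):

```python
import numpy as np

# Toy k-nearest-neighbor smoother -- one simple form of non-parametric
# regression. The "model" is the data itself plus a local-averaging rule.
def knn_predict(x_train, y_train, x_query, k=5):
    preds = []
    for xq in np.atleast_1d(x_query):
        idx = np.argsort(np.abs(x_train - xq))[:k]  # k nearest neighbors
        preds.append(y_train[idx].mean())
    return np.array(preds)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 100))
y = np.sin(x) + rng.normal(0, 0.1, 100)

# In-sample R2: the share of the variance in THIS data set the smoother
# explains -- no sampling question if the data set is the population.
fitted = knn_predict(x, y, x, k=5)
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"in-sample R2 = {r2:.3f}")
```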


The R2 is based on the sum of squares of the errors between the observed and the predicted values.
Use of a model that does not fit the data well cannot provide good answers to the underlying engineering or scientific questions under investigation.

Well, it all depends. In some cases, statisticians live in a world where they do not expect R2s above 0.4, and are quite happy. It all depends on what you are doing.
I like high R2s when modeling the objective/tangible features of a home, because they reduce the error of my subjective judgement. If my R2 is 0.80, then I have to subjectively estimate the remaining 20% of the value for the subject property. If I am off by 10%, that creates just a 2% error in my final opinion of value, well within 5% tolerances. So that R2 is extremely important for me, based on the way I do things.

Numerical methods for model validation, such as the R2 statistic, are also useful, but usually to a lesser degree than graphical methods. Graphical methods have an advantage over numerical methods for model validation because they readily illustrate a broad range of complex aspects of the relationship between the model and the data. Numerical methods for model validation tend to be narrowly focused on a particular aspect of the relationship between the model and the data and often try to compress that information into a single descriptive number or test result.
Graphs are just visualization aids for the functions created. In fact, I typically create C# programs from the functions to calculate the adjustments between the comps and the subject (the easiest way for me). Others probably use Python or Excel, whatever they find easiest.
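Since Python is mentioned as one option, a toy version of "a function that calculates the adjustment between a comp and the subject" might look like the following. The coefficient is invented for illustration; in practice it would come out of a model run, not be hard-coded:

```python
# Hypothetical fitted coefficient from a regression: the marginal price
# contribution of GLA. This number is made up for illustration only.
GLA_COEF = 115.0  # assumed dollars per square foot of living area

def gla_adjustment(comp_gla: float, subject_gla: float) -> float:
    """Dollar adjustment applied to the comp to match the subject's GLA."""
    return (subject_gla - comp_gla) * GLA_COEF

# A comp 200 sq ft smaller than the subject gets a positive adjustment:
print(gla_adjustment(comp_gla=1800, subject_gla=2000))  # 23000.0
```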
 
With respect, you failed to address my inquiry at all, and seem to have missed (or ignored, by choice) the key point. Rather, you took a detour to unrelated (though interesting) topics. Now, had you addressed my opinion as to whether or not I could successfully engage a target at 1,000 yards, that would have relevance.

Impacts on targets are data points whose accuracy and precision can both be measured. Value, on the other hand, is an economic principle. It is always an opinion and never a fact. So, discussion of the “accuracy” of a value opinion is somewhat meaningless. That is why USPAP intentionally uses credibility, not accuracy, as the measuring stick.

Years ago, when the ink on my diploma was still fresh, and my head was full of all those courses that led to my math degree, I too thought in terms of the “accuracy” of my valuations. Thankfully, I overcame that mindset.

Accuracy is broadly defined. What constitutes accuracy could actually take a book, and there are in fact books written on it. Sorry.
 
Bert, you have yet to post anything that shows your method of nonparametric regression. Why is that?
 
My calculator is accurate. But I aspire for my appraisal conclusions to be considered reasonable and credible.

I love the USPAP definition of "credible"! Per USPAP, the definition is: "Worthy of belief". [USPAP Comment: "Credible assignment results require support, by relevant evidence and logic, to the degree necessary for the intended use."]

It really gets to the SOW: how much accuracy is needed? 5%, so say my peers. So, if you know your stuff, you want all of your adjusted comps to be within 5% of the value conclusion. That is the benchmark, so say my peers.
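That benchmark is easy to check mechanically; a sketch with invented figures:

```python
# Check the benchmark described above: are all adjusted comps within 5%
# of the value conclusion? All dollar figures here are hypothetical.
adjusted_comps = [492_000, 505_000, 511_000, 498_500]
conclusion = 500_000

within = all(abs(c - conclusion) / conclusion <= 0.05 for c in adjusted_comps)
print(within)  # True: the largest deviation here is 511,000, or 2.2%
```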

Then the question is: is it reasonable to say your adjustments are simply based on "historical records"?

No say my peers teaching AI classes.

OK.
 