• Welcome to AppraisersForum.com, the premier online community for the discussion of real estate appraisal.

Three days in a row. Different GLA than advertised.

M.A., I realize that you didn't post the data for this reason, but if possible please advise: 1) would the underlying formulas still apply if the net and gross adjustments were different? And 2) how were the Comparable Weight Percentages determined? Thank you.
The underlying formula still applies no matter what the net and gross adjustments are. Here it is –

A = 100 x [(CompY Gross Adjustment %) / (Total Gross Adjustment %)]
B = 100 – A
C = Total number of comps – 1

Weight of CompY = B / C
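The formula above can be sketched in a few lines of Python. This is only my reading of the posted formula, not code from the software; the function name and the sample percentages are mine. Note the weights come out summing to 100 by construction.

```python
# Sketch of the posted weighting formula (names and sample data are mine,
# not taken from any appraisal software).
def comp_weights(gross_adj_pct):
    """gross_adj_pct: each comp's gross adjustment percentage."""
    total = sum(gross_adj_pct)
    n = len(gross_adj_pct)
    weights = []
    for g in gross_adj_pct:
        a = 100 * g / total          # A: this comp's share of total gross adjustment
        b = 100 - a                  # B: invert, so fewer adjustments -> more weight
        weights.append(b / (n - 1))  # B / C, with C = number of comps - 1
    return weights

# Comp with the smallest gross adjustment gets the largest weight:
print(comp_weights([5.0, 10.0, 15.0]))
```

Because each A is a share of 100, the B values sum to 100 × (n − 1), and dividing by C = n − 1 forces the final weights to total exactly 100%, which matches the "always add up to 100%" behavior described later in the thread.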
 
With the current form, I don't initially look at the net and gross adjustments when I reconcile.
When I need reasons or excuses to justify how I weigh the comps, I look at the net and gross adjustments as validation.
I assume my peers do that too.
How do you justify a decision that was already made if the net and gross adjustments fail to support the weight you assigned [even if not quantitatively] to the comparables? [Asking Fernando.]
 
It's like she inserts a "therefore you really mean this" instead of accepting what is actually said.

The original question: AB, I usually consider the width of the adjusted value range a critical factor, i.e., the narrower it is, the more obvious the price point is. Question: is there a mathematical expression to describe the width other than in absolute dollars, e.g., from $xxxx.o to $zzz.o, or $.....00? Can the width be described in relative terms, perhaps a ratio of some sort, or a variance percentage?

My answer: Coefficient of Variation, which is a statistical measure of dispersion applicable to any range.

I have no idea how she got from point A to the alternate universe she is currently arguing in.
Does the Coefficient of Variation pertain to a range of data that was determined by qualitative as well as quantitative means? If so, is a caveat usually required to explain that as a limitation? What I mean is that the adjusted values typically aren't determined entirely by quantitative methods; and if not, is the quality of the range still described by the coefficient of variation--which I "think" is the absolute answer to the question I originally asked....
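For readers following along, the statistic being discussed is easy to compute: the coefficient of variation is the standard deviation divided by the mean, which makes it unitless, so a "15% spread" means the same thing whether the comps are at $200,000 or $2,000,000. A minimal sketch, with hypothetical adjusted values of my own invention:

```python
import statistics

# Coefficient of Variation: dispersion relative to the mean, unitless,
# so range "width" is comparable across price levels and reports.
adjusted_values = [402_000, 410_000, 415_000, 398_000]  # hypothetical comps
cv = statistics.stdev(adjusted_values) / statistics.mean(adjusted_values)
print(f"CV = {cv:.2%}")  # a tighter adjusted-value range yields a smaller CV
```

Note the statistic only describes the dispersion of the numbers it is given; it is silent about whether those numbers came from quantitative or qualitative adjustments, which is the caveat the poster is asking about.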
 
AI Overview

"Alamode weighting of comps" refers to the automatic weighting system in the a la mode appraisal software (TOTAL), which assigns a higher weight to comparable properties (comps) that have had fewer adjustments made to them. This system is designed to give more importance to comps that are more similar to the subject property, and the weights are dynamic, meaning they change as you add, remove, or adjust other comps.
  • How it works: The software assigns a weight based on the level of adjustment needed for each comp. A comp that requires minimal adjustments will receive a higher weight, while a comp that needs many adjustments will receive a lower weight.
  • Automatic and dynamic: The system automatically calculates these weights, ensuring they always add up to 100%. If you add a new comp or adjust an existing one, the weights of all other comps will adjust automatically to maintain the 100% total.
  • Purpose: This weighting provides a quick way to understand the relative similarity of your comps. It helps focus on the most similar properties without manual calculations or a weighted average formula, as the software does this automatically.
  • Analogy to traditional appraisal: This is a digital equivalent to the traditional appraisal process where an appraiser gives more credibility to comps that are more similar to the subject property, often by making fewer adjustments to them.
I have always relied upon ACI but I'm unaware of any similar weighting process. Do any ACI users know if it is available?
 
I think that could be a major mistake.
Obviously I'm way out of my league in this thread, but I often wonder how appraisers define the word "consider," as in the comment "all comparables were given equal consideration." A lot of appraisers take that to mean the comparables are all similar/identical, but to me "equal consideration" means "equal thought" was given to each comparable, with a decision still pending, not that they are "considered" to be identical. [Maybe a downright frivolous issue...]
 
In most cases, you must assign a point value as of the effective date. But nothing prevents the appraiser from also providing a range value. And, quite frankly, beyond providing a range for the effective date, you could also offer an interesting "most probable" value going forward, assuming average maintenance. You could also offer some alternate future values given certain black swan events. You could provide numerous projections and conditional ranges, which would likely leave the client in disbelief regarding the actual value. Of course, the accuracy of your predictions is significantly reduced as you move further into the future. - That's why the lenders and GSEs should require periodic home inspections every 8-10 years or so, especially if refinancing is required, and make sure that refinancing covers any needed repairs and maintenance, as under that assumption, the properties can be expected to hold their value better.
Prospective opinions always intrigue me because they've got to be inherently more valuable than contemporaneous opinions. Obviously the downside is the inability of any effort to predict the future, but do any readily available systems have the ability to do so, perhaps with a margin of error that could be integrated into the projected future value, especially if the finite number of potential factors that might influence the future result could be weighted???
 
When we delve into data mining, instead of "margin of error," we tend to discuss R-squared values more. But still, our MARS graphs break down trends (value contribution increase/decrease) by ranges, and it is possible to calculate a margin of error over the different ranges - and in fact if you look at some of the graphs I have presented on this forum, such as the recent one "Distance from the Ocean" for Sea Ranch, you will see a background color that indicates a range where I think 50% or so of the values fall. Offhand, I don't know the details of the MARS calculations.

What we would likely do for projections into the future is make them under different assumptions, and leave it to the reader to decide how much weight he wants to give to the "assumptions" being true. In fact, the assumptions might be quite variable: Imagine an interactive graph that looks like a normal curve by default, where you can go out and pick a value, like years into the future, and then push the probability up or down, where the probability value is: cumulative probability of a 7.8+ earthquake by a given number of years into the future. And you might have such a graph for half-a-dozen different magnitude values: 7.0, 7.5, 8.0, 8.5, 9.0+.
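The scenario-with-assumptions idea sketches out naturally as a probability-weighted projection. Everything below is illustrative: the scenario names, multipliers, and probabilities are invented for the example, not outputs of any real model, and the reader (or client) would be the one adjusting the probabilities.

```python
# Hypothetical sketch of a scenario-weighted prospective value.
# All scenario names, multipliers, and probabilities are invented
# for illustration; a client could dial the probabilities up or down.
scenarios = {
    "baseline appreciation": (1.15, 0.70),  # (5-yr value multiplier, probability)
    "major earthquake":      (0.60, 0.05),
    "local downturn":        (0.90, 0.25),
}
current_value = 500_000

# Probabilities should sum to 1 for the expectation to be meaningful.
assert abs(sum(p for _, p in scenarios.values()) - 1.0) < 1e-9

expected = sum(current_value * mult * p for mult, p in scenarios.values())
print(f"Probability-weighted 5-year value: ${expected:,.0f}")
```

This is the "leave it to the reader" structure: the point estimate falls out of whatever probabilities the reader assigns, and each scenario's standalone value doubles as the conditional answer for the what-if question.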
 
..so a client would review various what-if scenarios and select the option that is compatible with their willingness to be exposed to risk, i.e., to make an unprofitable decision, relative to the potential profit??? [And once the program is established the datadude would have little to do except count the money!!!!]
 