
The End of *most* Full-Time Appraisers?

Take a step back and think about what they're investigating, and how that may affect the current situation with the GSEs and AMCs. Is that a new position?

Like I’ve said many times here, Bagott is a skilled writer who, much of the time, tries to connect imaginary dots.
 
Bagott has a very small, cult-like group of very old appraisers he gives hope to. But he's really not reaching anyone who could make change happen.

But on the other hand, maybe things are going in the right direction, just not the way appraisers want them to.

Bagott needs to retire and/or move on to writing about things that are relevant.

Even his introductions as a former newspaperman take one back to hearing Paul Harvey as a kid, with the famous "and now for the rest of the story"... lmao
 
My question for you is: who are you marketing this to? I've sold advanced statistics packages on market data, and it does pay well, but it is a small market in my area. I've done a few for the State of Michigan and a few for a local county.

I don't consider it marketing.

So, in my opinion, there is a ton of rubbish with regard to so-called Appraisal Statistics. Absolute rubbish and nonsense, perpetrated by some well-known MAIs and others in the appraisal field who teach statistics for appraisers or voice their opinions in articles, books, podcasts, and videos. It seems they take advantage of the fact that even the simplest statistics is beyond the reach of most appraisers, and they get very reckless in their statements because they feel safe in what they consider a knowledge level superior to that of most other appraisers.

Their day is coming to an end. Just about anyone can take their statements, even their articles and books, to a chatbot and get an objective and even detailed opinion as to their conceptual correctness and relevance. It is not just my opinion; it's the opinion of a massive computer system that has digested hundreds or even thousands of relevant publications, has itself been trained on multiple tests, and has highly advanced reasoning and mathematics engines at its disposal.

The day of the MAI charlatans is coming to an end. You would think they would have realized that by now, but apparently not; these idiots stubbornly carry on a bit longer.

There is a group of these charlatans stemming from some oldies in the field. They all pat each other on the back and say they have done a good job. Well, "good" things don't last forever.

Be careful who you support. Think twice before giving a thumbs up, because you may dirty yourself as a result. That applies to everyone. I made that mistake long ago by using someone as a reference in a presentation, because I simply couldn't find any appraiser to quote and thought I needed one. I should have just forgone using the damned idiot. That someone is still around, selling the same rubbish.

However, things are progressing very fast now; everything is moving ahead. The charlatans need to be dumped, and the sooner, the better.

The last thing we need is MAIs who claim to understand AI and statistics showing off what they think are strokes of genius on LinkedIn. Then I make a negative comment, which rarely makes it past their private eyes, and the next thing you know, their bright idea is pulled off LinkedIn, or whatever. Well, that's a good thing.

One could argue that presenting fallacious interpretations and incorrect concepts is better than nothing. Well, sure; if they don't mind criticism, I'd agree.

I am marketing, if anything, the removal of incompetence. In appraisal, that is a big job. I am constantly moving forward toward Valuation Engineering, and hoping for even more progress.

What is the purpose of all this? Well, it is what it is; you can make up your own mind. I will keep doing the same thing, trying to improve valuation. Somewhere along the way, what I actually do changes with time.

I draw a line between those who have some hope of progressing further and improving things themselves, and those who are tied up in the past, supporting the status quo.

Here is a very good book to read. See if you can really understand the nuances:

Clayton, Aubrey. Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science (pp. 97-100). (Function). Kindle Edition.

 
Like I’ve said many times here, Bagott is a skilled writer who, much of the time, tries to connect imaginary dots.

Just because you are in denial...doesn't mean we can't see :rof:
 
In a nutshell, "Bernoulli's Fallacy" is about this kind of event. There is far more to Bernoulli's Fallacy, and this particular example now goes by a more specific name because of its context, but it is still an example of the same fallacy:

THE PROSECUTOR’S FALLACY

Other unfortunate examples of the same type of logical error are found in courtroom arguments that a suspect under trial must almost certainly be guilty because the circumstances of the case are so unlikely. Sally Clark was a fairly affluent woman who had worked in banking in London before training as a solicitor and joining a law firm in Manchester, England, in 1994. In September 1996, she gave birth to an apparently healthy baby boy who died suddenly less than three months later. Clark had been alone with the child at the time, and her claim was that he had fallen unconscious and stopped breathing shortly after she put him to bed. Following the incident, she fell into a deep depression, sought out counseling, and was in recovery when she had another baby boy, this one three weeks premature, in November 1997. Tragically he also died within eight weeks of being born under circumstances similar to those of her first child. Notably the second infant showed some signs of trauma, which Clark explained were likely caused by her attempts to resuscitate him before paramedics arrived or by the attempts of the paramedics themselves. She and her husband, Steve, were both arrested in February 1998, and Sally was charged with two counts of murder (the charges against Steve having been dropped).

During the trial, the fact that it was extremely unlikely for a pair of infant deaths to happen to such a family as a result of SIDS—that is, to happen by chance—was presented as a key piece of evidence. The pediatrician Roy Meadow, formerly a professor at the University of Leeds and inventor of the term Munchausen syndrome by proxy, gave testimony that the chance of two children from an affluent English family dying from SIDS was something like 1 in 73 million. He colorfully compared this to the chance that an 80-to-1 longshot at the Grand National horse race would win four years in a row.[24] As he opined in his book on the subject, ABC of Child Abuse (in what came to be known as Meadow’s law): “One sudden infant death in a family is a tragedy, two is suspicious and three is murder unless proven otherwise.”[25] Based largely on this testimony and the idea that “lightning doesn’t strike twice,” Clark was convicted and sentenced to life in prison.

The press coverage at the time reviled her as a child murderer. Her husband, also a solicitor, quit his job to focus on her appeal. By combing through the prosecution’s records, they found that the pathologist who testified about the results of medical exams on the second child had withheld key evidence from the jury—specifically, that tests for a bacterial infection of the cerebrospinal fluid had come back positive. On the basis of these findings, her conviction was overturned in January 2003, after she had spent more than three years in prison.

Meadow’s statistical testimony was also widely criticized. His figure of 1 in 73 million was based on an estimate that the chance of a single child dying from SIDS in any given family similar to the Clarks was 1 in 8,543; from there, he reasoned that the chance of two such deaths in a given family would be 1 in 8,543², or 1 in 72,982,849. This line of reasoning assumes the two events to be independent, though, so the probability of a second child dying is unaffected by the conditional assumption of a first child having died. This assumption would be negated by the possible presence of any common cause within the family, such as a genetic condition or an environmental health issue.

In October 2001, the Royal Statistical Society issued a statement criticizing Meadow’s independence assumption: “There are very strong reasons for supposing that the assumption is false. There may well be unknown genetic or environmental factors that predispose families to SIDS, so that a second case within the family becomes much more likely than would be a case in another, apparently similar, family.”[26] Also, Meadow’s bizarrely precise initial figure of 1 in 8,543 came from a study commissioned by the British Department of Health and was the result of adjustments applied to the overall incidence rate of SIDS at the time—about 1 in 1,300—based on certain factors that were known about the Clark family: they were an affluent couple in a stable relationship, Sally was over 26 years old, and the Clarks were nonsmokers, all of which were known to decrease the likelihood of SIDS. Critics such as mathematics professor Ray Hill at Salford University pointed out Meadow had overlooked factors that would increase the likelihood of SIDS for the Clark family, including that both children were boys.[27]

All of these points of criticism were important and well founded, but the single greatest problem with Meadow’s testimony, and what should have been presented vociferously in Sally Clark’s defense, was that he had been computing the wrong probability. That is, in our language Meadow had been focused only on the sampling probability of a given event—the event of two apparently otherwise healthy children in the same family dying suddenly in infancy—when he should have been considering the inferential probability for his hypothesis that the two children had been murdered. He argued, under the alternative hypothesis that they were well taken care of, that this care would have made their deaths incredibly unlikely, and he used this as evidence that the hypothesis itself was unlikely. But this is just like the base rate neglect examples given earlier. Two children dying in infancy by whatever means is already an extremely unlikely event, but that is the data observation that we must condition on. The whole landscape of our probability assignments needs to change to reflect the fact that, by necessity, we are dealing with an extremely rare circumstance. And the prior probability we should reasonably assign to the proposition “Sally Clark murdered her two children,” determined before considering the evidence, is itself extremely low because double homicide within a family is also incredibly rare! Included among the inferences, we should also note that some of the factors that make a couple like the Clarks less likely to have a child die from SIDS also lower our assignment of the probability that they are murderers. Carrying through a Bayesian inference (that also corrected for the flawed reasoning in Meadow’s sampling probability), Hill estimated in an article for the journal Pediatric and Perinatal Epidemiology that the posterior probability for the SIDS hypothesis was somewhere between 70 and 75 percent. That is, a low sampling probability did not make the SIDS hypothesis an unlikely explanation for the deaths of the two children; it actually made it the significantly more likely explanation. The judges of the appellate court noted that Meadow’s calculations had been predicated on a number of questionable assumptions, none of which had been made clear to the jury.

Furthermore, they observed that “we rather suspect that with the graphic reference by Professor Meadow to the chances of backing long odds winners of the Grand National year after year it may have had a major effect on [the jury’s] thinking notwithstanding the efforts of the trial judge to down play it.” Following Sally Clark’s successful appeal in 2003, the attorney general ordered a review of all similar cases, and two other women convicted of murdering more than one of their own children, Donna Anthony and Angela Cannings, had their convictions overturned. A third, Trupti Patel, whose trial for the murder of three children was ongoing at the time, was acquitted. In all three cases, Meadow had testified as an expert witness that the chances of a family suffering multiple deaths from SIDS was vanishingly small. After a hearing in 2005, the British General Medical Council struck Meadow from the British Medical Register for professional misconduct, though he was reinstated the following year after he appealed the decision to the country’s High Court. His comeuppance, such as it was, had come too late for Sally Clark, though. People close to her said she never recovered from the traumatic experience of being wrongfully blamed for her children’s deaths, and she was found dead of alcohol poisoning in her home in 2007.

In legal circles, the argument Meadow presented in these cases—that, under an assumption the suspect is innocent, the facts of the case would be incredibly unlikely, and, therefore, the suspect is unlikely to be innocent—is known as the prosecutor’s fallacy. A famous and oft-cited example is the 1968 case of People v. Collins, which involved a pair of suspects, Malcolm and Janet Collins, who were arrested in Los Angeles for robbery based on matching certain characteristics given by eyewitnesses to the crime: that he was an African American man who may recently have had a beard and mustache, that she was a blonde woman who normally wore her hair in a ponytail, and that they drove a partly yellow car. A mathematics instructor at the nearby state college testified at the trial that the probability of a randomly chosen couple matching all the given characteristics was 1 in 12 million, based on multiplying the estimated probabilities supplied by the prosecution.

Clayton, Aubrey. Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science (pp. 97-100). (Function). Kindle Edition.
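
For anyone who wants to see the arithmetic behind that excerpt, here is a minimal Python sketch. The only inputs taken from the passage are the 1-in-8,543 figure and Ray Hill's roughly 70-75 percent result; the two prior rates in the Bayesian comparison are made-up placeholders of my own, there purely to show the mechanics of conditioning on the event that actually happened.

```python
# Sketch of the two calculations discussed in the excerpt above.
# Only the 1-in-8,543 figure comes from the text; the priors further down are
# illustrative assumptions, not numbers from Clayton or Hill.

# --- Meadow's (flawed) step: treat the two deaths as independent events ---
p_single_sids = 1 / 8543                       # adjusted SIDS rate for a family like the Clarks
p_double_sids_if_independent = p_single_sids ** 2
print(f"Meadow's figure: 1 in {1 / p_double_sids_if_independent:,.0f}")  # ~1 in 73 million

# --- The inferential question Meadow skipped: P(explanation | two infant deaths) ---
# Both hypotheses fully account for the observed event (two deaths), so comparing
# them reduces to comparing their prior probabilities and renormalizing.
# The two rates below are hypothetical, chosen only to illustrate the mechanics:
prior_double_sids = 1 / 2_000_000    # hypothetical: allows for shared genetic/environmental causes
prior_double_murder = 1 / 5_000_000  # hypothetical: double infanticide in such a family is itself very rare

posterior_sids = prior_double_sids / (prior_double_sids + prior_double_murder)
print(f"Posterior for SIDS given two deaths: {posterior_sids:.0%}")
# With these made-up inputs the posterior lands around 71%, the same neighborhood as
# Hill's published estimate of roughly 70-75% -- a tiny sampling probability did not
# make the innocent explanation unlikely.
```

The point of the sketch is the structure, not the specific numbers: the comparison has to be made between competing explanations of the same rare event, which is exactly the step Meadow's testimony left out.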
 
Geez, the Flux Capacitor Dude's rants are turning into pages of AI Claudia or ChatGPT thesis-like ramblings.

Like a mad professor in the basement of Herman Munster's house. I think he's having a nervous breakdown because nobody wants to purchase his regression tools. Lol
 
For people who rely on chatbots for analysis, particularly Grok…

Elon Musk's Chatbot Says There's a Strong Chance Trump Is 'Russian Asset'

According to the AI chatbot called Grok, which was developed by Elon Musk’s company xAI, there is a “75-85% likelihood” that the person who delivered the State of the Union address on Tuesday night is a “Putin-compromised” Russian asset.

In describing Grok, by the way, Musk said it is a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically-correct.”

 

Yeah and I'm one too...lol
 

That explains why Trump didn't put any tariffs on Russia.
 
Professor, you lost me after you referenced "Bernoulli's Fallacy," so I used AI to educate me a little bit.

Bernoulli's Fallacy refers to a fundamental misunderstanding in probability and statistical inference that has influenced modern science. The term is explored in depth by Aubrey Clayton in his book Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science.


The fallacy originates from a misinterpretation of probability, dating back to the work of Jacob Bernoulli, a 17th-century mathematician. It highlights how traditional statistical methods often fail to incorporate prior knowledge when making inferences, leading to flawed conclusions. This issue has contributed to the reproducibility crisis in scientific research and has implications in fields like medicine, law, and public policy.
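
Since that summary stays abstract, here is a minimal sketch of the distinction it is pointing at: the sampling probability P(data | hypothesis) is not the inferential probability P(hypothesis | data), and prior knowledge (the base rate) is what connects them through Bayes' rule. The screening-test numbers below are generic illustrative assumptions, not figures from Clayton's book.

```python
# Generic illustration of why ignoring prior knowledge misleads:
# a test that is rarely wrong can still be wrong most of the times it says "positive"
# when the condition itself is rare. All numbers are illustrative assumptions.

prior = 0.001               # assumed base rate: 1 in 1,000 actually have the condition
p_pos_given_sick = 0.99     # assumed sensitivity of the test
p_pos_given_healthy = 0.05  # assumed false-positive rate

# Total probability of seeing a positive result
p_positive = prior * p_pos_given_sick + (1 - prior) * p_pos_given_healthy

# Bayes' rule: the inferential probability, which folds in the prior
posterior = prior * p_pos_given_sick / p_positive

print(f"P(positive | sick) = {p_pos_given_sick:.0%}")  # what the test advertises
print(f"P(sick | positive) = {posterior:.1%}")         # roughly 2% -- the base rate dominates
```

Swap in the courtroom version (hypothesis = the suspect is innocent, data = two infant deaths) and you have the prosecutor's fallacy from the excerpt above.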
 