
FYI: MARS vs CNNs & Transformers

RCA

Elite Member
Joined
Jun 27, 2017
Professional Status
Certified General Appraiser
State
California
This may be of interest to a few of you:

MARS is the single best tool for the sales comparison approach (SCA). It is not just regression: it bears striking resemblances to both the Convolutional Neural Networks (CNNs) used in popular AI image-processing systems such as ResNet and the Transformers used in most LLMs, such as ChatGPT, Grok-3, and Claude.

MARS is better for appraisal because the models it creates can be explained, discussed, and understood by humans. CNNs and Transformers simulate neurons and neural pathways in the brain, in particular the growth of new neural pathways through training. You even hear the people designing these systems admit that they don't really understand why they work as well as they do:


Anyway:

From Grok-3:


From ChatGPT-4o:

Question: Is there any similarity between MARS regression and Transformer algorithms?

Interesting question! On the surface, MARS (Multivariate Adaptive Regression Splines) and Transformers might seem worlds apart—they come from different eras and are used for different purposes—but they do share a few high-level ideas. Here's a breakdown:

What They Are:

  • MARS is a non-parametric regression technique that models relationships by automatically creating piecewise linear splines (hinge functions) to fit data flexibly.
  • Transformers are deep learning architectures designed primarily for sequence modeling, especially in natural language processing. They use self-attention mechanisms to capture dependencies between inputs.
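For the curious, here is a minimal sketch (my own illustration, not part of the quoted answer) of what those MARS hinge functions look like in plain NumPy; the knot location and the toy "kinked" data are invented for the example:

```python
# Sketch: MARS-style paired hinge basis functions max(0, x - t) and max(0, t - x),
# fit to data by ordinary least squares. Knot and data are made up for illustration.
import numpy as np

def hinge_features(x, knots):
    """Build the intercept plus paired hinge functions MARS uses as basis functions."""
    cols = [np.ones_like(x)]  # intercept column
    for t in knots:
        cols.append(np.maximum(0.0, x - t))  # right hinge
        cols.append(np.maximum(0.0, t - x))  # mirrored left hinge
    return np.column_stack(cols)

# Toy target: piecewise-linear "price vs. size" curve with a kink at x = 5.
x = np.linspace(0, 10, 200)
y = np.where(x < 5, 2 * x, 10 + 0.5 * (x - 5))

X = hinge_features(x, knots=[5.0])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit of hinge coefficients
pred = X @ coef
print(round(float(np.abs(pred - y).max()), 6))  # prints 0.0: the hinge at 5 captures the kink exactly
```

A knot placed at the kink lets two straight pieces reproduce the curve exactly, which is why the fit error is essentially zero.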

Conceptual Similarities:

  • Adaptive modeling: MARS chooses basis functions (splines) adaptively based on the data; Transformers learn weights/representations adaptively via attention.
  • Nonlinearity: MARS models nonlinear relationships using hinge functions; Transformers model complex nonlinear interactions using deep layers and attention.
  • Feature interaction: in MARS, interactions can be added between features; in Transformers, attention layers model pairwise interactions between tokens/features.
  • Data-driven structure: in MARS, the model structure (number and placement of splines) is learned from the data; in Transformers, the attention mechanism learns which parts of the input to focus on, based on the data.
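By way of contrast, the "pairwise interactions" piece of a Transformer, a single self-attention head, can be sketched in a few lines of NumPy (again my own illustration, with random toy weights, not from the quoted answer):

```python
# Sketch: one self-attention head in NumPy. Token count, embedding size,
# and weight matrices are arbitrary toy values for illustration.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # similarity of every token pair
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V, weights                     # attention-weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                          # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)              # (4, 8)
print(weights.sum(axis=1))    # each row of attention weights sums to 1
```

The `scores` matrix is exactly the "pairwise interactions between tokens" row in the table above: every token scores every other token, and the softmax decides how much each one contributes.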

❗️ Key Differences:

  • Purpose: MARS is for regression and predictive modeling; Transformers are for sequence modeling and language understanding.
  • Model type: MARS is a classical statistical model; Transformers are deep learning neural networks.
  • Interpretability: MARS is highly interpretable; Transformers are often a black box (though explainable-AI tools exist).
  • Data types: MARS handles tabular, structured data; Transformers handle sequential data such as text, audio, and image patches.

Similar Vibe:

Both MARS and Transformers adapt to the data rather than sticking with a fixed model structure. You could say they both "learn how to learn the shape of the data"—but the mechanisms and domains are very different.

Want me to illustrate this with a concrete toy example or code snippet?
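Taking the model up on that offer with a toy snippet of my own: the "learn the shape of the data" idea can be shown with a greedy search that places a MARS-style knot wherever it most reduces squared error, which is how MARS's forward pass picks its splines (all data and candidate values here are invented for illustration):

```python
# Sketch: greedy knot selection, the adaptive step of MARS's forward pass.
# Try each candidate knot, keep the one with the lowest squared error.
import numpy as np

def best_knot(x, y, candidates):
    best_t, best_err = None, np.inf
    for t in candidates:
        X = np.column_stack([np.ones_like(x),
                             np.maximum(0.0, x - t),    # right hinge
                             np.maximum(0.0, t - x)])   # left hinge
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        err = float(((X @ coef - y) ** 2).sum())
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

x = np.linspace(0, 10, 100)
y = np.abs(x - 3.0)   # V-shaped data with a kink at 3
t, err = best_knot(x, y, candidates=np.linspace(1, 9, 81))
print(t)   # ≈ 3.0: the knot is discovered from the data, not fixed in advance
```

Nothing told the model where the kink was; the structure was inferred from the data, which is the shared "vibe" with attention learning where to look.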