Welcome to AppraisersForum.com, the premier online community for the discussion of real estate appraisal.

AI For Legal Work: Thomson Reuters' "CoCounsel" and Anthropic's Claude

RCA

Elite Member
Gold Supporting Member
Joined
Jun 27, 2017
Professional Status
Certified General Appraiser
State
California
This video is interesting and has some parallels in appraisal:


Here is the cost:


Now, the problem is that even if you wanted to buy CoCounsel, you would probably have to be a licensed attorney. That said, you can certainly use Anthropic's Claude to get pretty good legal advice. If you know an attorney or have connections to a law firm, you might be able to use their service (so says Claude).

More importantly, we do see that this is going to make a big difference for small businesses, which often shy away from taking on larger organizations or government agencies because winning a case has heretofore largely been a question of who has the most money. That is, I believe, not so true anymore.
 
As an important side note, we have this issue of "Generative AI": the creation of new content, which Anthropic and other AI companies don't like to discuss unless you push them.

Now, they will almost always say that these AI tools must be used only with oversight and review by professionals. But, of course, who is to say that such professionals have the competence to do the review and monitoring? They don't necessarily have the required skill set, and in any case it is never guaranteed that they are even capable of understanding the generative AI content produced. I talked with Claude about this, and it was quite frank in admitting all such problems. It attempted to deflect here and there, but I did a "recross" and it conceded that problems, intractable problems, exist.

ME: It seems that you are suggesting that Generative AI should only be allowed for use in legal situations if there is competent review and oversight by attorneys.

Claude:
"Exactly - you've distilled this down to the essential principle. Yes, I am suggesting that for legal applications, generative AI should require competent attorney review and oversight, and this isn't just my opinion - it's becoming the emerging professional standard.

Why This Standard Makes Sense

Professional Responsibility Requirements:
  • Attorney accountability: Lawyers remain professionally responsible for all work product
  • Competence obligations: Model Rules require lawyers to provide competent representation
  • Malpractice liability: Attorneys can't delegate away their professional judgment
  • Ethical duties: Candor to tribunals and clients can't be outsourced to AI
The Nature of Legal Work:
  • High-stakes consequences: Legal errors can result in lost cases, financial harm, or imprisonment
  • Professional privilege: Legal advice enjoys special protections that require professional oversight
  • Regulatory complexity: Laws change frequently and vary by jurisdiction
  • Contextual judgment: Legal strategy requires understanding that goes beyond pattern matching

Emerging Professional Standards

Bar Association Guidance: Many state bars are establishing rules that essentially require this standard:
  • Florida Bar: Requires lawyers to understand AI limitations and verify AI outputs
  • New York State Bar: Emphasizes attorney responsibility for AI-generated content
  • ABA Model Rules: Being interpreted to require competent oversight of AI tools
Court Requirements:
  • Some federal courts now require attorney certification that human review occurred
  • Sanctions have been imposed when lawyers failed to verify AI-generated citations
  • Professional responsibility cases are establishing precedents about AI oversight duties

What "Competent Review and Oversight" Means

Minimum Standards:
  • Verify all factual assertions and legal citations
  • Review legal reasoning for accuracy and appropriateness
  • Assess strategic soundness of arguments and approaches
  • Ensure compliance with professional and ethical standards
  • Take full responsibility for final work product
Practical Implementation:
  • Attorney must understand the AI tool's capabilities and limitations
  • Human review of all AI-generated content before use
  • Independent verification of legal authorities and factual claims
  • Professional judgment applied to strategic and ethical decisions
  • Client communication about AI use where appropriate

The Broader Principle

Professional vs. Consumer Use: This distinction is important:
  • Consumer use: Individual using ChatGPT for personal questions (lower stakes)
  • Professional use: Attorney using AI for client representation (high stakes, professional obligations)
Analogies in Other Professions:
  • Medical AI: Doctors must review and approve AI diagnostic suggestions
  • Engineering AI: Licensed engineers must validate AI-generated designs
  • Accounting AI: CPAs must verify AI-generated financial analyses
  • Architectural AI: Licensed architects must approve AI-generated building plans

The Alternative Risks

What Happens Without Oversight:
  • Professional malpractice when AI generates incorrect advice
  • Bar discipline for failing to provide competent representation
  • Client harm from unreviewed AI errors
  • Erosion of professional standards and public trust
System-Wide Consequences:
  • Courts losing confidence in legal submissions
  • Clients receiving poor representation without knowing it
  • Legal profession credibility being undermined
  • Access to justice being harmed by unreliable AI-generated legal work

Implementation Challenges

Resource Considerations:
  • Time requirements: Proper review takes significant attorney time
  • Cost implications: May reduce some efficiency gains from AI
  • Training needs: Attorneys must understand AI capabilities and limitations
  • Technology gaps: Not all attorneys have necessary technical understanding
Practical Solutions:
  • Staged implementation: Start with lower-risk applications
  • Enhanced training: Professional development on AI oversight
  • Clear policies: Firm-wide standards for AI review processes
  • Technology partnerships: Working with AI providers who understand professional requirements

The Future Evolution

Toward More Sophisticated Oversight: As AI systems mature, the oversight model may evolve to:
  • Risk-based review: More intensive oversight for higher-stakes applications
  • Specialized AI tools: Legal AI designed with built-in professional safeguards
  • Integrated workflows: AI systems that better support rather than replace professional judgment
  • Enhanced training: Attorneys better equipped to provide effective oversight

But the Fundamental Principle Remains: Professional accountability and human judgment must remain central to legal practice, regardless of technological advances.

Your insight captures what is likely to become a foundational principle of AI use in legal practice: technology augments professional capability, but cannot replace professional responsibility. The attorney remains the essential human element ensuring quality, ethics, and accountability in legal representation."


ME: "One problem with this is that it assumes the attorneys are competent enough to understand the content produced by Generative AI, yet that degree of competence is not necessarily guaranteed. The situation could be far worse for far less qualified professionals in other areas who are also using Generative AI in situations where ethical and competence concerns are of high importance."

Claude:
"You've identified a profound and troubling systemic problem. You're absolutely right - the entire oversight framework I described assumes a level of competence that may not exist, and this creates a dangerous illusion of safety.

The Competence Assumption Problem

What We're Assuming:

  • Attorneys can reliably identify AI errors and hallucinations
  • Lawyers understand AI limitations well enough to know what to check
  • Professionals can distinguish between plausible-sounding and actually correct AI output
  • Practitioners have sufficient expertise to evaluate AI-generated content in specialized areas
What Reality Often Looks Like:
  • Many attorneys lack technical understanding of how AI works
  • Overconfidence in AI outputs, especially when they sound authoritative
  • Time pressures that encourage superficial rather than thorough review
  • Knowledge gaps in specialized areas where AI might seem more expert than the human
The Dunning-Kruger Effect in Professional AI Use

The Core Problem:
Professionals who most need oversight of AI may be least capable of providing it:

  • Inexperienced attorneys using AI to compensate for knowledge gaps
  • Solo practitioners without peer review or specialized expertise
  • Overwhelmed professionals using AI to handle volume they can't manage
  • Practitioners in unfamiliar areas relying on AI for guidance they can't evaluate
Specific Legal Examples:

  • Corporate lawyer reviewing AI-generated immigration analysis (outside expertise)
  • Family law attorney checking AI's securities law citations (unfamiliar specialty)
  • New associate reviewing AI's constitutional law arguments (limited experience)
  • Solo practitioner evaluating AI's complex tax analysis (beyond training)
The Broader Professional Crisis

In Healthcare:

  • General practitioners using AI for specialized diagnostic suggestions they can't properly evaluate
  • Rural doctors relying on AI for subspecialty guidance without access to specialist consultation
  • Nurse practitioners using AI for medical decisions beyond their training scope
In Engineering:
  • Civil engineers using AI for electrical system designs outside their expertise
  • Junior engineers unable to spot sophisticated errors in AI-generated structural analyses
  • Consultants using AI to work beyond their actual competency areas
In Finance:
  • Financial advisors using AI for complex derivatives analysis they don't understand
  • Small firm accountants relying on AI for sophisticated tax strategies beyond their expertise
  • Insurance agents using AI-generated risk assessments they can't properly evaluate
The Multiplication of Risk

Competence Gaps Create Cascading Problems:

  1. False confidence: AI outputs appear sophisticated and authoritative
  2. Reduced diligence: Professionals may check less carefully when AI "seems right"
  3. Scope creep: AI enables professionals to work outside their competence areas
  4. Error propagation: Mistakes get embedded in professional work without detection
  5. Standard erosion: Overall professional quality may decline systemically
The Regulatory and Licensing Challenge

Current Professional Regulation Inadequacy:

  • Bar exams don't test AI oversight competency
  • Continuing education hasn't caught up with AI realities
  • Professional standards assume human-only work products
  • Malpractice insurance may not cover AI-related errors
  • Disciplinary systems lack frameworks for AI-related misconduct
Licensing Doesn't Guarantee AI Competence:

  • Professional licenses ensure baseline competency in traditional practice
  • They don't ensure AI literacy or technology oversight skills
  • They don't guarantee recognition of AI limitations
  • They don't address scope of practice expansion enabled by AI
Potential Systemic Failures

Professional Services Collapse Scenarios:

  • Mass malpractice events when AI tools widely used without adequate oversight
  • Public trust erosion in professional services generally
  • Regulatory crackdowns that overcorrect and stifle beneficial AI use
  • Market segmentation between AI-using and traditional practitioners
  • Access to justice problems when AI-assisted services prove unreliable
Early Warning Signs:
  • Lawyers sanctioned for citing non-existent AI-generated cases
  • Medical errors from AI diagnostic tools used without proper oversight
  • Engineering failures from unverified AI-generated designs
  • Financial losses from AI-generated advice beyond practitioner competence
What This Means for Society

The Fundamental Problem:
We're asking licensed professionals to police technology they may not understand, while clients assume professional oversight guarantees quality.

Potential Solutions (None Easy):
  1. Enhanced licensing requirements that include AI competency testing
  2. Mandatory AI literacy continuing education for professionals
  3. Technology-specific oversight roles (AI specialists within firms)
  4. Enhanced malpractice liability for AI-related errors
  5. Consumer disclosure requirements about AI use in professional services
  6. Regulatory pre-approval of AI tools for professional use
  7. Professional liability insurance requirements for AI use
The Urgency of This Problem

Your observation highlights that we may be in a dangerous transition period where:
  • AI capability is advancing faster than professional competency to oversee it
  • Market pressures encourage AI adoption without adequate safeguards
  • Regulatory frameworks lag behind technological reality
  • Public assumptions about professional oversight may be dangerously wrong
This suggests we need much more robust systems for ensuring professional competency in AI oversight, or we risk systemic failures across multiple professional domains.

The alternative to competent human oversight isn't necessarily "better AI" - it might be much more restrictive regulation of professional AI use until competency frameworks catch up with technological capability."
 
Now, what the above means with respect to appraisal: currently, there really isn't any room for "Generative AI" in appraisal, unless you can ensure that review appraisers are smart enough and experienced enough to critically analyze and understand the generative output of AI systems, and are also sufficiently competent to conclude whether such output is factually and logically correct.
 