Claude AI Praised by a Supreme Court Justice: Is AI’s Legal Role Changing?

Anthropic’s Claude AI is earning high praise for legal analysis, notably from U.S. Supreme Court Justice Elena Kagan, signaling a potential shift in AI’s role in law. Despite this, the technology’s well-documented issues with fabricating information, or “hallucinating,” mean its adoption in the legal field remains cautious.

For years, headlines about AI in the legal world have been overwhelmingly negative, with stories of lawyers facing sanctions for submitting briefs filled with fake cases invented by ChatGPT. These incidents created a narrative that generative AI was a dangerous toy, unfit for the high-stakes environment of legal practice. Yet, a recent comment from one of the highest legal minds in the country has challenged that assumption, prompting a second look at what these tools can do when used correctly.

What Did Justice Kagan Say About Claude?

U.S. Supreme Court Justice Elena Kagan praised Anthropic’s Claude for its exceptional ability to analyze a complex constitutional dispute. Speaking at the Ninth Circuit’s judicial conference in Monterey, California, she highlighted the AI’s performance on a case involving the Confrontation Clause of the Sixth Amendment, a notoriously difficult area of law that guarantees a defendant’s right to cross-examine witnesses.

Her comments were based on experiments conducted by Supreme Court litigator Adam Unikowsky. In a blog post, Unikowsky described how he prompted Claude 3.5 Sonnet to analyze the court’s majority and dissenting opinions in Smith v. Arizona, a recent Confrontation Clause case where Kagan herself wrote the majority opinion. The AI’s analysis was so sharp that it drew a remarkable conclusion from the litigator.

“Claude is more insightful about the Confrontation Clause than any mortal.” — Adam Unikowsky, Supreme Court Litigator

This was not just a generic compliment. It was a specific endorsement of an AI’s ability to grasp legal nuance and reasoning in a field that had previously divided the Supreme Court. It suggests that, beyond simple document summarization, modern AI can engage in sophisticated analytical tasks that were once considered the exclusive domain of human legal experts.

Why Is This Praise Such a Big Deal?

This endorsement is significant because it directly counters the prevailing narrative of AI as an unreliable and hazardous tool for legal professionals. For the past couple of years, the legal community’s primary exposure to AI has been through high-profile failures, such as the embarrassing and professionally damaging incidents where lawyers were caught submitting court filings citing completely fabricated legal precedents generated by ChatGPT.

A common mistake is treating these large language models as databases of fact. They are text predictors, and without proper constraints, they can predict plausible-sounding but entirely false information. For example, in a widely publicized 2023 case, a federal judge sanctioned lawyers after their filing included fictitious cases made up by ChatGPT. These events rightly made the entire legal profession wary of the technology. Kagan’s praise, in contrast, provides a powerful counter-example, showing the technology’s potential upside when applied to analysis rather than factual recall.

In practice, credibility is everything when navigating AI adoption. A single, specific endorsement from a respected figure like a Supreme Court Justice carries more weight than a dozen marketing campaigns. It opens the door for other legal professionals to begin experimenting, albeit cautiously. It shifts the conversation from “Can we trust AI?” to “How can we use AI effectively and safely?” This is a significant step toward mature adoption in any industry, but especially in one as tradition-bound and risk-averse as law.


The Persistent Problem: AI Hallucinations in Legal Work

The single greatest barrier to widespread AI adoption in law remains the risk of hallucination. An AI hallucinates when it generates confident, plausible-sounding information that is factually incorrect or entirely fabricated. While this can be a minor annoyance when an AI invents a new recipe, it is a catastrophic failure when it invents a legal precedent that could decide someone’s freedom or financial future.

The key issue is that these systems do not “know” when they are fabricating. They are designed to generate coherent text, and a made-up case name followed by a realistic-sounding legal argument is often more statistically likely than the admission, “I don’t know.” This makes manual verification of every single output non-negotiable. The American Bar Association has even issued formal ethical guidance, reminding lawyers of their duty to ensure the accuracy of any AI-assisted work.

Consider this scenario: a mid-sized law firm is under pressure to meet a filing deadline for a complex corporate litigation case. An associate uses an AI assistant to research precedents, and the tool produces a summary with five supporting cases. Four are real, but one is a subtle hallucination—a perfect fit for their argument, but completely nonexistent. Without triple-checking every citation against a verified legal database like Westlaw or LexisNexis, that fake case could easily end up in a court filing. The result would be immediate damage to the case, potential sanctions for the firm, and a severe blow to their professional reputation. This is the risk that keeps managing partners up at night.
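The verification step in that scenario can be sketched in a few lines of Python. This is purely illustrative: the verified set here stands in for a trusted database such as Westlaw or LexisNexis, and one citation (“Harlow v. Brentwood Industries”) is deliberately fictitious to play the role of the hallucination.

```python
# Illustrative sketch: flag AI-suggested citations that do not appear in a
# verified set of known-good cases. A real workflow would query a trusted
# legal database (e.g., Westlaw or LexisNexis), not a hard-coded set.

def flag_unverified_citations(ai_citations, verified_cases):
    """Return the citations that could not be verified and therefore
    must be checked by a human before they go into any filing."""
    verified = {c.strip().lower() for c in verified_cases}
    return [c for c in ai_citations if c.strip().lower() not in verified]

# Four real Confrontation Clause cases plus one invented by the model.
suggested = [
    "Smith v. Arizona",
    "Crawford v. Washington",
    "Davis v. Washington",
    "Melendez-Diaz v. Massachusetts",
    "Harlow v. Brentwood Industries",  # hypothetical: does not exist
]
known_good = {
    "smith v. arizona",
    "crawford v. washington",
    "davis v. washington",
    "melendez-diaz v. massachusetts",
}

print(flag_unverified_citations(suggested, known_good))
```

Even a crude check like this catches the fifth citation, but it only works if the reference set itself is authoritative, which is exactly why the final check always belongs to a human reviewing a verified database.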

How Can Lawyers and Judges Use AI Safely?

Safe and effective use of AI in the legal field is possible, but it requires a fundamental shift in mindset: treat AI as a brilliant but unreliable research assistant, not as an autonomous legal expert. The human lawyer must always be the final arbiter of accuracy, strategy, and ethical compliance. In practice, what works is confining AI to tasks where its strengths—pattern recognition and text generation—shine and its weaknesses are less critical. Here are a few practical and relatively safe applications for legal professionals today:

  • Document Review and eDiscovery: AI can analyze millions of documents in a fraction of the time it would take a human team. It can identify relevant keywords, concepts, and patterns in discovery materials, helping lawyers pinpoint crucial evidence much faster.
  • Legal Research Brainstorming: As demonstrated by Adam Unikowsky, AI can be a powerful tool for exploring legal theories. You can use it to find arguments related to a specific doctrine, summarize different judicial interpretations, or play devil’s advocate to test the strength of your own position.
  • Drafting Routine Documents: AI can generate first drafts of standard contracts, client communication letters, or internal memos. This saves time, but the key is that a qualified lawyer must review, edit, and approve every word before it is used.

The core principle is to never delegate responsibility to the machine. The lawyer’s professional judgment is irreplaceable. You can use AI to summarize a deposition, but you cannot trust it to tell you if the witness was credible. You can ask it to draft a clause, but you cannot rely on it to ensure that clause is enforceable in your jurisdiction. Whichever model you choose, the same rules of verification apply to all of them.


What Is the Future of AI in the Legal Profession?

The future of AI in law will not be a sudden revolution but a gradual, cautious integration. The industry is moving away from general-purpose models like the public version of ChatGPT toward specialized, vertically-trained AI platforms. These legal-specific tools are trained on curated and verified datasets of case law, statutes, and legal scholarship, which significantly reduces the risk of hallucinations and ensures the information is relevant to a specific jurisdiction.

U.S. Chief Justice John Roberts acknowledged this potential in a 2023 report, highlighting that AI could one day expand legal services to people who cannot afford a human lawyer. However, he also reassured his colleagues that human judges, with their capacity for judgment, empathy, and discretion, would not become obsolete. The consensus is that AI will augment, not replace, legal professionals. It will handle the tedious, data-intensive tasks, freeing up lawyers to focus on strategy, client counsel, and courtroom advocacy.

Moving forward, the development of clear industry-wide regulations and ethical guidelines will be paramount. Legal bodies are already working on frameworks for the responsible use of AI, covering issues like client confidentiality, data security, and billing for AI-assisted work. The evolution of models may eventually lead to systems that, as some predict for GPT-5, will automatically choose the best AI for your task, perhaps selecting a legally-trained model for one query and a creative model for another. Until then, the responsibility remains squarely on the human user.

Justice Kagan’s praise for Claude does not erase AI’s flaws, but it does mark a turning point. It legitimizes the idea that these tools, when used for sophisticated analysis rather than rote fact-checking, have a serious role to play in the legal profession. For lawyers and legal teams, the immediate action is not to start drafting motions with AI. Instead, it is to begin controlled experiments on non-critical tasks—summarize a recent ruling, brainstorm arguments for a hypothetical case, or analyze a public document. This hands-on, low-stakes experience is the only way to build the skills needed to use these powerful tools responsibly.


FAQ

Can AI replace lawyers or judges?

No, AI is not expected to replace lawyers or judges. It is best viewed as a tool to assist with tasks like research and document analysis. Critical thinking, ethical judgment, and client advocacy remain uniquely human skills essential to the legal profession.

What is an AI ‘hallucination’ in a legal context?

An AI hallucination is when the model generates false information but presents it as fact. In law, this can manifest as citing nonexistent court cases, misstating legal statutes, or fabricating quotes from judicial opinions, which poses a serious risk to legal work.

Which AI is better for legal work, Claude or ChatGPT?

Both models have risks of hallucination. However, Claude models, particularly Claude 3.5 Sonnet, often have larger context windows, making them more effective at analyzing long and complex legal documents like contracts or trial transcripts. Verification of all outputs is essential regardless of the tool used.

Are there AI tools built specifically for lawyers?

Yes, several legal tech companies are developing specialized AI platforms. These tools are often trained on curated, verified legal databases to reduce the risk of hallucinations and provide more accurate, jurisdiction-specific information than general-purpose chatbots.