AI Malpractice Risk for Lawyers: Sanctions, Insurance Gaps, and the New Standard of Care

In 2023, a chatbot hallucination cost two lawyers $5,000. Last summer, it cost three attorneys their case, their reputation, and a bar referral.

Alexander Cohan, Ph.D.

Computational scientist with a Ph.D. from UC Irvine and peer-reviewed research in NLP, deep learning, and large-scale data modeling. Over a decade of experience building systems that process complex document sets at scale. Founded Hintyr to bring defensible AI workflows to litigation teams navigating document review, redaction, and production.

Key Takeaways

  • Courts have sanctioned dozens of attorneys since 2023 for unverified AI-generated citations, with penalties escalating from fines to suspensions and bar referrals.
  • Not using AI may eventually become a competence issue, but no court has sanctioned a lawyer for this yet.
  • Malpractice insurance coverage for AI-related claims is narrowing. Some carriers now exclude AI claims entirely.
  • Document your AI verification process. Having a policy matters, but enforcing it matters more.
  • Every AI-generated citation must be read in full, verified against its original source, and confirmed as still good law.

AI-generated citations have led to escalating court sanctions since 2023.

From Fines to Disqualification

AI malpractice risk for lawyers stopped being hypothetical in June 2023, when attorneys Steven Schwartz and Peter LoDuca submitted six fabricated judicial opinions to the Southern District of New York in a personal injury case against Avianca Airlines. All six were invented by ChatGPT. When opposing counsel flagged the fake cases, Schwartz doubled down. He asked ChatGPT whether the citations were real. It told him they “indeed exist” and “can be found in reputable legal databases such as LexisNexis and Westlaw.” He filed an affidavit saying as much.

Judge P. Kevin Castel sanctioned both attorneys $5,000 and required them to write letters to every judge whose name appeared in the fabricated opinions. The dollar amount was modest. The precedent was not.

What happened next moved fast.

Five months later in Colorado, attorney Zachariah Crabill used ChatGPT to draft a motion and didn’t verify the citations. That alone might have drawn a fine. But Crabill panicked. He falsely blamed the errors on a legal intern.

The morning of his hearing, he texted his paralegal: “I think all of my case cites from ChatGPT are garbage... I have no idea what to do.” Asked later whether he’d double-checked, his answer was blunt: “No. Like an idiot.” Colorado’s Office of the Presiding Disciplinary Judge suspended him for a year and a day, with 90 days served, making him the first attorney suspended for AI misuse. And the dishonesty is what made the difference.

By 2025, the pattern was clear, and it wasn’t limited to solo practitioners cutting corners. In Wadsworth v. Walmart, three attorneys from Morgan & Morgan (the 42nd-largest firm in the country) submitted motions citing nine cases. Eight didn’t exist. The citations came from MX2.law, the firm’s own proprietary AI platform. Not ChatGPT. Not a free tool. Their own system.

Judge Kelly Rankin imposed $5,000 in sanctions and revoked lead counsel’s pro hac vice admission (temporary permission to appear before the court), noting that “blind reliance on another attorney can be an improper delegation of this duty and a violation of Rule 11.”

Then came Johnson v. Dunn, and the tone shifted entirely.

Three attorneys from Butler Snow LLP, a firm with over 400 lawyers and its own AI Committee, submitted five fabricated ChatGPT citations in a prison litigation case. The firm had internal AI policies. It had issued warnings. None of it mattered. Judge Anna Manasco wrote a 51-page opinion imposing the most severe AI-related sanctions to date: disqualification of all three attorneys from the case, a public reprimand requiring disclosure to clients, colleagues, and judges across every pending matter, and referral to state bar authorities for discipline.

And in March 2026, the Sixth Circuit added an exclamation point. In Whiting v. City of Athens, two Tennessee attorneys submitted briefs containing over two dozen fake citations and misrepresentations. When the court raised concerns, they accused the judges of engaging in a “vast conspiracy” to harass them. The court’s response: $15,000 in punitive fines per attorney, full reimbursement of opposing counsel’s fees, double costs, and referral for disciplinary review. (The court didn’t attribute the fabricated citations to AI specifically, but the pattern of fictitious authorities mirrors every documented hallucination case.)

The common thread in every severe sanction isn’t AI use itself. It’s what happened after the hallucination: silence, cover-ups, blame-shifting, defiance. Courts have shown patience for honest mistakes and genuine unfamiliarity with new tools. They’ve shown none for lawyers who can’t be bothered to read their own briefs.

Malpractice If You Do, Malpractice If You Don’t

The sanctions cases all assume AI use gone wrong. But what happens when the risk runs the other direction?

In January 2026, Judge Jesse Furman of the Southern District of New York said something at an NYSBA panel that flipped the AI malpractice conversation: “I heard somebody say employers are risking malpractice by relying too much on AI. I think there may come a point where it’s the opposite, where you’re committing malpractice if you don’t incorporate AI into your practice.”

Dean Andrew Perlman of Suffolk University Law School has argued that lawyers who fail to adopt AI “will be considered as incompetent as lawyers today who do not know how to use computers, email, or online legal research tools.” No attorney has been sanctioned for failing to use AI. Not yet. But the prediction has a historical pattern behind it.

Consider e-discovery. In 2012, Magistrate Judge Andrew Peck approved technology-assisted review in Da Silva Moore v. Publicis Groupe, calling it the first such order. Three years later, in Rio Tinto v. Vale, he elevated TAR to “black letter law.” By 2016, in Hyles v. New York City, he suggested that “there may come a time when TAR is so widely used that it might be unreasonable for a party to decline to use TAR.” That progression took four years. Generative AI is on a similar trajectory, just faster and with broader implications.

The analogy isn’t perfect. Courts endorsed TAR without ever mandating it, and no court has gone even that far with generative AI. The parallel is in trajectory, not mechanism. But the direction is the same.

For most midsize litigation firms today, the non-use risk is still theoretical. The misuse risk is immediate and documented. But the gap between “theoretical” and “actionable” is closing.

Then there’s the question of who pays when it goes wrong.

Most lawyers’ professional liability policies don’t explicitly exclude AI-related claims yet. Mark Bassingthwaighte, Risk Manager at ALPS (the nation’s largest direct writer of legal malpractice insurance), confirmed as much in 2025. But he added a warning: if a lawyer blindly accepts AI output without any independent review, “an insurer could argue that no professional service was ever provided because the lawyer simply chose to blindly rely on third-party technology. No professional service means no coverage.”

The market is shifting. Berkley Insurance introduced what it called an “absolute” AI exclusion for D&O, E&O, and fiduciary liability policies. Some firms renewed with affirmative AI coverage conditioned on documented governance policies. Others encountered exclusions that removed AI-related claims entirely. The direction is clear even if the timeline isn’t.

So the bind looks like this: use AI carelessly and you risk sanctions, bar referral, and a coverage gap. Avoid AI entirely and you may face competence challenges, fee disputes, and competitive disadvantage as the standard of care shifts. Neither extreme is safe.

What the Rules Actually Require

ABA Formal Opinion 512, issued in July 2024, is the closest thing to a national standard for lawyer AI ethics obligations. It’s 15 pages long. Here’s what matters.

On competence, the opinion is direct: “Lawyers must understand the capacity and limitations of GAI and periodically update that understanding.” It warns that “a lawyer’s reliance on a GAI tool’s output, without an appropriate degree of independent verification or review of its output, could violate the duty to provide competent representation.”

On supervision, it requires firms to “establish clear policies regarding the law firm’s use of generative AI” and provide training. The opinion treats AI tools as nonlawyer assistants requiring supervision under Model Rules 5.1 and 5.3. Think of it this way: you wouldn’t let a first-year associate file a brief without review. The ABA says the same standard applies to AI.

On fees, the opinion includes a detail that should change how you think about billing. If AI reduces a three-hour task to 15 minutes, you can only bill for 15 minutes on an hourly basis. And you can’t charge clients for learning how to use the tool.

Forty states, D.C., and Puerto Rico have now adopted the technology competence duty in Comment 8 to Model Rule 1.1. Hundreds of federal judges have adopted standing orders or local rules governing AI use in their courtrooms. Some require disclosure. Others mandate certification. A few ban AI entirely. The patchwork creates real compliance headaches for firms practicing across jurisdictions.

What about the tools themselves? A Stanford HAI/RegLab study, the first preregistered empirical evaluation of legal AI tools, found that general-purpose chatbots like ChatGPT and Claude hallucinated 58 to 82 percent of the time on legal queries. Legal-specific tools performed better but not well: LexisNexis AI hallucinated 17 percent of the time, and Thomson Reuters products hallucinated between 17 and 33 percent depending on the query type.

Not even close to reliable.

Here’s the governance gap that concerns managing partners: while 80 percent of AmLaw 100 firms have established AI governance boards, only 44 percent of law firms overall have formal AI policies in place. If you’re running a 10-attorney firm, the odds are against you having a policy. And the absence of a policy is itself a risk factor, both for sanctions and for insurance coverage.

Building a Defensible AI Practice

The verification framework fits on a notecard. Following it consistently is the hard part.

Every AI-generated citation needs to be checked against its original source. Not merely confirmed to exist, but read in full. Does the case say what you’re claiming it says? Is it still good law? Does it apply in your jurisdiction? Shepardize or KeyCite it to confirm the citation is still valid law. ABA Opinion 512 says to treat AI output with the scrutiny you’d apply to work from “an inexperienced or overconfident nonlawyer assistant.” That’s a useful mental model.

Choose your tools carefully. The Stanford data shows a massive gap between general-purpose chatbots (58 to 82 percent hallucination) and purpose-built legal AI tools (17 to 33 percent). That difference matters. ChatGPT is not a legal research platform, and treating it like one is how most sanctions cases start.

Write it down. Document which AI tools you used, what prompts you entered, and what review steps you took. If a court ever questions your process, the firms with documentation will be the ones that can demonstrate reasonable care. The firms without it will be explaining why they have nothing to show.
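
What does that look like in practice? Here is a minimal log entry, offered as a hypothetical sketch. The fields are our suggestion, not a bar or court requirement:

  Matter: [client/matter number]
  Date: [date of AI use]
  Tool: [AI tool and version]
  Task: [what the tool was asked to do, including the prompt]
  Reviewer: [attorney who verified the output]
  Verification: [each citation read in full and Shepardized/KeyCited: yes/no]
  Corrections: [changes made before filing]

A one-page form like this, filled out consistently, is worth more than a polished governance memo nobody follows.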

If your firm doesn’t have an AI policy yet, start simple. Some firms use a traffic-light system. Red: prohibited uses, like inputting confidential client data into a public AI tool. Yellow: approved with restrictions, like enterprise AI with mandatory human review. Green: vetted tools used on low-risk tasks. You don’t need a governance board to draw those lines. The Johnson v. Dunn case shows that having a policy is not enough. It has to be enforced.

For any AI-assisted privilege review, get a FRE 502(d) protection order from the court. Unlike party agreements under 502(e), which bind only the parties, a 502(d) order binds all federal and state proceedings. It’s your safety net if privileged material slips through an automated review.

Be careful what you feed into AI tools. If you input privileged client communications into a consumer AI platform, you may waive privilege over both the input and the original documents. A federal court recently addressed this exact issue, ruling that documents created with consumer AI tools are not protected by attorney-client privilege. We covered the full implications in our post on privilege risks from AI use. Enterprise tools with contractual confidentiality obligations are safer, but the legal landscape is still evolving.

Be honest with your clients about how you’re using AI. The ethics opinions all point in this direction. Florida’s Advisory Ethics Opinion 24-1 requires informed consent before using AI tools that process confidential information. Texas Opinion 705 requires billing transparency. The trend is toward disclosure, and getting ahead of it costs nothing.

And one more thing, since we should be honest with you too: we build AI document review tools at Hintyr, so we have a stake in this conversation. Every claim our tools generate links back to the source document it came from, so you can verify it before it goes into a filing. We’ve tried to ground every claim in this post with linked, verifiable sources. Read the primary materials and draw your own conclusions.

The standard of care is forming right now, whether your firm has a policy in place or not.

Disclaimer: This blog post is published by Hintyr for informational purposes only and does not constitute legal advice. The discussion of ethics rules, case law, and insurance coverage is general in nature and may not reflect the rules applicable in your jurisdiction. Attorneys should consult their state bar’s ethics opinions, their malpractice carrier, and qualified legal counsel before making decisions about AI adoption or compliance. No attorney-client relationship is created by reading this post.

Review documents with AI you can verify.

Hintyr builds document review tools designed around the verification workflows described in this post. See how verification-first document review works.