
AI Disclosure in Court: 300+ Rules You Need to Track

In May 2023, one federal judge required AI disclosure. By March 2026, more than 300 court directives have followed, and no two match. Here’s what triggers disclosure, what doesn’t, and what happens when you get it wrong.

Alexander Cohan, Ph.D.


Computational scientist with a Ph.D. from UC Irvine and peer-reviewed research in NLP, deep learning, and large-scale data modeling. Over a decade of experience building systems that process complex document sets at scale. Founded Hintyr to bring defensible AI workflows to litigation teams navigating document review, redaction, and production.

Hintyr builds AI document review software for law firms. We have a commercial interest in this topic. Every claim in this article is independently sourced.

More than 300 court directives now require AI disclosure in filings, and no two are alike.

The Certification You Didn’t File

The Compliance Blind Spot

You ran the research through Westlaw AI. You verified the citations. The brief reads well, the argument is sound, and you file it on time. No AI disclosure attached.

Under dozens of federal standing orders, you may have just violated a requirement you didn’t know existed.

Three years ago, zero rules existed. Now federal and state judges across the country have issued more than 300 court directives (standing orders, local rules, and judicial requirements tracked by Ropes & Gray and Bloomberg Law), each with its own scope and its own definition of what counts as “AI.” Some target generative AI only. Some cover any AI tool, including the research platforms you already pay for. A few ban AI-drafted filings outright. And the consequences for guessing wrong have moved well past embarrassment.

The question for law firms has shifted. It’s no longer “should we have an AI policy?” It’s “how many of those 300+ orders apply to our next filing?”

Because here’s what the last three years have shown: courts aren’t punishing lawyers for using AI. They’re punishing lawyers who use AI without verification or disclosure, and without the basic professional judgment that Rule 11 has always demanded. The fines started small. They aren’t small anymore.

What follows is the full picture: six sanctions cases that trace the escalation from a $5,000 fine to attorney disqualification, a breakdown of the 300+ standing orders and what actually triggers them, the federal and state rules coming this year, and five steps your firm should take this week. If you practice in more than one jurisdiction, the compliance math gets complicated fast.

What Are the Penalties for Not Disclosing AI in Court?

From $5K to Disqualification

June 2023. Mata v. Avianca, S.D.N.Y. Attorneys Steven Schwartz and Peter LoDuca submitted six fabricated cases generated by ChatGPT in a personal injury suit. Schwartz later testified he “was operating under the false perception that [ChatGPT] could not possibly be fabricating cases on its own.”

Judge P. Kevin Castel imposed a $5,000 penalty and ordered letters to every judge whose name appeared in the fake opinions. The dollar amount barely registered. The precedent registered everywhere.

Late 2023. People v. Crabill, Colorado. Zachariah Crabill, two years out of law school, used ChatGPT to draft a motion. When the judge caught fabricated citations, Crabill blamed a legal intern. But his messages to a paralegal undercut the claim.

The messages: “I think all my cases cited from ChatGPT are garbage... I have no idea what to do.” The misrepresentation drew a one-year-and-one-day suspension (90 days served, remainder stayed upon completion of a two-year probation), making Crabill the first attorney to be suspended for AI misuse. The original error might have earned a fine. The dishonesty earned a career setback.

July 2025. Johnson v. Dunn, N.D. Ala. This is the case that changed the calculus. Three attorneys from Butler Snow, a firm with over 350 attorneys and its own internal AI policies, submitted five fabricated ChatGPT citations in a prison litigation case. Judge Anna Manasco’s sanctions order didn’t mince words: “If fines and public embarrassment were effective deterrents, there would not be so many cases to cite.” She disqualified all three attorneys from the case and referred them to their state bars. Her closing observation landed hardest: “They benefitted from repeated warnings, internal controls, and firm policies... And yet here we are.”

July 2025. ByoPlanet v. Johansson, S.D. Fla. Attorney James Martin Paul used ChatGPT to draft complaints, motions, and briefs across eight related cases. When opposing counsel flagged fabricated citations in April, Paul didn’t stop. He submitted seven more filings with hallucinated authorities, including in his response to the court’s own show-cause order.

Judge David Leibowitz found “repeated, systemic, and bad-faith misuse” and ordered Paul to pay approximately $86,000 in fees, dismissed all four federal cases, required him to attach the sanctions order to every future filing in the district for two years, and referred him to the Florida Bar. The tools didn’t fail. The process did.

February 2026. Kenosha County, Wisconsin. District Attorney Xavier Solis submitted a filing in a criminal case containing AI-generated hallucinations and failed to disclose the AI use. The court struck the filing and sanctioned Solis, reportedly making him the first prosecutor sanctioned for AI misuse in a U.S. court. Separately, the judge dismissed all 74 charges without prejudice, finding insufficient probable cause based on evidence from a preliminary hearing two years earlier. The AI sanction and the dismissal occurred in the same proceeding but on independent grounds. For the DA’s office, the combination was a public failure on two fronts.

March 2026. Whiting v. City of Athens, 6th Cir. Two attorneys submitted appellate briefs containing over two dozen fabricated or misrepresented citations, including non-existent cases and fabricated quotations from real ones. When the court confronted them, the attorneys alleged judicial bias rather than addressing the errors. The Sixth Circuit’s response: $15,000 per attorney in punitive fines, full reimbursement of opposing counsel’s fees, double costs, and disciplinary referral. The court pointedly noted that citations “however generated” must be verified. The court didn’t confirm AI was involved. It didn’t need to.

Add the Jordan v. Chicago Housing Authority sanctions from Cook County (Judge Thomas Cushing, December 2025; $10,000 against attorney Larry Mason and $49,500 against Goldberg Segalla, totaling $59,500). In March 2026, the DOJ itself became a cautionary tale when Assistant U.S. Attorney Rudy Renfer was terminated after filing a brief in Fivehouse v. Department of Defense (E.D.N.C.) containing fabricated quotes that a pro se plaintiff, a retired Air Force JAG officer, identified and reported to the court.

The picture is not uniform. Not every case has ended this severely; many courts have responded with warnings or mandatory CLE. But the direction of the most serious cases is clear: sanctions have escalated from symbolic to career-altering, from solo practitioners to AmLaw firms. They’ve reached the circuit level.

Which Courts Require AI Disclosure?

The Fault Lines

When Judge Brantley Starr of the Northern District of Texas posted the first federal AI standing order on May 30, 2023, he was alone. By mid-2024, roughly 36 orders existed across 13 states. By late 2025, the Ropes & Gray tracker counted over 300 court directives (standing orders, local rules, and general orders) across federal and state courts. Bloomberg Law’s April 2025 analysis identified 39 federal judges with standing orders; only Hawaii and Nebraska had adopted district-wide rules. Everywhere else, it’s judge by judge.

The orders cluster into three approaches.

| Approach | Example courts | What triggers disclosure |
| --- | --- | --- |
| Generative AI only | N.D. Tex. (Starr), W.D.N.C. (court-wide), D. Haw., D. Neb. | ChatGPT, Claude, Gemini; gray zone for Westlaw AI, Lexis+ AI |
| All AI tools | E.D. Pa. (Baylson) | Any AI including TAR, contract review, potentially Grammarly |
| No AI-specific rule | 5th Cir.; N.D. Ill. (Fuentes, withdrawn); Ill. Sup. Ct. | Existing Rule 11 applies; no additional certification required |

Most target generative AI specifically. Judge Starr’s order requires attorneys to certify that “no portion of any filing will be drafted by generative artificial intelligence” or that AI-drafted language was “checked for accuracy, using print reporters or traditional legal databases, by a human being.” The Western District of North Carolina’s court-wide order follows this model, explicitly carving out “standard legal research platforms like Bloomberg, Fastcase, Lexis, or Westlaw.”

A minority cover all AI tools. Judge Michael Baylson of the Eastern District of Pennsylvania requires disclosure if “any type of AI” was used in preparing a filing. That language could theoretically reach TAR (technology-assisted review) platforms, contract review tools, even Grammarly.

A third group rejects AI-specific rules entirely. The Fifth Circuit declined a proposed certification rule in June 2024 after significant attorney opposition, reasoning that existing rules already require accuracy. And Judge Fuentes of the Northern District of Illinois withdrew his own standing order after a year, calling it “no longer necessary and slightly burdensome.” The Illinois Supreme Court went further, adopting one of the most permissive stances in the country: AI use “should not be discouraged” and “is authorized provided it complies with legal and ethical standards.”

So which AI tools actually trigger disclosure? It depends on where you’re filing, and the answer falls into three tiers.

General-purpose AI (ChatGPT, Claude, Gemini) triggers disclosure under every order that mentions AI. No exceptions. No gray area.

Legal AI tools occupy messy middle ground. Most orders carve out traditional research databases. But the newer generative features in Westlaw AI-Assisted Research and Lexis+ AI use the same large language models as their general-purpose counterparts. A Stanford study by Magesh et al., published in the Journal of Empirical Legal Studies in 2025, found these retrieval-augmented systems hallucinate at rates from 17% (Lexis+ AI) to 33% (Westlaw AI-Assisted Research). Thomson Reuters and LexisNexis have both disputed the study’s methodology, and the debate over these numbers isn’t settled.

And in the K&L Gates sanctions case from the Central District of California, attorneys were sanctioned for relying on CoCounsel, Westlaw Precision, and Google Gemini without adequate verification. Purpose-built legal AI tools can still get you sanctioned.

E-discovery platforms, contract analysis tools, and grammar aids are generally outside the scope of orders targeting “generative AI.” But under Judge Baylson’s “any AI” standard, they could be in play. What about Grammarly? Most orders don’t carve it out.

This raises a legitimate question: doesn’t Rule 11 already cover all of this? In theory, yes. Every filing carries an implicit certification of accuracy regardless of how it was prepared. But the numbers tell a different story. As of October 2025, Ropes & Gray had catalogued 66 published sanctions opinions, at least 19 of them from that year alone. By March 2026, the number has grown well past that. The Charlotin database tracks nearly 1,200 cases globally involving AI hallucinations in legal filings. Rule 11 exists. It has existed for decades. And attorneys are still submitting fake citations at a rate that has no modern parallel. The standing orders exist because the existing rules, on their own, weren’t enough.

One data point should settle the disclosure question for anyone still on the fence: to date, no reported case has sanctioned an attorney for over-disclosing AI use.

What AI Disclosure Rules Are Coming?

The Rules Pipeline

If you’re hoping for a single federal standard that replaces the 300+ standing orders, you’ll be waiting a while. But the regulatory pipeline is active, and the direction is consistent: more obligation, not less.

Proposed Federal Rule of Evidence 707 would govern the admissibility of machine-generated evidence in federal court. The Advisory Committee on Evidence Rules voted 8-1 to seek public comment in May 2025. Publication followed in June, after the Standing Committee approved it. The comment period closed February 16, 2026, and the earliest possible effective date is December 1, 2027.

But Rule 707 addresses evidence admissibility, not disclosure of AI use in drafting filings. The gap between the two remains filled only by standing orders and Rule 11.

California is moving on two fronts. The Judicial Council unanimously adopted Rule 10.430 in July 2025 (effective September 1, 2025), requiring courts that use generative AI to have written policies by December 15, 2025. That rule governs courts and judicial staff, not litigants. For litigants, the California Supreme Court directed the State Bar to consider incorporating AI principles into the Rules of Professional Conduct. The State Bar’s Committee on Professional Responsibility and Conduct (COPRAC) then approved proposed amendments to Rules 1.1, 1.4, 1.6, 3.3, 5.1, and 5.3, covering competence, communication, confidentiality, candor, supervision of lawyers, and supervision of nonlawyers. The comment period runs through May 4, 2026. If adopted, California would have the most detailed state ethics rules for AI in the country.

Florida’s 11th and 17th Circuits began requiring AI disclosure in filings in early 2026 through administrative orders. The state had already amended its Rules Regulating the Florida Bar in October 2024 to address AI competence, confidentiality, candor, and supervision.

New York has the most complicated picture of any state. Proposed Part 161 would require attorneys to certify that filings don’t contain “fabricated or fictitious content generated by AI,” but the Advisory Committee deliberately chose not to require affirmative disclosure of AI use. Meanwhile, individual state court judges (Bannon, Weinmann, Hanlon, Maslow) each have their own standing orders requiring disclosure, tool identification, and specification of AI-drafted portions. And Senate Bill S2698 would codify AI disclosure in the CPLR (Civil Practice Law and Rules).

Louisiana remains the only state with a statute (Act No. 250, effective August 2025) addressing AI-generated evidence. Several other states have formed AI task forces, but none has produced binding rules yet.

Beyond disclosure, courts are now deciding whether AI use can waive privilege entirely. In February 2026, Judge Rakoff of the Southern District of New York ruled in United States v. Heppner that documents a criminal defendant created using a free AI chatbot weren’t protected by attorney-client privilege, a holding the court called one of “nationwide first impression.”

The pace and form remain uncertain, but the direction doesn’t: whether through standing orders, ethics amendments, or local rules, the obligations are increasing. And waiting for uniformity before acting is itself a risk.

What Your Firm Should Do This Week

Five Steps, Starting Now

The rules aren’t waiting for you, so your firm can’t wait for the rules. You don’t need a governance board to start. You need a one-page policy and a few changes to your filing workflow. So where do you start? Five steps, ranked by urgency.

1. Write a one-page AI use policy. Start with three categories. What’s prohibited: inputting confidential client data into public AI tools, submitting AI-generated text without human review. What’s permitted with safeguards: using enterprise AI tools for drafting, research, and summarization, with mandatory verification of every citation and factual claim. What’s unrestricted: standard spell-checking, formatting tools, traditional keyword search.

You can refine the policy over time. Having something written down is what matters now, both for internal discipline and for insurance purposes. Many law firms still don’t have a written AI policy. If yours is one of them, you’re not behind; you’re where most of the profession is. But starting now takes less time than you think.

2. Mandate human cite-checking for all AI output, including legal-specific tools. The Stanford/Magesh study found hallucination rates of 17-33% even for tools built specifically for legal research. The K&L Gates sanctions involved CoCounsel and Westlaw Precision. And the lesson is plain: no AI tool, regardless of its marketing, gets a pass on verification. Every citation in every filing needs to be read, confirmed, and Shepardized or KeyCited (verified as still-good law using Lexis or Westlaw’s citation-checking tools) by a human. Treat AI output the way ABA Opinion 512 tells you to: as work product from “an inexperienced or overconfident nonlawyer assistant.”

3. Add certification language to your filing templates. Don’t wait for a standing order in your jurisdiction. Build a standard AI disclosure paragraph into your template now. A working version: “I certify that every citation in this filing has been independently verified for accuracy against authoritative legal databases. [No generative AI was used in preparing this document. / Generative AI was used in preparing portions of this document, and all AI-generated content was reviewed for accuracy by a licensed attorney.]” Delete whichever option doesn’t apply. Filing with the certification preloaded takes ten seconds. Filing without it, in a jurisdiction that requires one, can take months to clean up.

4. Bookmark the Ropes & Gray AI Court Order Tracker. The tracker maintained by Shannon Capone Kirk and Amy Jane Longo at Ropes & Gray is the most current public resource for standing orders. It includes an interactive map, color-coded by requirement type. If you practice across jurisdictions, check it before every filing in an unfamiliar court. Bloomberg Law and Lexis also maintain trackers, but those require subscriber access.

5. Address AI use in your engagement letters. ABA Formal Opinion 512 warns that boilerplate consent in engagement letters is insufficient for AI. If your firm uses AI platforms that process client data (even enterprise systems with data protections), you need specific, informed consent. The privilege risks of AI tools make this especially urgent. Florida’s Ethics Opinion 24-1 recommends obtaining consent before using any third-party generative AI with confidential information. Texas Opinion 705 requires billing transparency when AI reduces the time spent on a task. Get ahead of this in the engagement letter, not after a client asks.

The Standard Is Forming

The Inflection Point

Three years ago, the AI disclosure question was theoretical. A curiosity. Something to discuss at CLE (continuing legal education) panels and forget about on Monday morning.

That window closed. Quietly at first, then all at once.

More than 300 court directives now require some form of AI disclosure. Sanctions have moved from four-figure fines to disqualification and bar referral. The first prosecutor has been sanctioned. The first circuit court has imposed five-figure penalties per attorney. And the empirical evidence says that even the legal-specific AI tools you’re paying for hallucinate often enough to matter.

None of this means you should stop using AI. The tools are too useful, and the competitive pressure is too real. What it means is that the standard of care around AI use is being written right now, in standing orders, sanctions opinions, and ethics amendments, and your firm needs to be on the right side of it.

Write the policy. Verify every citation. Disclose when there’s any doubt. Do it this week. The compliance cost is measured in minutes. The noncompliance cost is measured in careers.

How this article was produced: AI tools were used for research assistance and initial draft generation. All case citations, rule references, and factual claims were independently verified by the author against primary sources. An article about AI disclosure should disclose its own.

We build AI document review tools at Hintyr, so we have a stake in this conversation, and in getting disclosure right ourselves. Every claim in this post is sourced. None of the five steps above require Hintyr or any specific tool. For more on the malpractice risks of AI use in legal practice, see our companion post.

Disclaimer: This blog post is published by Hintyr for informational purposes only and does not constitute legal advice. The discussion of ethics rules, case law, and standing orders is general in nature and may not reflect the rules applicable in your jurisdiction. Attorneys should consult their state bar’s ethics opinions and qualified legal counsel before making AI compliance decisions. No attorney-client relationship is created by reading this post.

Review documents with AI you can verify.

Hintyr is document review software designed around the verification step. Every AI output links to its source document, so you can confirm before you file.