AI Ethics
How to Supervise AI Like You’d Supervise a Junior Associate
You wouldn’t sign a brief you hadn’t read, whether it was drafted by a first-year or by software. ABA Model Rules 5.1 and 5.3 say the duty to supervise AI is already yours.

Computational scientist with a Ph.D. from UC Irvine and peer-reviewed research in NLP, deep learning, and large-scale data modeling. Over a decade of experience building systems that process complex document sets at scale. Founded Hintyr to bring defensible AI workflows to litigation teams navigating document review, redaction, and production.
Key Takeaways
- ABA Formal Opinion 512 (2024) holds that Rules 5.1 and 5.3 require lawyers to supervise AI with the same rigor they apply to human subordinates. This isn’t new law. It’s old law applied to new tools.
- Over 1,200 legal decisions worldwide have addressed AI-fabricated content in court filings, with penalties escalating from fines to license suspensions, case disqualification, and bar referrals.
- Across multiple surveys, 79-92% of legal professionals use AI, but only 9% of firms have written, enforced AI policies. That gap is a Rule 5.1(a) problem.
- 90% of cases involving AI-tainted filings came from solo or small practices (Stanford Cyberlaw, 2025). This risk is not theoretical, and it is not evenly distributed.
- Rule 5.3’s 2012 title change from “Nonlawyer Assistants” to “Nonlawyer Assistance” is interpreted by ABA Opinion 512 and multiple state bars to cover technological tools, including AI, not just people.
- The duty of competence under Rule 1.1 may soon require AI use, not just permit it. Lawyers who refuse to learn AI risk the same professional exposure as those who misuse it.

Most Firms Use AI. Almost None Supervise It.
The surveys don’t measure the same thing, but they point in the same direction.
The Clio 2025 Legal Trends Report found that 79% of legal professionals use AI. The Wolters Kluwer 2026 Future Ready Lawyer Survey puts that figure at 92%. The 8am 2026 Legal Industry Report, surveying over 1,300 legal professionals, found 69% using generative AI for work, more than double the 31% reported the prior year.
Now the other side. Only 9% of firms have a written, actively enforced AI policy (8am 2026). 43% have no policy and no plans to create one. 54% provide no AI training and plan none.
The gap between individual adoption and institutional oversight is where the professional responsibility problem lives. Rule 5.1(a) requires partners and managing lawyers to establish “measures giving reasonable assurance” that everyone at the firm follows the Rules of Professional Conduct. When the vast majority of your team uses AI but your firm has no written guidance on how, you’ve got a 5.1(a) problem whether you know it or not.
And the underground AI issue makes things worse. Axiom Law’s 2024 survey of 300 in-house counsel found that 83% use AI tools their company didn’t provide, and 81% acknowledged using tools that haven’t been formally approved. Tomasz Zalewski, writing in the Wolters Kluwer Future Ready Lawyer report, put it directly: “If they do not get secure tools, they will have ‘shadow AI.’ Bans and restrictions do not work.”
Here’s what makes this specifically dangerous for smaller firms. A Stanford Cyberlaw study from October 2025 analyzed 114 attorney cases involving AI-tainted filings and found that 90% came from solo or small practices. Solo practitioners and small firms aren’t just using AI without supervision. They’re the ones getting caught. For more on the financial and professional consequences of these failures, see our analysis of AI malpractice risk for lawyers.
Many firms handle AI responsibly through existing quality control, partner review of filings, and cite-checking workflows. A written policy is not the only way to satisfy Rule 5.1(a), but it is the easiest to defend when a disciplinary committee asks what measures you had in place. The question isn’t whether your firm has had a problem yet. It’s whether your current process would hold up to an inquiry into what “reasonable efforts” you made.
Your Duty to Supervise AI Under Rules 5.1 and 5.3
You already know Rules 5.1 and 5.3. You studied them for the MPRE. But you may not have read them recently through the lens of AI, and the ABA’s 2024 opinion makes the connection explicit enough that it’s worth revisiting.
Rule 5.1(a) places a structural duty on partners and managing lawyers: you “shall make reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance that all lawyers in the firm conform to the Rules of Professional Conduct.” ABA Formal Opinion 512 maps it directly onto AI. The opinion states that “managerial lawyers must establish clear policies regarding the law firm’s permissible use of GAI.” If you’re a partner and your firm doesn’t have an AI use policy, you may already be out of step with this duty.
(A note on authority: ABA Formal Opinions are advisory, not binding in any jurisdiction. But every state bar that has addressed AI has reached the same conclusion, and courts are citing Opinion 512 as the benchmark. The consensus is real even if the opinion itself isn’t enforceable.)
Rule 5.1(b) targets the day-to-day: if you supervise other lawyers, you “shall make reasonable efforts” to ensure they follow the Rules when using AI tools. That means knowing whether your associates are running drafts through ChatGPT and whether they’re verifying the output.
Rule 5.3 is where things get interesting. In 2012, the ABA changed the rule’s title from “Responsibilities Regarding Nonlawyer Assistants” to “Responsibilities Regarding Nonlawyer Assistance.” One word. But it broadened the rule’s scope from people to processes, a change interpreted by ABA Opinion 512 and multiple state bars to cover technological tools, including AI. Opinion 512 runs with that change, holding that lawyers who rely on generative AI “risk many of the same perils as those who have relied on inexperienced or overconfident nonlawyer assistants.” And the training obligation is specific: firms must ensure nonlawyers understand “the basics of GAI technology, the capabilities and limitations of the tools, ethical issues in use of GAI and best practices for secure data handling.”
Opinion 512 also incorporates the outsourcing analysis from ABA Formal Opinion 08-451 (2008). That earlier opinion held that a lawyer “may outsource legal or nonlegal support services provided the lawyer remains ultimately responsible for rendering competent legal services.” The vendor due diligence factors from that opinion (reference checks, security policies, confidentiality agreements) now apply to AI tools too.
So what do “reasonable efforts” look like in practice? The standard is proportional. A five-attorney firm probably doesn’t need an AI oversight committee with a written charter. But it does need a clear policy, even a one-pager, on which tools are approved, what data can go into them, and who checks the output. A fifty-attorney firm likely needs designated oversight, training sessions, and documented approval processes. The common thread: someone has to own this. For more on how AI tools interact with confidentiality protections, see our analysis of AI and attorney-client privilege risk.
Why “Supervise It Like a Junior Associate” Works as a Framework
The idea of treating AI like a junior team member didn’t start as a blog post headline. It comes from the ethics opinions themselves.
The D.C. Bar, in Ethics Opinion 388 (2024), offered the most memorable framing: AI is “an omniscient, eager-to-please intern who sometimes lies to you.” A British Institute of International and Comparative Law report advises lawyers to “treat AI tools like young associates,” noting they “might be a fantastic asset, but lawyers need to invest, supervise, verify, have a little bit of scepticism.” DLA Piper describes AI as a “subordinate” requiring “direction and supervision.” And Professor Michael Murray’s scholarship frames AI as a “staff attorney” that can be valuable “if used in appropriate situations and under constant supervision by human lawyers.”
“An omniscient, eager-to-please intern who sometimes lies to you.”
– D.C. Bar Ethics Opinion 388 (2024)
The analogy works because it maps onto instincts you already have. You know what it means to supervise a first-year. You review work product before it goes out. You check the reasoning, not just the conclusion. You verify citations. You ask follow-up questions when something feels off. And you make the final call on strategy and judgment. None of that changes when the work product comes from software instead of a person.
But the analogy has limits, and those limits matter. A junior associate learns from correction; the next memo improves because of your feedback. Most AI tools don’t work that way. They won’t internalize your firm’s standards over time or develop professional judgment through mentorship. AI also operates at a scale that makes full review harder. A first-year might draft three memos a week. An AI tool can produce thirty in a day.
And that’s precisely why the analogy is useful in the direction that matters: AI requires more supervision than a junior associate, not less. It can’t learn, can’t self-correct, and can’t flag its own uncertainty. When a first-year doesn’t know an answer, they ask you. When AI doesn’t know an answer, it makes one up. Stanford HAI research found that general-purpose chatbots hallucinated 58% to 82% of the time on legal queries.
You’d never let a first-year file a brief you hadn’t read. Same standard.
What Courts Are Doing to Lawyers Who Don’t Verify AI Output
In early 2023, attorney Steven Schwartz cited six fabricated judicial opinions in a brief filed in Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). He used ChatGPT for legal research and never checked whether the cases existed. The attorney of record, Peter LoDuca, signed the brief without reading a single cited case. Judge P. Kevin Castel imposed a $5,000 penalty. The fine was modest. The reputational damage was not.
Then the delegation chains got longer. In Mezu v. Mezu, No. 361, Sept. Term 2025 (Md. App. Ct.), an attorney admitted he never read the cases cited in his brief. He had relied on his law clerk, a nonlawyer, who also never read them. The law clerk used ChatGPT to search for relevant cases, received a generated list that included nonexistent citations, then searched Google, VLex, and other platforms to extract what looked like proper citations. Attorney to nonlawyer to AI to nobody verifying anything. The court referred the attorney to the Attorney Grievance Commission. This is the cascading failure Rule 5.3 was written to prevent.
Having a policy doesn’t fix this if the people enforcing it break it themselves. In Johnson v. Dunn, No. 2:21-cv-1701 (N.D. Ala. July 2025), Butler Snow, a large, well-regarded firm, had established AI policies including practice group leader approval requirements. But an attorney on the team inserted ChatGPT-generated citations into a motion without verification. The court publicly reprimanded the attorneys, disqualified them from the case, and referred them to the Alabama State Bar, declaring that fines and public embarrassment were not effective deterrents and that substantially greater accountability was needed. For a breakdown of the 300-plus judicial AI standing orders now in effect, see our guide to AI disclosure rules.
Proprietary tools don’t solve the problem either. In Wadsworth v. Walmart Inc., 348 F.R.D. 489 (D. Wyo. 2025), Morgan & Morgan attorney Rudwin Ayala used the firm’s own AI tool, “MX2.law.” Eight of nine cited cases did not exist. The supervising attorney directed the work but never reviewed it. The court sanctioned three attorneys a total of $5,000, with Ayala receiving the largest share at $3,000 and having his pro hac vice admission revoked.
These cases are sometimes dismissed as “bad lawyer” problems, not AI problems. That’s partly right, and that’s the point. Every sanctioned attorney failed at something basic: reading their own filing before submitting it. But AI makes this failure easier to commit, harder to catch, and faster to scale. Damien Charlotin’s global database now tracks over 1,200 cases. Among the millions of filings submitted annually, that is a small fraction, but the trend is accelerating: penalties have escalated from warnings (2023) to fines, and now to disqualification, bar referrals, and license suspension. Courts have made clear that the standard isn’t changing. The consequences are.
A Daily AI Supervision Protocol for Your Firm
You already know how to supervise unreliable work product. You’ve done it with every associate who walked through your door. Here are seven steps that map your existing oversight instincts onto AI, scaled to firm size.
1. Approve tools before anyone uses them. Rule 5.1(a) requires firm-level measures. That starts with knowing what’s in use. Audit every AI tool at your firm, including the ones nobody approved. If your associates are using Claude, Gemini, ChatGPT, and browser extensions you’ve never heard of, find that out before you write a policy describing what should happen. For firms handling document review, tools designed to keep a human attorney in the supervision loop reduce the verification burden compared to general-purpose chatbots.
2. Set clear boundaries on what AI handles alone. Not all tasks carry the same risk. Using AI to brainstorm case theory or outline a memo is different from using it to draft a motion for filing. Opinion 512 itself acknowledges this: generating ideas “may require less independent verification” than legal research or drafting. Define which tasks need full review and which need a lighter touch.
3. Require upward disclosure. The American Inns of Court sample policy captures this well: “Attorneys and non-attorney staff must affirmatively advise all supervising attorneys when GenAI will be or has been used in the creation of work product.” If your associate used AI and doesn’t tell you, you can’t supervise what you don’t know about. Rule 5.3 makes this your problem, not theirs.
4. Disclose AI use to clients. ABA Formal Opinion 512 and state bar guidance increasingly expect lawyers to inform clients when AI is used on their matters, particularly where confidential information enters AI systems. NYC Bar Formal Opinion 2025-6 addresses this directly. Your engagement letter should address AI use explicitly, not through boilerplate.
5. Check the citations. Every time. No exceptions for minor filings. No exceptions for “it looked right.” The verification obligation under Rule 1.1 is context-specific, but for anything going to a court or a client, every case citation, quotation, and factual claim needs independent confirmation. Some review tools now surface clickable source references alongside every AI response, letting you verify a claim by opening the cited document at the exact page rather than hunting for it manually. This is the step where every sanctioned attorney failed.
6. Document the process. Keep records of who used what AI tool, on which matter, and what verification was performed. This isn’t optional caution. It’s the audit trail a disciplinary committee will ask for. Hintyr, for example, was built for attorney-supervised AI review: it logs every AI-assisted action so the supervising attorney can see exactly what the AI flagged, what was reviewed, and what decisions were made, and lets you export your review notes to PDF for clients, courts, or co-counsel. Tools with built-in validation workflows that document your review decisions make this step lighter than maintaining a manual log.
7. Train everyone. Then train them again. Opinion 512 requires that nonlawyers understand “the basics of GAI technology, the capabilities and limitations of the tools, ethical issues in use of GAI and best practices for secure data handling.” A 30-minute training session is fine for a start. But schedule it annually, because both the tools and the rules keep changing.
What “reasonable efforts” look like scales with your firm. A solo practitioner’s version might be a one-page policy, consistent personal verification habits, and awareness of which tools handle data safely. A 50-attorney firm likely needs an AI committee, formal training programs, and tool-approval processes. The standard is proportional, not absolute. For more on the confidentiality protections your AI policy must address, see our guide to ethical walls and AI tools.
Why Refusing to Learn AI Is Its Own Risk
The supervision duty runs in two directions. Misuse AI, and you face sanctions. Ignore AI, and you may face a different kind of professional exposure.
Dean Andrew Perlman of Suffolk University Law School has made the most provocative version of this argument: “The duty of competence may eventually require lawyers’ use of generative AI. The technology is likely to become so important to the delivery of legal services that lawyers who fail to use it will be considered as incompetent as lawyers today who do not know how to use computers, email, or online legal research tools.”
That’s not settled law. It’s an argument from one of the profession’s most credentialed ethics scholars, and it hasn’t been adopted by any state bar. But the direction is clear enough to take seriously.
Comment [8] to Rule 1.1, amended in 2012, says competent practice includes understanding “the benefits and risks associated with relevant technology.” Forty states, the District of Columbia, and Puerto Rico have adopted that language. The ABA’s 2025 Task Force report concluded that AI has moved “from experiment to infrastructure” for the legal profession.
And the insurance market is already pricing the risk. Some malpractice carriers are adding AI-specific sublimits or exclusions to professional liability policies. Berkley Insurance introduced what it calls an “Absolute” AI exclusion, which bars coverage for claims related to “any actual or alleged use, deployment, or development of Artificial Intelligence.” If your policy has an AI exclusion and you’re using AI without documented supervision protocols, you may be uninsured for the exact scenario these rules are designed to prevent.
The duty isn’t to avoid AI. It’s to supervise it. The same way you’d supervise a junior associate who’s smart, fast, confident, and occasionally fabricates the law.
Disclaimer: This post is for informational purposes only and does not constitute legal advice. AI supervision obligations vary by jurisdiction. Consult your state bar’s ethics guidance and, if needed, a legal ethics attorney for advice specific to your practice.
Build AI Supervision Into Your Workflow
You already supervise your team’s work. AI should be no different. Hintyr gives solo and small firm attorneys a controlled, auditable environment where AI agents work under your supervision, not in place of it, so you can meet your duties under Rules 5.1 and 5.3 without adding hours to your day.