AI Ethics
Ethical Walls in the Age of AI
A solo attorney toggles between two adverse matters in the same ChatGPT session. The AI remembers everything from both sides. That’s not a hypothetical. It’s a Tuesday.

Computational scientist with a Ph.D. from UC Irvine and peer-reviewed research in NLP, deep learning, and large-scale data modeling. Over a decade of experience building systems that process complex document sets at scale. Founded Hintyr to bring defensible AI workflows to litigation teams navigating document review, redaction, and production.
Key Takeaways
- AI tools with persistent memory can silently leak confidential information across client matters.
- Firms without written AI use policies and audit trails are building discoverable evidence against themselves.
- ABA Formal Opinion 512 flags AI conflict risks but doesn’t prescribe specific technical solutions.
- Enterprise information barrier platforms like Intapp Walls remain priced far beyond the reach of most small firms.
- Affordable partial solutions exist through Smokeball, Clio, and ChatGPT Enterprise workspace isolation.
- Every AI interaction should be scoped to one client matter as a hard security boundary.

How AI Tools Break Traditional Ethical Walls in Law Firms
You’re a solo practitioner drafting a motion for Client A in a commercial dispute. You switch tabs in the same ChatGPT session, memory enabled, and start outlining discovery strategy for Client B, who happens to be adverse to Client A in a related transaction. Your law firm’s ethical walls don’t cover AI tools. The model has ingested both sides. It’s connecting dots you never asked it to connect.
In United States v. Heppner (S.D.N.Y. 2026), Judge Jed S. Rakoff ruled that communications with a public AI tool don’t qualify for attorney-client privilege because the platform’s privacy policy permits data collection, eliminating any reasonable expectation of confidentiality. But the cross-matter contamination problem is harder to catch because it happens inside the model’s context window, invisible to everyone.
Gabe Pereyra, President and Co-Founder of Harvey AI, published a framework in March 2026 that breaks down why AI agents pose a fundamentally different screening challenge than human lawyers do. AI agents access underlying data directly rather than receiving filtered summaries from a supervising attorney. They maintain context across sessions and time in ways humans simply don’t. And they operate at a scale that makes manual monitoring impractical.
The retrieval-augmented generation problem makes things worse. When firms feed documents into a RAG pipeline, vector embeddings strip away the permission metadata that traditional document management systems rely on. CrowdStrike’s research on RAG security has documented how a query about Matter A can pull semantically similar chunks from Matter B’s documents because the embedding space doesn’t respect access controls. The search is based on meaning, not permissions.
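To make that failure mode concrete, here is a minimal Python sketch of a toy retrieval step. The embeddings, matter IDs, and the `matter_id` filter are all hypothetical, and this is not any vendor's actual pipeline; the point is only that nearest-neighbor search ranks by meaning, so without an explicit metadata filter the top results can come from any matter.

```python
import numpy as np

# Toy "index": each chunk carries its text, a made-up embedding, and the matter it belongs to.
# Real embeddings would come from a model; these are hand-picked so the example is deterministic.
chunks = [
    {"text": "Client A term sheet: earn-out capped at $2M", "matter_id": "MATTER-A", "vec": np.array([0.90, 0.10, 0.00])},
    {"text": "Client B memo: push the earn-out above $2M", "matter_id": "MATTER-B", "vec": np.array([0.88, 0.12, 0.00])},
    {"text": "Client A deposition schedule",                "matter_id": "MATTER-A", "vec": np.array([0.10, 0.90, 0.20])},
]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, current_matter=None, k=2):
    # Without a matter filter, the nearest chunks win regardless of which client they belong to.
    pool = chunks if current_matter is None else [c for c in chunks if c["matter_id"] == current_matter]
    return sorted(pool, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)[:k]

# "What is the earn-out position?" asked while working on Matter A.
query = np.array([0.90, 0.10, 0.05])

print([c["matter_id"] for c in retrieve(query)])                              # ['MATTER-A', 'MATTER-B']  <- contamination
print([c["matter_id"] for c in retrieve(query, current_matter="MATTER-A")])  # ['MATTER-A', 'MATTER-A']
```

The fix is not a smarter embedding model. It is the one-line metadata filter, applied before the similarity search ever runs.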
And the enterprise tools aren’t immune either. In January 2026, Microsoft disclosed a bug (case CW1226324) where Copilot for Microsoft 365 bypassed sensitivity labels for nearly a month, pulling content from Sent Items and Drafts folders regardless of DLP policies. This was the second such incident in eight months.
But here’s what matters for honest risk assessment: no firm has been publicly sanctioned for AI cross-matter contamination, and no state bar has brought a disciplinary action specifically over an AI-facilitated conflict breach. Not yet. The risk is structural, not anecdotal, and the conditions for it are forming. The question is whether your current practices hold up as the technology shifts from chat assistants to autonomous agents that can read, draft, file, and communicate on your behalf across dozens of matters simultaneously.
Traditional ethical walls assumed human beings would be the ones accessing files. That assumption is breaking.
ABA Opinion 512 and the Ethics Rules Governing AI Access Controls
The Model Rules didn’t anticipate ChatGPT. But they’re surprisingly well-suited to the problem.
Start with Rule 1.1, Comment 8. It says competence includes understanding the “benefits and risks” of relevant technology. If you’re using AI tools on client matters and you don’t understand how those tools handle data across sessions, you’ve got a competence issue before you even reach the confidentiality rules.
Rule 1.6(c) requires “reasonable efforts to prevent the inadvertent or unauthorized disclosure” of client information. Comment 18 spells out the factors: sensitivity of the information, likelihood of disclosure if additional protections aren’t employed, the cost of those protections, and the difficulty of implementing them. Cost is explicitly part of the calculus. A solo practitioner using separate ChatGPT conversations and exercising basic discipline around what gets pasted into prompts may well satisfy the standard. It’s fact-specific.
Rules 1.7 and 1.9 address conflicts directly. An AI tool that retains context from a prior representation and applies it to a current adverse matter creates exactly the kind of conflict these rules target. The difference is that the violation can happen automatically, without anyone making a conscious decision to access a former client’s file.
Rule 1.10 and the screening provisions in Rule 1.0(k) were written for lateral hires carrying confidential knowledge from prior firms. Applying them to AI agents is conceptually straightforward but practically uncharted. State variations matter significantly here. Texas adopted lateral-hire screening provisions in October 2024 after years of prohibiting it, while California has permitted screening since the 2010 Kirk decision, with narrow conditions.
ABA Formal Opinion 512, issued in 2024, addresses AI and confidentiality directly. Read it carefully. It uses “may,” not “must.” It flags the risk of LLMs violating Rules 1.7 and 1.9. It doesn’t prescribe specific technical solutions. No state bar has issued guidance on exactly how to build AI information barriers. Only that you need them.
Over 300 judges across federal and state courts now require some form of AI disclosure in filings. The enforcement infrastructure is building. For a jurisdiction-by-jurisdiction breakdown, see our guide to AI disclosure rules.
Why AI Conflict-of-Interest Protections Cost Too Much for Small Firms
Enterprise-grade information barrier platforms exist. They’re excellent. And they’re priced for AmLaw 200 firms.
Intapp Walls, the market leader, runs $10,000 to over $200,000 in implementation costs depending on firm size and integration complexity. iManage Security Policy Manager and NetDocuments both sit in similar territory. One midsize firm reported NetDocuments pricing at roughly $30,000 per year for 23 users. None of these vendors publish pricing publicly, which tells you something about their target market.
Here’s the reality of American legal practice. Seventy-five percent of U.S. law firms have fewer than six attorneys. Only 41% of solo practitioners budget anything for technology, compared to 90% of firms with 100 or more lawyers. The ABA created an expectation that the majority of practicing attorneys can’t afford to fulfill with purpose-built tools.
Drew Simshaw of the William S. Boyd School of Law at UNLV has warned about a “two-tiered system” emerging in legal AI, where large firms buy enterprise screening tools and deploy AI confidently while small firms either avoid AI entirely, losing competitive ground, or adopt it without adequate protections. Neither outcome serves clients well.
But affordable partial solutions do exist. Smokeball’s Archie works within the Smokeball practice management ecosystem with matter-specific AI queries. Clio Manage includes AI features with matter-level organization built in. Spellbook offers contract drafting AI with zero data retention. These aren’t full information barrier platforms. They’re starting points.
ChatGPT Enterprise and Team plans deserve mention too. Workspace isolation keeps one organization’s data separate from another’s, and the Projects feature lets you create functional matter-level separation within a workspace. It’s procedural rather than architectural. It depends on user discipline. But it’s available at a fraction of Intapp’s cost.
The uncomfortable truth is that the “reasonable efforts” standard in Rule 1.6 accounts for cost. Small firms aren’t held to the same infrastructure expectations as Kirkland & Ellis. But “we couldn’t afford it” has limits as a defense, especially when affordable options, however imperfect, are available. For more on how AI tools interact with privilege protections, see our analysis of AI and attorney-client privilege risk.
Building Effective Information Barriers for AI in Legal Technology
Knowing the rules is one thing. Here’s what to actually do about them.
1. Audit your current AI usage. Start by counting. Every AI tool in use across your firm, including the ones nobody approved. The 2025 Clio Legal Trends Report found that 53% of firms using AI lack formal AI governance policies. Tomasz Zalewski, writing on AI governance in legal practice, put it bluntly: “If they do not get secure tools, they will have ‘shadow AI.’” Your associates are using Claude, Gemini, ChatGPT, and half a dozen browser extensions you’ve never heard of. Find out what’s actually happening before you write a policy about what should happen.
2. Enforce matter-level isolation. Every AI interaction should be scoped to a single client matter. Not as a folder label. As a hard security boundary. Pereyra calls the client matter the “atomic unit” of access control, and he’s right. If your AI tool can pull context from Matter A while you’re working on Matter B, your ethical wall has a hole in it.
Practical options at different price points: ChatGPT Projects can create functional matter separation if you’re disciplined about never mixing matters within a project. Microsoft 365 sensitivity labels add another layer when properly configured, though keep that January 2026 Copilot bug in mind. For firms evaluating purpose-built solutions, Hintyr treats the client matter as exactly this kind of atomic unit. Every document, every AI interaction, every piece of work product is scoped to a specific case with hard security boundaries. That’s different from relying on metadata tags or user discipline. It’s a structural constraint. And it’s priced for the small firms described in the previous section, not just the AmLaw 200.
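What “hard boundary” means, as opposed to a folder label, can be shown in a few lines. This is a minimal sketch with hypothetical names (`MatterScopedSession`, `matter_id`), not any product’s actual implementation: mismatched material is rejected at the door instead of quietly joining the AI’s context.

```python
from dataclasses import dataclass, field

class MatterBoundaryError(Exception):
    """Raised when material from another matter tries to enter this session."""

@dataclass
class MatterScopedSession:
    # One session per client matter; the matter ID is fixed at creation,
    # and every document must match it before it can enter the AI context.
    matter_id: str
    context: list = field(default_factory=list)

    def add_document(self, doc: dict) -> None:
        if doc.get("matter_id") != self.matter_id:
            # Structural constraint, not user discipline: the mismatch fails loudly.
            raise MatterBoundaryError(
                f"Document tagged {doc.get('matter_id')!r} cannot enter session for {self.matter_id!r}"
            )
        self.context.append(doc)

# Usage: mixing matters raises an error instead of silently blending context.
session = MatterScopedSession(matter_id="MATTER-A")
session.add_document({"matter_id": "MATTER-A", "text": "Motion draft for Client A"})
try:
    session.add_document({"matter_id": "MATTER-B", "text": "Discovery outline for Client B"})
except MatterBoundaryError as err:
    print(err)
```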
3. Create a written AI use policy. ABA Opinion 512 makes clear that boilerplate engagement letter language won’t cut it for informed consent on AI use. Your clients need to understand what AI tools touch their data and how. Several organizations offer free starting templates. Darrow.ai, Clio, and LeanLaw all publish downloadable AI policy frameworks you can adapt. Don’t start from scratch.
4. Build audit trails. Log who used what AI tool, on which matter, with what input. This isn’t optional paranoia. It’s the screening procedure documentation that Rule 1.0(k) contemplates. Kirk v. First American Title Insurance Co. (Cal. Ct. App. 2010) required “procedures preventing access to relevant files on the computer network.” If you can’t show what your AI tools accessed and when, you can’t demonstrate effective screening.
Document everything. The query, the matter ID, the timestamp, which model processed it. When a disqualification motion lands, you want to produce a clean log, not a shrug. For more on how AI-related malpractice claims are developing, see our analysis of AI malpractice risk for lawyers.
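A minimal sketch of that kind of record, assuming a simple append-only JSONL file and hypothetical field names; a real deployment would route the same entry into your document management system or logging platform. Hashing the prompt is one design choice among several: it proves what was sent without copying client confidences into yet another store.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_audit_log.jsonl")  # hypothetical location for illustration only

def log_ai_interaction(user: str, matter_id: str, model: str, prompt: str) -> dict:
    """Append one record per AI call: who, which matter, which model, when, and what went in."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "matter_id": matter_id,
        "model": model,
        # Store a hash of the prompt; keep full text only if your policy allows it.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Usage: one line per interaction, ready to produce when a disqualification motion lands.
log_ai_interaction("jdoe", "MATTER-A", "example-model", "Summarize the deposition transcript for Client A")
```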
A Practical Checklist for Law Firm Ethical Walls in the Age of AI
The regulatory timeline is accelerating. Colorado’s AI Act takes effect June 30, 2026. The EU AI Act’s high-risk obligations kick in August 2, 2026, classifying legal AI tools under Annex III. Over 300 judges already require AI disclosure in filings. While neither law mandates specific ethical wall architectures, they signal that AI governance in legal practice is becoming a regulatory expectation, not an optional best practice.
And the direction of travel is clear. Andrew Perlman of Suffolk University Law School has argued that attorney competence under Rule 1.1 may eventually require lawyers to use AI, the same way competence today requires knowing how to use email and online research tools. If that view prevails, AI deployment becomes infrastructure rather than optional tooling. You’ll need ethical walls for AI the same way you need conflicts checks for lateral hires.
Pereyra’s “fail closed, not open” principle should guide your design decisions. When an AI system encounters ambiguity about access permissions, it should deny access by default rather than grant it. Most consumer AI tools do the opposite. They’re built to be helpful, which means they default to sharing context, connecting information, and drawing on everything they’ve seen. Great for productivity. Terrible for screening.
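In code, fail-closed access control reduces to a short decision function, sketched here with hypothetical metadata fields: anything short of an explicit, matching matter ID is denied, including the ambiguous case where the tag is simply missing.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"

def check_access(chunk_metadata: dict, requesting_matter: str) -> Decision:
    """Fail closed: only an explicit, matching matter ID grants access."""
    tagged_matter = chunk_metadata.get("matter_id")
    if tagged_matter is None:
        return Decision.DENY   # missing metadata -> ambiguous -> deny
    if tagged_matter != requesting_matter:
        return Decision.DENY   # wrong matter -> deny
    return Decision.ALLOW

# A helpful-by-default consumer tool effectively inverts this: unknown or untagged content gets shared.
print(check_access({}, "MATTER-A"))                         # Decision.DENY
print(check_access({"matter_id": "MATTER-B"}, "MATTER-A"))  # Decision.DENY
print(check_access({"matter_id": "MATTER-A"}, "MATTER-A"))  # Decision.ALLOW
```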
Your framework should include these commitments:
- Separate AI sessions or workspaces per client matter
- A written policy that staff have actually read
- Audit logging for every AI interaction with client data
- Informed client consent that specifically addresses AI tool usage
- A plan for reviewing and updating these measures as both the technology and the rules change
The ABA created a duty without providing most practitioners a realistic path to fulfill it. That’s the core tension in AI ethics for lawyers right now. The rules are clear; affordable tools exist, even if they’re imperfect. The clock on enforcement, both regulatory and through malpractice litigation, is running.
Don’t wait for the first major disciplinary action to become your motivation.
Disclaimer: This blog post is published by Hintyr for informational purposes only and does not constitute legal advice. The discussion of ethics rules, case law, and screening procedures is general in nature and may not reflect the rules applicable in your jurisdiction. Attorneys should consult their state bar’s ethics opinions and qualified legal counsel before making decisions about AI adoption, ethical walls, or compliance. Hintyr is not a law firm and does not provide legal services. No attorney-client relationship is created by reading this post.