AI Ethics

AI and Attorney-Client Privilege: What Every Lawyer Must Know

On February 10, 2026, a federal judge ruled that 31 documents a criminal defendant had created with a free AI chatbot were never protected by attorney-client privilege. The same day, a different court reached the opposite conclusion.

Alexander Cohan, Ph.D.

Computational scientist with a Ph.D. from UC Irvine and peer-reviewed research in NLP, deep learning, and large-scale data modeling. Over a decade of experience building systems that process complex document sets at scale. Founded Hintyr to bring defensible AI workflows to litigation teams navigating document review, redaction, and production.

Key Takeaways

  • These are trial-level decisions with no binding precedential value outside their districts. No appellate court has ruled on AI and privilege.
  • A federal court ruled that documents created with consumer AI tools are not protected by attorney-client privilege or work product doctrine. But the ruling is narrower than the headlines suggest, and another court went the other way the same day.
  • Attorney direction is the single most controllable variable. Judge Rakoff’s dicta in Heppner strongly suggest that counsel-directed AI use could receive privilege protection that independent use will not.
  • Paying for a subscription is not enough. Anthropic’s Claude Pro and Google’s Gemini Advanced are classified as consumer products, not enterprise. Their data policies don’t provide the confidentiality protections Heppner indicates courts will look for.
  • Your clients are already using AI without telling you. If they input privileged communications into consumer AI tools on their own, the resulting records may be discoverable.
  • The standard is forming right now. Firms that build AI governance protocols today will defend their work product more effectively than those waiting for appellate guidance.
Two federal courts reached opposite conclusions on AI and privilege on the same day in February 2026.

What Happened in Heppner

I. The Ruling

Bradley Heppner was in serious trouble. The Dallas financial services executive and former CEO and board chairman of GWG Holdings, Inc., had been arrested in November 2025 on charges of securities fraud, wire fraud, conspiracy, and related offenses involving over $150 million in alleged investor losses. He’d retained Quinn Emanuel. But apparently that wasn’t enough.

Before his November 2025 arrest, Heppner opened Anthropic’s free Claude chatbot and started researching his own criminal exposure. He typed in information he’d learned from his attorneys. He generated roughly 31 documents analyzing defense strategies and then shared the AI outputs with his legal team. When the FBI executed a search warrant at his residence, they seized those documents. The government moved to compel, arguing the documents were never privileged.

Judge Jed S. Rakoff, one of the most influential trial judges in the federal system, agreed. Rakoff ruled from the bench on February 10 and issued his 12-page written memorandum on February 17. The opinion applied the three-part privilege test from United States v. Mejia (2d Cir. 2011), requiring (1) a communication between client and attorney, (2) actual confidentiality, and (3) the purpose of obtaining legal advice. Heppner failed on all three.

On the first element, the court was blunt: “Because Claude is not an attorney, that alone disposes of Heppner’s claim of privilege.” Rakoff wrote that all recognized privileges require a “trusting human relationship” with “a licensed professional who owes fiduciary duties and is subject to discipline.” No such relationship exists, or could exist, between a user and an AI platform. That finding alone was enough to defeat the claim. Everything that followed was reinforcing analysis.

On confidentiality, the court pointed to Anthropic’s privacy policy, which “provides that [Anthropic] collects data on both users’ ‘inputs’ and the [tool’s] ‘outputs’” and “reserves the right to disclose such data to a host of ‘third parties,’ including ‘government regulatory authorities.’” Heppner could have had “no reasonable expectation of confidentiality.”

On purpose, which Rakoff called “a closer call,” the question was not whether Heppner planned to share the outputs with his lawyers. He did. The question was whether he was seeking legal advice from Claude at the moment he typed his prompts. Claude’s own disclaimer cut against him: “I’m not a lawyer and can’t provide formal legal advice or recommendations.”

The court also rejected work product protection. Defense counsel conceded that Heppner created the documents “on his own volition” and that counsel “did not direct [Heppner] to run Claude searches.” No attorney direction, no work product.

But Rakoff didn’t close the door. He wrote: “[H]ad counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.” That’s a direct reference to the Kovel doctrine (United States v. Kovel, 2d Cir. 1961), which extends privilege to non-lawyer agents acting at counsel’s direction. It’s dicta, not a holding. But it’s a path forward.

Before the panic spreads: this ruling is narrower than the initial coverage suggested. Gibson Dunn stated: “The central takeaway from Judge Rakoff’s ruling is not that AI adoption is incompatible with privilege and work product protections, but that unexamined use of AI tools can create avoidable legal risk.” DLA Piper called it “a more pedestrian proposition: that documents prepared by a non-lawyer, using a public tool that disclaimed an expectation of privacy, were not privileged.”

Heppner doesn’t say AI kills privilege. It says careless, unsupervised, consumer-tier AI use can.

Same Day, Opposite Result

II. Heppner vs. Warner

Heppner was not the only AI privilege ruling that day. In Michigan, Magistrate Judge Anthony P. Patti reached the opposite conclusion in Warner v. Gilbarco, Inc. (E.D. Mich. 2026).

Sohyon Warner, a pro se plaintiff in an employment discrimination case, had used a paid ChatGPT account to answer legal questions and draft filings. Defendants moved to compel production of all AI-related documents, including prompts, outputs, and activity logs.

Judge Patti denied the motion. The court found that defendants’ request “asks the Court to compel Plaintiff’s internal analysis and mental impressions, i.e., her thought process.” He dismissed the motion as a “fishing expedition” and told defendants their “preoccupation with Plaintiff’s use of AI needs to abate.”

Under Sixth Circuit law, work product protection is waived only by disclosure to an adversary or in a manner likely to reach an adversary’s hands. Using an AI platform didn’t meet that test. The court found defendants’ waiver theory was “supported by no case law but only a Law360 article posing rhetorical questions.”

Do these cases actually contradict each other? They addressed different legal questions in different circuits. Heppner involved a represented criminal defendant acting without counsel’s direction, raising a privilege claim in the Second Circuit. Warner involved a pro se litigant, effectively her own counsel, raising a work product claim in the Sixth Circuit. Privilege and work product are separate doctrines with separate waiver standards.

But together, as Morrison & Foerster framed it in their February 2026 alert, the rulings emphasize that “direction by counsel is critical” and that privilege claims “are harder to sustain where AI use is not directed by counsel” – a principle that amounts to a lawyer-in-the-loop standard.

Here’s where the case law actually stands as of early 2026: Heppner denied privilege for unsupervised consumer AI use. Warner, Tremblay v. OpenAI (N.D. Cal. 2024), and the Concord Music Group v. Anthropic orders all recognized privilege or work product protection for attorney-directed AI use, though the court in Concord also found partial waiver for prompts relied upon in litigation filings. The law is more protective than the headlines suggest. But the risk is asymmetric: get it right and you keep what you already had; get it wrong and you may lose far more than the AI outputs.

This asymmetry deserves emphasis. Under Federal Rule of Evidence 502(a), disclosure of privileged material can trigger subject matter waiver – meaning not just the AI outputs, but the underlying attorney-client communications that were fed into the tool, could become discoverable. The AI outputs are the surface risk. The underlying communications are the existential one.

The AI Attorney-Client Privilege Gap

III. Consumer vs. Enterprise

If Heppner teaches one practical lesson, it’s this: the AI platform you choose matters as much as how you use it. And right now, the distinction between consumer and enterprise AI is the most important technical factor in any privilege analysis.

The differences aren’t subtle. Consumer AI tiers, including free ChatGPT, Claude Free, Claude Pro, and Gemini, use your input data for model training by default. Anthropic’s August 2025 update to its consumer terms extended data retention to five years when training is enabled. Google’s Gemini Apps warns users: “Please don’t enter confidential information that you wouldn’t want a reviewer to see.” No consumer tier offers a Data Processing Agreement.

Enterprise tiers tell a different story. ChatGPT Enterprise, Claude for Work/API, Microsoft 365 Copilot, and Gemini for Workspace all exclude customer data from model training by default. They come with SOC 2 compliance, DPAs, custom data retention policies, and restricted employee access. Both Anthropic and OpenAI offer Zero Data Retention agreements for API customers.

Here’s the trap that catches small firms. Anthropic’s Claude Pro plan ($20/month) is classified as consumer, not enterprise, despite being a paid subscription. The same goes for Google Gemini Advanced. Paying for faster responses doesn’t get you a DPA, training exclusions, or contractual confidentiality commitments. Before your firm signs up for any AI tool, confirm the data handling protections in writing.

Anthropic does allow consumer users to opt out of model training, which reduces data retention to 30 days. But an opt-out toggle is not a data processing agreement, and no court has tested whether it changes the confidentiality analysis under Heppner.

As Sidley Austin framed it: “Communicating with an AI platform whose terms expressly permit use or disclosure of information arguably is functionally no different than speaking in the presence of a third party that has announced an intention to use what it hears.”

Debevoise & Plimpton stated their belief that Judge Rakoff “should view the use of an enterprise AI tool (which does not train on inputs and maintains confidentiality of inputs) differently.” That’s encouraging, but no court has tested it yet. And even with enterprise protections, the other elements of privilege must still be met: the communication must involve attorney direction, and the purpose must be legal advice. Enterprise AI addresses the confidentiality problem. It doesn’t solve the other two prongs by itself.

Professor Jonah Perlin of Georgetown Law, writing in Bloomberg Law in August 2025, reinforced the distinction: “simply using a third-party technological tool that gains access to confidential client information doesn’t categorically render the attorney-client privilege waived.” He noted that “courts and legislators have clarified in the context of email and cloud technology, the privilege is only waived if lawyers fail to take reasonable precautions to prevent disclosure.”

Read the terms of service. Know which tier you’re actually on. Document it.

What Your Firm Should Do Now

IV. A Six-Step Framework

A privilege waiver in active litigation can cost you the case. These six steps cost almost nothing by comparison, and they reflect the current consensus from case law, ABA Formal Opinion 512, and the wave of firm alerts that followed Heppner.

1. Establish attorney direction as the default

Rakoff’s Kovel dicta made this the most controllable variable: attorney-directed AI use may receive protection that independent client use won’t. If your client needs to use AI in connection with a legal matter, direct them to do so. Document that direction in writing, contemporaneously. As Debevoise advised, clients using AI tools “at the direction of counsel to assist with a legal case should make it clear in their prompts that they are acting at the direction of counsel.” No court has held this preserves privilege yet. But Rakoff strongly suggested it could.

2. Use enterprise platforms for any work involving client information

Under the reasoning of Heppner, consumer AI tools are unlikely to satisfy the confidentiality prong. Enterprise tiers with contractual no-training commitments, DPAs, SOC 2 certification, and data isolation are the minimum. When evaluating AI document review platforms, require explicit training restrictions, data retention terms, and encryption. An enterprise AI license costs less per month than a single associate’s billable hour. For firms where per-seat enterprise licensing is cost-prohibitive, API-based access to Claude or GPT models offers enterprise-grade data protections at lower per-unit cost.
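For readers who want to make the vendor screen concrete, the criteria above can be sketched as a simple checklist in code. This is an illustrative Python helper, not a legal test: the type, function, and field values are assumptions drawn from the tier descriptions in this article, and any real evaluation should rest on the vendor's actual contract terms.

```python
# Illustrative sketch only: encodes the enterprise-baseline checklist
# discussed in this section. Names and example values are assumptions,
# not a legal standard or an authoritative vendor assessment.
from dataclasses import dataclass


@dataclass
class AIVendorTier:
    name: str
    trains_on_inputs: bool   # does the tier train on customer data by default?
    has_dpa: bool            # is a Data Processing Agreement available?
    soc2_certified: bool     # SOC 2 compliance offered at this tier?
    custom_retention: bool   # configurable or zero data retention?


def meets_enterprise_baseline(tier: AIVendorTier) -> bool:
    """True only if the tier satisfies every confidentiality criterion
    named above: no training on inputs, DPA, SOC 2, retention control."""
    return (not tier.trains_on_inputs
            and tier.has_dpa
            and tier.soc2_certified
            and tier.custom_retention)


# A paid consumer plan still fails the screen: price tier != data protections.
# Field values reflect this article's characterization of each tier.
claude_pro = AIVendorTier("Claude Pro (consumer)", trains_on_inputs=True,
                          has_dpa=False, soc2_certified=False,
                          custom_retention=False)
enterprise = AIVendorTier("ChatGPT Enterprise", trains_on_inputs=False,
                          has_dpa=True, soc2_certified=True,
                          custom_retention=True)

print(meets_enterprise_baseline(claude_pro))   # False
print(meets_enterprise_baseline(enterprise))   # True
```

The design point is that the screen is conjunctive: a tier that fails any single criterion fails the baseline, which is exactly why a paid consumer subscription does not pass.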

3. Get informed client consent, and make it specific

ABA Formal Opinion 512 is clear: “a client’s informed consent is required before inputting their confidential information into a self-learning GAI tool” and “for the consent to be informed requires the lawyer’s explanation of the risk, not merely boilerplate provisions in an engagement letter.” WilmerHale’s March 2026 alert, written about UK legal professional privilege but offering guidance transferable to U.S. practice, recommends that engagement letters “address AI use by the firm, the client and third parties.” The letter is the starting point. The conversation with your client is what makes the consent informed.

4. Build documentation habits now

Privilege logs should identify AI tool usage, specify that it was at counsel’s direction, and note the expectation of confidentiality. Draft prompts to reflect attorney mental impressions, which is what qualifies them for opinion work product protection under Tremblay v. OpenAI and Concord Music Group v. Anthropic, where courts found that “queries crafted by counsel” constitute protected work product.

A practical tip: seek a Federal Rule of Evidence 502(d) protection order at the outset of any case involving AI-assisted review. A 502(d) order binds all federal and state proceedings, not just the parties. It provides that disclosure of privileged information doesn’t constitute waiver. It costs nothing but a stipulation.

5. Educate your clients. Today.

How many of your clients have already typed case details into ChatGPT without mentioning it to you? Some are doing exactly what Bradley Heppner did: inputting privileged communications into free chatbots and creating records that could become discoverable. Tell your clients. Put it in your engagement letters. Raise it at the first meeting. “Don’t input any information about your legal matter into consumer AI tools without talking to us first.” That sentence, delivered early and in writing, could save a case.

6. Adopt a formal AI use policy

Publish an approved tool list, require training for all attorneys and staff who use AI in connection with client matters, and audit compliance periodically. Gibson Dunn and other leading firms recommend formal governance protocols as a baseline, and having one in place strengthens your position if privilege is ever challenged. Several state bars have also issued AI guidance. The California State Bar and NYC Bar Formal Opinion 2024-5 both favor guardrails over categorical restrictions, while Florida Bar Opinion 24-1 focuses on competence and confidentiality duties.

Full disclosure: we build AI document review tools at Hintyr, so we have a stake in this conversation. Our platform uses enterprise infrastructure with zero data retention and no model training on client inputs. We’re not the only option that meets this standard. But whatever you choose, verify the protections are in the contract. You can compare AI document review platforms to evaluate training restrictions, data retention terms, and encryption side by side.

This framework reflects the strongest consensus from current case law and ethics guidance. It’s not a guarantee. The law varies by jurisdiction, no appellate court has weighed in, and the “tool or third party?” divide between Heppner and Warner remains unresolved. These steps reduce risk. They don’t eliminate it.

That unresolved question likely turns on whether AI is treated as a tool (like Google Docs) or a third party (like a consultant). Heppner treated Claude as a third party because it can generate substantive analysis, not just store or transmit information. Warner treated ChatGPT as a tool. Until appellate courts resolve this split, firms should structure their AI use to satisfy either framework.

No appellate court has addressed AI and privilege. The law is still being written at the trial level, and Heppner and Warner show that reasonable jurists will disagree until Congress or the circuits step in. But the practical direction is clear. Courts will apply traditional privilege principles to AI, and the firms that build their practices around those principles today will be the ones whose work product survives challenge. The AI malpractice risks are already real. The privilege risks are catching up.

Frequently Asked Questions

Can using a free AI chatbot waive attorney-client privilege?

Yes, based on current case law. In United States v. Heppner (S.D.N.Y. 2026), a federal court ruled that 31 documents a client generated using a consumer AI chatbot were not protected by attorney-client privilege. The court found that the AI is not an attorney, and the platform’s privacy policy permitted disclosure to third parties including government authorities. This is a trial-level decision and has not been tested on appeal.

What is the difference between consumer and enterprise AI for privilege?

Consumer AI tiers use input data for model training by default and offer no data processing agreements. Enterprise tiers exclude customer data from training and provide contractual confidentiality protections. After Heppner, enterprise platforms with contractual confidentiality commitments appear better positioned to satisfy the privilege test’s confidentiality prong, though no court has directly ruled on this distinction.

Does paying for an AI subscription protect attorney-client privilege?

Not necessarily, based on the reasoning in Heppner. Paid consumer subscriptions like Anthropic’s Claude Pro and Google’s Gemini Advanced are still classified as consumer products, not enterprise. Their data policies do not provide the contractual confidentiality commitments that courts are likely to look for in a privilege analysis.

What does ABA Formal Opinion 512 say about using AI with client data?

ABA Formal Opinion 512 (July 2024) requires lawyers to obtain informed consent from clients before inputting confidential information into any AI tool that trains on user data. The consent must include specific explanation of AI-related risks, not merely boilerplate engagement letter language.

Disclaimer: This article discusses U.S. federal and state law. Attorney-client privilege and work product doctrine vary across jurisdictions. Readers outside the United States, or practitioners handling cross-border matters, should consult jurisdiction-specific guidance. This post is published by Hintyr for informational purposes only and does not constitute legal advice. Consult qualified counsel for guidance on your firm’s specific circumstances. No attorney-client relationship is created by reading this post.

Protect Privilege with AI You Can Verify.

Hintyr builds document review tools with enterprise data protections that support privilege claims, with every finding linked to its source document.