E-Discovery

Proportionality Arguments with AI:
A FRCP Rule 26 Playbook

AI has cut e-discovery document review costs by 20 to 70 percent, but most litigators still argue proportionality under Rule 26 using manual review prices. Here are the talking points and cost exhibits that fix that.

Alexander Cohan, Ph.D.

Computational scientist with a Ph.D. from UC Irvine and peer-reviewed research in NLP, deep learning, and large-scale data modeling. Over a decade of experience building systems that process complex document sets at scale. Founded Hintyr to bring defensible AI workflows to litigation teams navigating document review, redaction, and production.

AI cost data is rewriting the proportionality calculus in federal discovery disputes.

The AI Proportionality Argument You’re Not Making

The Cost Problem

Every litigator knows the proportionality fight. You’ve had it.

But here’s the problem with the numbers most lawyers use: they’re based on manual review.

The RAND Corporation’s landmark 2012 study found that document review alone consumed 73% of all e-discovery production costs across 57 large-volume cases at Fortune 200 companies. Processing took 19%. Collection took 8%. The review line item dwarfed everything else. And the per-document cost for that review? Published estimates still land between $0.50 and $1.50 per document, with the ComplexDiscovery Winter 2026 eDiscovery Pricing Survey confirming the range: 30% of respondents report onsite managed review costs of $0.50 to $1.00 per document, and another 22% report costs above $1.00.

Those numbers are real. They’re also, for an increasing number of cases, optional.
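To see what that per-document range means at scale, here is a minimal back-of-the-envelope sketch. The 100,000-document population is a hypothetical; the rates are the published $0.50 to $1.50 range cited above.

```python
# Back-of-the-envelope manual review baseline. The document count is
# a hypothetical illustration; the per-document rates are the
# published $0.50-$1.50 range cited above.
DOCS = 100_000                    # hypothetical document population
RATE_LOW, RATE_HIGH = 0.50, 1.50  # $/document, manual review

print(f"Manual review of {DOCS:,} documents:")
print(f"  low estimate:  ${DOCS * RATE_LOW:,.0f}")   # $50,000
print(f"  high estimate: ${DOCS * RATE_HIGH:,.0f}")  # $150,000
```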

FRCP Rule 26(b)(1) requires discovery to be “proportional to the needs of the case,” weighed against six factors including whether the burden or expense outweighs the likely benefit.

The Advisory Committee Note to the 2015 amendments instructs courts to consider technology that reduces discovery costs, a direct textual hook for AI-assisted review.

That pairing creates a pressure point that reshapes proportionality arguments from both directions.

If you claim excessive burden, your opponent can point to AI tools and ask why you are quoting manual review costs. Judge Peck said in Hyles v. New York City (2016) that “for most cases today, TAR is the best and most efficient search tool.” In In re Mercedes-Benz Emissions Litigation (D.N.J. 2020), Special Master Cavanaugh warned he would “not look favorably on any future arguments related to burden of discovery requests, specifically cost and proportionality” from a party that refused TAR.

Four E-Discovery Rulings That Changed the Cost Calculation

The Case Law

Between 2012 and 2020, four opinions built the foundation for AI-proportionality arguments. Together, they give you a template for turning cost data into a proportionality exhibit.

Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012). This is the starting line. Magistrate Judge Andrew Peck issued the first judicial opinion approving technology-assisted review, telling the bar that counsel “no longer have to worry about being the ‘first’ or ‘guinea pig’ for judicial acceptance of computer-assisted review.”

But the opinion did more than bless a technology. Judge Peck tied TAR directly to proportionality, writing that the goal is for the review method to produce “higher recall and higher precision than another review method, at a cost proportionate to the ‘value’ of the case.” That sentence connected AI review tools to Rule 26(b)(1)’s sixth factor before the 2015 amendments even elevated proportionality to the scope definition.

Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125 (S.D.N.Y. 2015). Three years later, Judge Peck moved TAR from novelty to orthodoxy: “it is now black letter law that where the producing party wants to utilize TAR for document review, courts will permit it.” For proportionality practice, the most useful part is the methodology holding. The producing party selects predictive coding unilaterally; the requesting party can’t dictate the search method.

Judge Peck also warned against holding TAR to a higher standard than keyword or manual review: “Doing so discourages parties from using TAR for fear of spending more in motion practice than the savings from using TAR for review.” When opposing counsel challenges your AI methodology, Rio Tinto is the authority that places the burden where it belongs.

Dynamo Holdings Ltd. P’ship v. Commissioner, 143 T.C. No. 9 (2014); No. 2685-11 (T.C. July 13, 2016) (order). If you want a single case to anchor a cost exhibit, this is the one. Expert testimony showed that TAR would cost $80,000 to $85,000, while the IRS’s preferred manual review would run $500,000 to $550,000.

That ratio gave the court a concrete basis for approving predictive coding. In the 2016 follow-up, the court went further, rejecting two “myths”: that manual review is the “gold standard” for accuracy, and that any discovery response can or should be perfect. When you file your next proportionality brief, Dynamo Holdings shows exactly what a cost comparison exhibit should look like: two columns, two dollar figures, one obvious conclusion.

Lawson v. Spirit AeroSystems, Inc., No. 18-1100-EFM-ADM, 2020 WL 3288058 (D. Kan. 2020). Lawson is the cautionary counterpart. The plaintiff insisted on TAR after traditional keyword searches had already been completed. The court allowed TAR to proceed on roughly 322,000 documents, but when results proved marginal (only 3.3% responsive), Magistrate Judge Mitchell found that “the ESI/TAR process became disproportionate to the needs of the case” and shifted $754,029.46 in expenses to the plaintiff.

The lesson cuts both ways. AI-driven review is a proportionality tool, not a proportionality shield. If you push for additional AI review that yields little beyond what existing methods produced, you own the bill. Your cost exhibit must show that AI review is cheaper and that the discovery it produces is worth the expense.

Talking Points for Your Rule 26 Proportionality Motion

The Template

You don’t need to draft from scratch. The five arguments below give you a working outline for any motion where AI review changes the proportionality math under Rule 26(b)(1). Adapt the phrasing to your facts, plug in your numbers, and file.

1. The cost baseline is wrong. Opposing counsel’s burden estimate assumes every document will be read by a human reviewer at $0.50 to $1.50 per document. That assumption ignores the current state of document review technology. Technology-assisted review, now “black letter law” for over a decade, reduces per-document review costs by 20 to 70 percent. The court should evaluate burden against the actual cost of review using available technology, not against a baseline inflated by the assumption of all-manual review.

2. Revised costs make this discovery proportional. When the actual cost of review is calculated using technology-assisted methods, the burden drops from the manual estimate to a fraction of that figure (see the sketch following this list). Under this corrected number, the discovery satisfies every factor in Rule 26(b)(1). The amount in controversy exceeds the review expense. The information is uniquely within the producing party’s control, satisfying the “relative access” factor. And the reduced burden no longer “outweighs the likely benefit” of the discovery.

In Oxbow Carbon & Minerals LLC v. Union Pacific Railroad Co., 322 F.R.D. 1 (D.D.C. 2017), the court compelled production costing $85,000 to $142,000 in a $50 million antitrust dispute, finding those costs proportional to the stakes.

3. AI recall matches or exceeds manual review. Any suggestion that AI-assisted review sacrifices quality for speed is contradicted by the evidence. Continuous active learning (CAL), the current generation of technology-assisted review, achieves recall rates of 90 to 96%, compared with manual review’s documented recall of just 50 to 70% (Grossman & Cormack, 2011 and 2014).

Manual reviewers aren’t just slower; they’re less accurate. The Dynamo Holdings court debunked the “myth” that manual review is the “gold standard by which all searches should be measured.”

4. Cost-shifting should reflect AI-available costs. If the court considers cost-shifting, the relevant baseline is the cost of review using reasonably available technology, not the inflated cost of a deliberately inefficient method. In Lawson v. Spirit AeroSystems, the court shifted $754,029 in TAR costs to the requesting party after the review yielded only 3.3% responsive documents of marginal relevance.

The case cuts both ways: it shows courts will hold requesting parties accountable for disproportionate demands, but it also establishes that AI review costs (not manual costs) set the baseline in cost-shifting math.

5. The Advisory Committee told courts to consider exactly this. The 2015 Note to Rule 26(b)(1) instructs courts and parties to “consider the opportunities for reducing the burden or expense of discovery as reliable means of searching electronically stored information become available.” AI-assisted review is the “reliable means” the Committee anticipated. This isn’t a creative argument; it’s a direct application of the Committee’s own instruction. As of 2026, 40 states, the District of Columbia, and Puerto Rico have adopted the ABA’s duty of technology competence under Model Rule 1.1, Comment 8.

These five arguments build on each other. Filed as a package, they give the court a clear analytical path from burden to reasonableness.
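For argument 2, the corrected-baseline math looks like this. A minimal sketch: the 250,000-document matter and the $50 million amount in controversy are hypotheticals, while the per-document rates and the 20 to 70 percent savings range are the published figures cited above.

```python
# Corrected-baseline math for a proportionality motion. Matter size
# and amount in controversy are hypothetical; the rate and savings
# ranges come from the sources cited in the text.
DOCS = 250_000
AMOUNT_IN_CONTROVERSY = 50_000_000        # hypothetical stakes
MANUAL_LOW, MANUAL_HIGH = 0.50, 1.50      # $/doc, manual review
SAVINGS_LOW, SAVINGS_HIGH = 0.20, 0.70    # published TAR savings range

manual_mid = DOCS * (MANUAL_LOW + MANUAL_HIGH) / 2   # midpoint baseline
tar_high = manual_mid * (1 - SAVINGS_LOW)            # conservative savings
tar_low = manual_mid * (1 - SAVINGS_HIGH)            # aggressive savings

print(f"Manual baseline (midpoint):  ${manual_mid:,.0f}")
print(f"TAR-assisted estimate:       ${tar_low:,.0f} to ${tar_high:,.0f}")
print(f"Burden as share of stakes:   "
      f"{tar_low / AMOUNT_IN_CONTROVERSY:.2%} to "
      f"{tar_high / AMOUNT_IN_CONTROVERSY:.2%}")
```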

Two caveats. First, the case law above covers TAR (predictive coding based on supervised machine learning). No court has squarely approved generative AI (large language models) for responsiveness determinations. GenAI error profiles differ from TAR’s, and standard QC sampling protocols may need adaptation. If your tool is LLM-based, the proportionality principle still applies, but you should be prepared to explain your methodology, run your own validation sampling, and address any AI disclosure obligations. ABA Formal Opinion 512 (July 2024) now requires competence with GenAI tools specifically, and validation sampling protocols like Broiler Chicken apply regardless of the underlying technology.

Second, be aware of the boomerang effect. If AI makes review cheap, opposing counsel can argue that the court should order more discovery, not less. Proportionality is a multi-factor test. Even cheap discovery can be disproportionate if the information sought is of marginal relevance or the issues don’t warrant it. Lawson’s $754,029 cost-shift is the proof: AI review was affordable, but the results weren’t worth the expense.

Building an AI Cost Exhibit the Court Will Credit

The Numbers

Judges rule on evidence, not assertions. If you want a court to credit your proportionality argument, you need a cost exhibit that puts manual and AI review side by side, with sourced figures and a clear methodology description.

Start with manual review cost estimates and build a comparison column.

TAR cost reductions range from 20% to 70% depending on case characteristics. Conservative practitioner estimates center on 30 to 35% savings. Either end of that range changes the proportionality math. A Lighthouse Global case study documented $6.2 million in savings on a single matter using TAR workflows, with the review delivered 50% under its original budget.
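A minimal sketch of what that side-by-side exhibit might look like as a script. Every line item and dollar figure below is a hypothetical placeholder; substitute your matter’s actual sourced costs.

```python
# Two-column cost exhibit builder. All line items and dollar figures
# are hypothetical placeholders for a matter's sourced costs.
rows = [
    ("First-pass review", 300_000,  90_000),
    ("QC / second-pass",   60_000,  30_000),
    ("Privilege review",   45_000,  25_000),
]

print(f"{'Line item':<20}{'Manual':>12}{'TAR':>12}{'Savings':>10}")
for item, manual, tar in rows:
    print(f"{item:<20}{manual:>12,}{tar:>12,}{(manual - tar) / manual:>10.0%}")

m, t = sum(r[1] for r in rows), sum(r[2] for r in rows)
print(f"{'Total':<20}{m:>12,}{t:>12,}{(m - t) / m:>10.0%}")
```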

Add the accuracy line. This is where the exhibit gains credibility. Manual review achieves 50 to 70% recall, per TREC results and multiple academic studies. Continuous active learning achieves 90 to 96% recall with 80 to 96% precision (Grossman & Cormack, 2011 and 2014). Your cost exhibit should make this comparison explicit: the cheaper method is also the more accurate one. Courts have noticed. In Dynamo Holdings, the Tax Court credited TAR at $80,000 to $85,000 over manual review at $500,000 to $550,000, finding the TAR approach constituted a “reasonable inquiry.”
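For the declaration accompanying the exhibit, it helps to show how those two metrics are computed. A quick sketch, with hypothetical confusion counts chosen to fall inside the CAL ranges above:

```python
# Recall and precision from review counts. The counts are hypothetical,
# chosen to land inside the CAL ranges cited above.
tp = 9_200   # responsive docs the review correctly flagged
fn = 800     # responsive docs the review missed
fp = 1_150   # non-responsive docs incorrectly flagged

recall = tp / (tp + fn)       # share of all responsive docs found
precision = tp / (tp + fp)    # share of flagged docs actually responsive

print(f"recall:    {recall:.1%}")     # 92.0%
print(f"precision: {precision:.1%}")  # 88.9%
```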

Include a time comparison. Take a 100,000-document review. At a standard rate of 50 to 100 documents per hour per reviewer, manual review requires weeks of work. An AI-assisted initial pass can process the same volume in hours, with human review then focused on the smaller set flagged as potentially responsive. EDRM estimates suggest reviewing 130,000 documents might take 27 days manually versus roughly 5 days with generative AI tools.
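The arithmetic behind the “weeks of work” claim, as a sketch. The ten-reviewer team and eight-hour day are hypothetical staffing assumptions; the per-reviewer rates are the standard range above.

```python
# Manual review timeline. Team size and hours per day are hypothetical
# staffing assumptions; per-reviewer rates are the standard range.
DOCS = 100_000
RATE_LOW, RATE_HIGH = 50, 100   # documents per reviewer-hour
TEAM, HOURS = 10, 8             # reviewers, hours per working day

slow = DOCS / (RATE_LOW * TEAM * HOURS)    # 25 working days
fast = DOCS / (RATE_HIGH * TEAM * HOURS)   # 12.5 working days
print(f"Manual review, {TEAM} reviewers: {fast:.1f} to {slow:.0f} working days")
```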

Your exhibit should include validation sampling data. The In re Broiler Chicken protocol remains the benchmark: 500 responsive documents, 500 non-responsive documents, and 2,000 unreviewed documents combined into a blind validation set. Citing this protocol in your exhibit tells the court you’re following an accepted methodology, not asking it to trust a black box. The 2025 In re Insulin Pricing ruling (D.N.J.) accepted a party’s targeted recall rate of 70% or higher as “reasonable and proportional,” while noting that even lower recall doesn’t necessarily indicate an inadequate review.
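Here is a sketch of how a Broiler Chicken-style blind sample can feed a recall estimate. The 500/500/2,000 sample structure is from the protocol; the pool sizes and blind-review hit counts are hypothetical, and a real validation would also report confidence intervals around the point estimate.

```python
# Recall point estimate from a blind validation sample, following the
# 500/500/2,000 structure of the Broiler Chicken protocol. Pool sizes
# and reviewer hit counts below are hypothetical.
produced_pool = 180_000   # docs the model flagged responsive
discard_pool = 620_000    # unreviewed / model-non-responsive docs

# Blind human review of the samples (hypothetical outcomes):
hits_in_produced_sample = 470   # of 500 drawn from the produced pool
hits_in_discard_sample = 40     # of 2,000 drawn from the discard pool

# Scale sample rates up to pool sizes to estimate responsive docs.
est_found = produced_pool * hits_in_produced_sample / 500
est_missed = discard_pool * hits_in_discard_sample / 2_000

recall = est_found / (est_found + est_missed)
print(f"estimated recall: {recall:.1%}")   # ~93.2%
```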

Present it properly. Attach the cost exhibit with a declaration describing your AI review methodology, validation results, and quality controls. If your production workflow generated audit logs or recall estimates, attach those too.

The strongest proportionality arguments in 2026 aren’t rhetorical. They’re mathematical. A well-sourced cost exhibit gives the court a concrete basis for ruling, rather than competing assertions about burden. And when the numbers show that AI review costs less and finds more, the proportionality math supports your position.

Disclaimer: This blog post is published by Hintyr for informational purposes only and does not constitute legal advice. The discussion of FRCP Rule 26, case law, and proportionality analysis is general in nature and may not reflect the rules applicable in your jurisdiction. Attorneys should consult qualified legal counsel before making discovery strategy decisions. No attorney-client relationship is created by reading this post.

Make the proportionality argument. Back it with data.

Hintyr tracks per-document review cost, time-to-completion, and recall metrics automatically. When you need to argue that AI-assisted review is proportional (or that manual review is not), the numbers are captured as you work.