Evidence Rules

Rule 707 Explained: How AI Evidence Meets the Daubert Standard

A federal advisory committee voted 8-1 to create the first evidence rule for machine-generated proof. Here is what it means for your practice, your expert budget, and your trial strategy.

Alexander Cohan, Ph.D.

Computational scientist with a Ph.D. from UC Irvine and peer-reviewed research in NLP, deep learning, and large-scale data modeling. Over a decade of experience building systems that process complex document sets at scale. Founded Hintyr to bring defensible AI workflows to litigation teams navigating document review, redaction, and production.

What Proposed Federal Rule of Evidence 707 Actually Says

On May 2, 2025, the Advisory Committee on Evidence Rules voted 8-1 to propose something federal courts have never had: a rule written specifically for AI-generated evidence. Proposed Federal Rule of Evidence 707 would require machine-generated evidence offered without an expert witness to satisfy the same reliability standards that govern expert testimony under Rule 702. The sole dissenter was the Department of Justice.

The rule itself is two sentences:

“When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)–(d). This rule does not apply to the output of simple scientific instruments.”

Four requirements sit behind that cross-reference to Rule 702(a) through (d). The proponent must show the evidence helps the trier of fact, rests on sufficient facts or data, was produced by reliable principles and methods, and reflects a reliable application of those methods to the case at hand.

If those factors sound familiar, they should. They are the Daubert reliability framework that federal courts have applied to expert testimony since 1993. Rule 707 extends that framework to machine outputs offered without any expert on the stand.

One phrase deserves close attention: “without an expert witness.” If you bring a live expert to testify about AI-generated evidence, Rule 702 still governs as it always has. Rule 707 closes a narrower gap: the situation where a party tries to introduce AI output as evidence with no expert at all.

A second distinction worth making early: Rule 707 targets evidence offered at trial. If you use AI for document review, predictive coding, or technology-assisted review during discovery, those tools are governed by proportionality under Rule 26(b)(1), not Rule 707. The boundary between discovery tools and trial evidence is not fully settled. But the rule’s scope is narrower than the headlines suggest.

The Advisory Committee removed an earlier exemption for “routinely relied upon commercial software” before the June 2025 Standing Committee meeting. The concern was blunt: that language could exempt ChatGPT output from coverage. What remains is a single carve-out for “the output of simple scientific instruments” like thermometers and electronic scales.

The rule is not law yet. The public comment period closed February 16, 2026, with comments from Lawyers for Civil Justice, Exxon Mobil, and the Federal Magistrate Judges Association. The Advisory Committee’s final vote is expected in May 2026. If approved and transmitted through the Judicial Conference, Supreme Court, and Congress by May 2027, the earliest effective date is December 1, 2027. But courts are already moving in this direction without waiting for the rule.

How the Machine-Generated Evidence Daubert Standard Works

The Daubert trilogy (Daubert v. Merrell Dow, 1993; Joiner, 1997; Kumho Tire, 1999) established trial judges as gatekeepers for scientific evidence. Under Daubert, courts evaluate whether a methodology can be tested, whether it has been peer-reviewed, what its error rate is, and whether the relevant scientific community accepts it. Rule 707 takes those same questions and points them at AI.

Applied to machine-generated evidence, Rule 702(a) through (d) translates into specific demands. Has the AI model been validated in circumstances similar to your case? Is the training data representative of the population at issue? Can you identify the error rate? Has the methodology been published or reviewed by independent researchers?

The 2023 amendments to Rule 702 set the stage. Those amendments tightened the gatekeeping standard by requiring the proponent to demonstrate reliability “by a preponderance of the evidence” and by emphasizing that an expert’s opinion must stay within the bounds of what the methodology supports. Courts have already used that strengthened standard to exclude AI-related evidence.

In Kohls v. Ellison (D. Minn. Jan. 2025), a Stanford professor submitted a declaration drafted with LLM assistance that cited two fabricated academic articles. The court excluded it, finding the expert’s “unchecked use of generative AI” shattered his credibility. Signing a declaration under penalty of perjury, the court stressed, “is not a mere formality.” The malpractice risks of unchecked AI use are already real.

In Matter of Weber (N.Y. Surr. Ct. 2024), a damages expert used Microsoft Copilot to check calculations but could not recall his prompts or explain how the tool worked. The court ordered mandatory AI disclosure and a Frye hearing, becoming the first to require both.

Some challenges succeed. In United States v. Anderson (3d Cir. 2026), the Third Circuit upheld TrueAllele probabilistic genotyping under Daubert, finding a false-positive rate of 0.005%, compared to 2-6% for manual human review. Forty-two validation studies backed the methodology. That is what a successful Daubert foundation looks like.

And in Ferlito v. Harbor Freight Tools (E.D.N.Y. Apr. 2025), the court refused to exclude an expert who used an LLM to “double-check his findings.” The distinction matters. Using AI to verify your work is defensible. Letting AI produce your conclusions is not.

Judges Paul Grimm and Professor Maura Grossman, in their public comment supporting Rule 707, argued that borrowing from Rule 702 “makes sense because it has been strengthened by its recent amendment and its factors are well known to both judges and lawyers.” The test is familiar. The application to AI is new.

What Counts as a “Simple Scientific Instrument” Under Rule 707?

The last sentence of Rule 707 exempts “the output of simple scientific instruments.” The Committee Note points to devices like electronic scales and battery-operated digital thermometers. Breathalyzers and radar guns would likely qualify too.

Everything else is contested.

GPS data from trucking companies. Electronic logging devices. Fetal monitoring strips. Blood lab results. These are instruments that hospitals and transportation companies rely on daily. Whether they count as “simple” under Rule 707 is an open question. The Committee initially used the word “basic” and switched to “simple” at the Standing Committee’s suggestion, but neither term has a definition in the rule.

DNA analysis software sits firmly outside the exception. The Advisory Committee made clear that courts cannot “take judicial notice of the output of DNA software that has been admitted in hundreds of prior cases.” And an AI chatbot drafting a Rule 26 report? Definitely not a simple scientific instrument.

Professor Andrea Roth argued the exception should be deleted entirely because “even a digital thermometer is not ‘simple’; unlike a sextant or barometer of old, modern instruments are computerized and far beyond most people’s ability to understand.” The Committee rejected her proposal, and it rejected as too vague an alternative that would have exempted instruments “accessible to, and the extent of its reliability is known to, the general public.”

Expect the boundary fights to start before the admissibility fights. When opposing counsel introduces output from a medical device, a fleet management system, or a building access log, the first question will be whether Rule 707 applies at all.

How Rule 707 Changes Litigation Costs for Small Firms

Here is where Rule 707 hits your budget.

SEAK’s 2024 Expert Witness Fee Study found a median hourly rate of $450 for file review, $475 for depositions, and $500 for courtroom testimony. AI and cybersecurity specialists command $500 to $1,500 per hour. Average retainers run $3,546, and 68% of experts raised their rates in the last five years.

Quinn Emanuel spelled out the chain reaction. First, discovery to test the four admissibility elements. Then disputes over disclosure of AI tool methodology, confidentiality battles over proprietary algorithms, and opposing experts retained to challenge reliability. The firm went further, predicting that “the term ‘Rule 707 hearing’ may be coined, as was the term ‘Markman hearing.’”

For a 10-attorney firm handling a $200,000 commercial dispute, the math gets uncomfortable. If you want to introduce AI-generated contract analysis, timeline reconstruction, or predictive analytics without a live expert, you need to build a Daubert foundation for the AI tool itself. That means documenting the training data, identifying the error rate, producing validation studies, and being prepared for a hearing. A rough estimate: $15,000 to $40,000 in expert fees alone, before you account for attorney time preparing the motion and briefing.
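
To make that range concrete, here is a back-of-envelope sketch using the SEAK median rates quoted above. The hour counts are our illustrative assumptions, not survey figures; substitute your own expert’s rates and estimates.

```python
# Back-of-envelope Rule 707 foundation budget using the SEAK 2024 median
# rates cited above. Hour counts are illustrative assumptions, not survey data.

RATES = {"file_review": 450, "deposition": 475, "testimony": 500}  # $/hour

def foundation_estimate(review_hrs: float, depo_hrs: float, hearing_hrs: float) -> int:
    """Rough expert-fee total for building a Daubert foundation for an AI tool."""
    return int(review_hrs * RATES["file_review"]
               + depo_hrs * RATES["deposition"]
               + hearing_hrs * RATES["testimony"])

# Lean scenario: validation-study review, short deposition, half-day hearing.
low = foundation_estimate(22, 4, 6)     # 22*450 + 4*475 + 6*500 = $14,800
# Contested scenario: extensive review, full deposition, two-day hearing.
high = foundation_estimate(60, 10, 16)  # 60*450 + 10*475 + 16*500 = $39,750
print(f"Estimated expert fees: ${low:,} to ${high:,}")
```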

The ABA Task Force on AI put it bluntly in its December 2025 Year 2 Report: a “growing stratification between technology ‘haves’ and ‘have-nots.’” The drivers are licensing costs, infrastructure demands, and a shortage of staff with technical expertise to deploy AI. If you run a small firm handling eDiscovery with AI tools, that warning is about you.

But the cost argument cuts both ways. If you are on the defense side, Rule 707 gives you a built-in challenge to AI evidence your opponent introduces. You do not need to file a separate Daubert motion or retain your own expert just to demand a reliability hearing. The rule shifts the burden to the proponent. For a plaintiff’s firm facing a well-resourced defendant who introduces black-box algorithmic output, Rule 707 is protection, not a burden.

The DOJ’s dissent deserves a fair hearing here. Elizabeth Shapiro argued that Rule 702 already covers machine-generated evidence and that Rule 707 “only seeks to predict and regulate future needs.” She was not alone: Lawyers for Civil Justice recommended suspending consideration of the rule entirely, arguing existing rules may suffice. Shapiro may be right that the current approach is sufficient. But that approach has produced inconsistent results: ShotSpotter evidence admitted in Pennsylvania (Commonwealth v. Weeden), excluded in California (People v. Hardy), sent back for hearings in Massachusetts (Commonwealth v. Rios). A uniform rule creates predictability. Predictability reduces litigation over litigation.

The Deepfake Authentication Rule Waiting in the Wings

Rule 707 was not the only AI evidence proposal on the Advisory Committee’s agenda. The committee also developed a proposed Rule 901(c) for deepfake authentication, then decided not to publish it for comment.

Under the draft 901(c), a party challenging evidence as an AI-generated deepfake would first need to present enough evidence to support a finding of fabrication. If that threshold is met, the proponent would have to demonstrate the evidence is “more likely than not authentic.” The rule would apply to evidence offered under both Rule 901 and Rule 902.

The committee chose a wait-and-see approach. Its reasoning: deepfakes are “a sophisticated form of video or audio generated by AI,” and “forgery is a problem that courts have long had to confront, even if the means of creating the forgery and the sophistication of the forged evidence are now different.”

Not everyone agrees courts have the tools to handle this. In Mendones v. Cushman & Wakefield (Cal. Super. Ct.), self-represented plaintiffs submitted a deepfake video as witness testimony. Judge Kolakowski spotted it because the witness’s face was “nearly motionless” with “strange cuts and apparent repetition of mannerisms.” Judges will not always catch what Judge Kolakowski caught.

The committee kept draft 901(c) language “in the bullpen,” ready for rapid implementation. The question is not whether a deepfake rule will be needed. It is when.

How to Prepare Your Practice for Rule 707 Now

The Advisory Committee votes on Rule 707’s final form in May 2026. Whether or not the rule is adopted on schedule, the direction is clear: courts want to understand how AI-generated evidence was produced, and they will penalize parties who cannot explain it. Here is how to prepare.

Document your AI use now. Every time you or your experts use AI tools for case analysis, calculations, or document review, record the tool, the version, the prompts, and the outputs. In Matter of Weber, the expert’s inability to recall his prompts was fatal. Build the habit before a court requires it. Hundreds of federal and state courts already require AI disclosure in filings.
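
One way to build that habit is to capture a structured record for every AI interaction in a matter. The sketch below is a minimal illustration; the field names are our assumptions about what Weber-style scrutiny would demand, not a schema any court has mandated.

```python
# Minimal AI-use audit record: an illustrative sketch, not a court-mandated
# schema. Field names are assumptions keyed to Matter of Weber, where the
# expert's inability to recall his prompts was fatal.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    tool: str          # e.g., "Microsoft Copilot"
    version: str       # model or version string as reported by the tool
    purpose: str       # what the tool was asked to do
    prompt: str        # exact prompt text, verbatim
    output: str        # full output, verbatim
    reviewed_by: str   # the human who verified the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical entry for an expert cross-checking a damages calculation.
record = AIUseRecord(
    tool="Microsoft Copilot",
    version="unknown",  # if the tool does not report one, record that fact
    purpose="Cross-check present-value damages calculation",
    prompt="Verify the discount-rate arithmetic in the attached table.",
    output="(full tool output pasted here verbatim)",
    reviewed_by="A. Associate",
)
print(json.dumps(asdict(record), indent=2))
```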

Update your expert retention agreements. Add language requiring experts to disclose any AI tools used in preparing their analysis, retain all prompts and inputs, and be prepared to testify about the AI methodology. Your duty to supervise AI-generated work product extends to your retained experts. If your expert cannot explain how the AI reached its conclusions, neither can you.

Budget for expert testimony from the start. If you plan to introduce AI-generated evidence at trial, include expert witness costs in your case budget from day one. The Committee Note’s warning that reliability will often be “impossible to meet without presenting expert testimony” is as close to a cost projection as the committee will give you. Do not assume you can skip the expert and rely on the AI output alone.

Know when Rule 707 helps you. On the challenge side, Rule 707 hands you a ready-made objection when opposing counsel introduces AI output without an expert. Even before the rule takes effect, the Daubert factors and cases like Hardy and Anderson give you the grounds to demand reliability hearings for AI evidence. The proportionality standards under Rule 26 already shape how courts evaluate AI in discovery; Rule 707 extends that scrutiny to trial evidence.

Watch the states. Louisiana became the first state to require disclosure of AI-generated evidence (Act 250, effective August 2025). California, New York, and Florida have their own proposals moving through committees. Federal Rule 707 will not be the last word.

For firms already using AI to accelerate document review and case assessment, Rule 707 is not a reason to stop. It is a reason to document your workflows and be ready to defend them. Tools that generate statistical validation reports, including control-set and elusion-test validation, produce the error rates and recall metrics a court will demand. The firms that build that habit now will spend less time in hearings later.
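
For a sense of what those numbers look like, here is a minimal sketch of the two validation metrics mentioned above: recall measured against a coded control set, and the elusion rate from a random sample of the documents slated for non-production. The sample counts are illustrative assumptions, not benchmarks.

```python
# Minimal sketch of TAR validation math: control-set recall and elusion rate.
# Sample counts below are illustrative assumptions, not benchmarks.

def control_set_recall(relevant_found: int, relevant_in_control: int) -> float:
    """Fraction of known-relevant control-set documents the review retrieved."""
    return relevant_found / relevant_in_control

def elusion_rate(relevant_in_sample: int, sample_size: int) -> float:
    """Fraction of a discard-pile sample that turns out to be relevant."""
    return relevant_in_sample / sample_size

# Control set: reviewers coded 400 documents as relevant; the AI-assisted
# review surfaced 356 of them.
recall = control_set_recall(356, 400)   # 0.89

# Elusion test: a 500-document random sample of the not-produced set
# turned up 4 relevant documents.
elusion = elusion_rate(4, 500)          # 0.008

print(f"Estimated recall: {recall:.0%}, elusion rate: {elusion:.1%}")
```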

Disclaimer: This blog post is published by Hintyr for informational purposes only and does not constitute legal advice. The discussion of evidence rules, case law, and proposed amendments is general in nature and may not reflect the rules applicable in your jurisdiction. Attorneys should consult their state bar’s ethics opinions and qualified legal counsel before making compliance decisions. No attorney-client relationship is created by reading this post.

Admissibility starts with how you review.

Rule 707 will scrutinize how AI-generated evidence was produced and validated. Hintyr gives solo and small firm attorneys an AI-powered document review workflow built for auditability and defensibility.