Technology-Assisted Review (TAR) is a process that uses machine learning algorithms to classify documents as relevant or non-relevant in e-discovery. It reduces the volume of documents requiring manual review, lowering costs and improving consistency across large document sets.
What is Technology-Assisted Review?
Technology-Assisted Review, commonly called TAR, uses machine learning to prioritize and classify documents during e-discovery. Instead of having human reviewers examine every document in a collection, TAR trains an algorithm on a sample of human-reviewed documents. The algorithm then applies what it's learned to predict relevance across the remaining population.
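At its core, this train-then-score step looks like an ordinary text-classification pipeline. The sketch below is illustrative only, assuming a TF-IDF model with logistic regression via scikit-learn; it is not Hintyr's implementation, and the documents and labels are made up for the example.

```python
# Minimal sketch of the core TAR idea: train on human-reviewed documents,
# then predict relevance across the unreviewed population.
# Illustrative only -- real platforms add richer features, validation,
# and workflow controls. Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Human-reviewed seed set: document text plus a relevance decision (1 = relevant).
seed_docs = ["merger due diligence memo", "office holiday party invite"]
seed_labels = [1, 0]

# Unreviewed population to be scored by the model.
unreviewed_docs = ["draft merger agreement", "cafeteria menu for March"]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_train, seed_labels)

# Probability of relevance for each unreviewed document; high scorers are
# prioritized for human review (or produced, depending on the protocol).
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
```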
TAR was first approved by a U.S. court in Da Silva Moore v. Publicis Groupe (S.D.N.Y. 2012), a landmark ruling that established TAR as a defensible review methodology. Since then, courts across the United States, the United Kingdom, and Ireland have recognized TAR as reasonable and proportionate under applicable discovery rules. The Sedona Conference has published extensive guidance endorsing TAR when properly implemented and validated.
There are two primary TAR approaches. TAR 1.0 (also called simple passive learning) uses a fixed training set: a subject matter expert reviews a seed set, the algorithm trains on those decisions, and then ranks the entire population. TAR 2.0 (continuous active learning) integrates human review decisions in real time. The model updates continuously as reviewers work through the collection. TAR 2.0 is generally considered more efficient because the algorithm improves with every decision.
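The practical difference between the two approaches is when the model retrains. The following is a hedged sketch of a TAR 2.0-style continuous active learning loop, again assuming scikit-learn; the function and parameter names (`cal_review`, `human_grade`, `batch_size`) are illustrative, not any product's API. A TAR 1.0 workflow would run the training step once on the seed set and stop.

```python
# Illustrative continuous active learning (CAL) loop: retrain after every
# batch of human decisions and always serve the highest-scoring unreviewed
# documents next, so the model improves as reviewers work.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(docs, human_grade, seed_idx, batch_size=10, rounds=5):
    """docs: list of document texts; human_grade: callable returning 0/1
    for a document index (the reviewer); seed_idx: indices already graded.
    The seed set must contain at least one relevant and one non-relevant
    decision so the classifier has two classes to learn from."""
    X = TfidfVectorizer().fit_transform(docs)
    labels = {i: human_grade(i) for i in seed_idx}        # decisions so far
    for _ in range(rounds):
        graded = list(labels)
        model = LogisticRegression().fit(X[graded], [labels[i] for i in graded])
        scores = model.predict_proba(X)[:, 1]             # relevance scores
        # Rank unreviewed documents by predicted relevance; take the top batch.
        candidates = [i for i in np.argsort(-scores) if i not in labels]
        for i in candidates[:batch_size]:
            labels[i] = human_grade(i)                    # decision feeds back in
    return labels
```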
"Parties may by order of the court or agreement use technology-assisted review to reduce the burden of discovery, provided the methodology is transparent and the results are validated."-- FRCP Rule 26(b)(1) proportionality standard, as applied in Da Silva Moore v. Publicis Groupe (S.D.N.Y. 2012)
Key facts
- In a widely cited 2011 study in the Richmond Journal of Law and Technology, Maura Grossman and Gordon Cormack found that TAR achieved recall rates of 75-85%, meeting or exceeding the recall of exhaustive manual review.
- The RAND Institute for Civil Justice estimated that TAR can reduce document review costs by 50-80% compared to manual review alone.
- FRCP Rule 26(b)(1) establishes the proportionality standard that supports TAR adoption by balancing the needs of the case against the burden of review.
- The Sedona Conference Commentary on TAR (2014) provides widely cited best practices for implementing and defending TAR workflows.
TAR in Hintyr
Hintyr supports TAR through its TAR validation workflow. You can create a validation test, select a tag containing your reviewed documents, and configure statistical parameters including confidence level, margin of error, and target recall. Hintyr then draws a statistically valid random sample for human grading.
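For context on what those statistical parameters do, the standard sample-size calculation for estimating a proportion uses a normal approximation: n = z² · p(1 − p) / e², optionally with a finite-population correction. Hintyr's exact statistical method is not documented here; this sketch just shows the common textbook formula.

```python
# Sketch of the standard sample-size formula behind confidence level and
# margin of error settings. Assumes a normal approximation and worst-case
# prevalence p = 0.5; not a description of Hintyr's internal implementation.
import math
from statistics import NormalDist

def sample_size(confidence=0.95, margin_of_error=0.05, population=None, p=0.5):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g. 1.96 for 95%
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population size
    if population:                                      # finite-population correction
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)
```

At a 95% confidence level and a ±5% margin of error this yields 385 documents; the finite-population correction reduces that figure for smaller collections.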
The validation workflow supports both TAR 1.0 (Control Set) and TAR 2.0 (Elusion Testing) approaches. Reviewers grade sample documents as responsive or not responsive through a dedicated grading panel, and Hintyr calculates precision, recall, and elusion rate to measure agreement between human reviewers and AI predictions. You can create a validation and begin grading samples directly from the Case Menu.
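Those three metrics are simple ratios over the graded sample. The sketch below shows how they are conventionally computed from paired human and model decisions; the function and argument names are illustrative, not Hintyr's API.

```python
# Agreement metrics computed from graded sample documents. `human` and
# `model` are parallel lists of booleans (True = responsive).
def tar_metrics(human, model):
    tp = sum(h and m for h, m in zip(human, model))        # both say responsive
    fp = sum((not h) and m for h, m in zip(human, model))  # model over-calls
    fn = sum(h and (not m) for h, m in zip(human, model))  # model misses
    tn = sum((not h) and (not m) for h, m in zip(human, model))
    precision = tp / (tp + fp) if tp + fp else 0.0  # predicted responsive that truly are
    recall = tp / (tp + fn) if tp + fn else 0.0     # truly responsive that were found
    # Elusion: share of the predicted non-responsive ("null set") that is
    # actually responsive -- the quantity TAR 2.0 elusion testing estimates.
    elusion = fn / (fn + tn) if fn + tn else 0.0
    return precision, recall, elusion
```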
Frequently asked questions
What is the difference between TAR 1.0 and TAR 2.0?
TAR 1.0 trains the model once on a fixed seed set reviewed by a subject matter expert, then ranks the full population. TAR 2.0 retrains continuously as reviewers grade documents, so it generally reaches the target recall with less review effort.
Is TAR accepted by courts?
Yes. Since Da Silva Moore v. Publicis Groupe (S.D.N.Y. 2012), courts in the United States, the United Kingdom, and Ireland have accepted TAR as a defensible, proportionate review methodology.
Does TAR replace human review entirely?
No. Humans train the model, grade validation samples, and review the documents the model identifies as likely relevant; TAR reduces the volume requiring manual review rather than eliminating it.
How do I validate TAR results in Hintyr?
Create a validation test from the Case Menu, select the tag containing your reviewed documents, and set the confidence level, margin of error, and target recall. Hintyr draws a random sample for grading and reports precision, recall, and elusion rate.
Related terms
- Predictive Coding
- ESI (Electronically Stored Information)
- Privilege Review
- Review Platform
- Agentic Review