TAR Validation

Last updated: 2026-03-23

TAR validation statistically measures how well your document review is performing. Run a Control Set (L1) to check precision, an Elusion Test (L2) to confirm you're hitting recall targets, or both together to build a defensible record of review quality.

[Screenshot: TAR Validation dialog. Create and continue validation tests from the Case Menu. Fields: Select Tag, Validation Test Name, Validation Type, Confidence Level, Margin of Error, Target Recall.]

What is technology-assisted review validation?

Without validation, you have no way to prove your review was thorough. TAR validation fixes that. It draws random samples from your tagged document populations and asks human reviewers to grade them independently. The results produce metrics like precision, recall, and elusion rate that quantify review quality. Courts have recognized this approach as a cornerstone of defensible review.
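
The arithmetic behind these metrics is straightforward. The sketch below uses invented counts to show how precision and elusion rate fall out of graded samples; Hintyr computes the real figures from your own grading results.

```python
# Hypothetical illustration of the point estimates behind a validation report.
# All counts are invented; Hintyr derives these from your actual graded samples.

l1_sample_size = 400          # docs drawn from the "tagged responsive" pile (L1)
l1_graded_responsive = 368    # reviewers confirmed these are truly responsive

precision = l1_graded_responsive / l1_sample_size       # 0.92

l2_sample_size = 1500         # docs drawn from the discard pile (L2)
l2_graded_responsive = 12     # responsive docs the review missed

elusion_rate = l2_graded_responsive / l2_sample_size    # 0.008

print(f"Precision:    {precision:.1%}")     # Precision:    92.0%
print(f"Elusion rate: {elusion_rate:.2%}")  # Elusion rate: 0.80%
```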

Hintyr supports two validation levels you can run independently or together. Both use the same grading workflow: reviewers mark each sampled document as Responsive or Not Responsive.

Control Set and Elusion Test validation levels

  • L1 - Control Set - Samples randomly from documents tagged as responsive. Reviewers grade each one to measure precision (how many tagged documents truly are responsive) and recall (how many truly responsive documents were found).
  • L2 - Elusion Test - Samples randomly from documents not tagged as responsive (the discard pile). Reviewers grade each one to measure the elusion rate, which shows how many responsive documents were missed. This confirms whether you've met your recall target.

You can choose to run L1 only, L2 only, or both together in a single validation test. Both levels share the same statistical configuration and are graded through the same interface.
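
Running both levels together lets you turn an elusion rate into a recall estimate. The projection below is a rough, hypothetical sketch: the pile sizes and rates are invented, and Hintyr's exact estimator may differ.

```python
# Hypothetical back-of-the-envelope recall check from L1 + L2 results.
# All figures are invented for illustration.

tagged_responsive = 20_000    # docs your team tagged responsive
discard_pile = 80_000         # docs tagged not responsive
precision = 0.92              # from the L1 control set
elusion_rate = 0.008          # from the L2 elusion test
target_recall = 0.80

true_positives = precision * tagged_responsive        # ~18,400 responsive docs found
estimated_missed = elusion_rate * discard_pile        # ~640 responsive docs missed
recall = true_positives / (true_positives + estimated_missed)  # ~0.966

print(f"Estimated recall: {recall:.1%} "
      f"({'meets' if recall >= target_recall else 'misses'} the {target_recall:.0%} target)")
```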

How TAR validation works in Hintyr

The workflow has three stages. First, you create a validation test by selecting a tag, naming the test, choosing L1 and/or L2, and setting your statistical parameters. Second, Hintyr draws a random sample and presents documents one at a time in the Grading Panel, where reviewers mark each as Responsive or Not Responsive. Third, once all samples are graded, Hintyr calculates final statistics and reports whether the validation passed or failed against your targets.
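
The confidence level and margin of error chosen in the first stage determine how many documents get sampled. Hintyr's exact sampling math isn't documented here; as an assumption, the sketch below uses the standard Cochran formula with a finite-population correction to show how those two parameters trade off.

```python
import math

def sample_size(population: int, confidence_z: float = 1.96,
                margin_of_error: float = 0.05, p: float = 0.5) -> int:
    """Classic Cochran sample size with finite-population correction.

    This is standard sampling statistics, not necessarily Hintyr's
    exact method. z = 1.96 corresponds to 95% confidence; p = 0.5 is
    the worst-case (largest-sample) assumption.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)   # finite-population correction
    return math.ceil(n)

# 95% confidence, ±5% margin of error, over a 20,000-doc pile:
print(sample_size(20_000))                          # 377
# Tightening the margin of error to ±2% grows the sample considerably:
print(sample_size(20_000, margin_of_error=0.02))    # 2144
```

Note that halving the margin of error roughly quadruples the base sample size, which is why tighter statistical targets mean substantially more grading work.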

You'll find validation tests under TAR Validation in the Case Menu. The dialog has two tabs: Create Validation Test for starting new tests, and Continue Test for resuming or reviewing existing ones.


Frequently asked questions

What is TAR validation?
TAR validation is a statistical quality-control process that measures how well your technology-assisted review is performing. It draws random samples from tagged document populations and has human reviewers grade them to calculate metrics like precision, recall, and elusion rate. Courts have accepted these metrics as evidence of a defensible review process.

When should I run a validation?
Run a validation after your review team has tagged a meaningful number of documents as responsive or not responsive. L1 (Control Set) is typically run to measure precision during or after review. L2 (Elusion Test) is run after review to confirm that your recall target has been met before producing documents under FRCP Rule 34.

What is the difference between L1 and L2?
L1 (Control Set) samples from documents tagged as responsive to measure precision and recall. L2 (Elusion Test) samples from the discard pile to measure elusion rate and validate recall. You can run them independently or together.

Can I run multiple validations on the same tag?
Yes. You can create as many validation tests as needed for any tag. This is useful when you want to re-validate after additional review work or compare results across different stages of your review.
