TAR validation statistically measures how well your document review is performing. Run a Control Set (L1) to check precision, an Elusion Test (L2) to confirm you're hitting recall targets, or both together to build a defensible record of review quality.
What is technology-assisted review validation
Without validation, you have no way to prove your review was thorough. TAR validation fixes that: it draws random samples from your tagged document populations and asks human reviewers to grade them independently. The grades yield metrics such as precision, recall, and elusion rate that quantify review quality, and courts have recognized statistical validation of this kind as a cornerstone of defensible review.
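To make those metrics concrete, here is a minimal sketch of how they fall out of graded samples. The counts are hypothetical and the code is illustrative, not Hintyr's internal implementation:

```python
# Hypothetical grading results -- illustrative only, not Hintyr internals.
l1_graded = 400        # Control Set sample drawn from tagged-responsive docs
l1_responsive = 372    # of those, graded Responsive by reviewers

l2_graded = 600        # Elusion Test sample drawn from the discard pile
l2_responsive = 9      # responsive docs reviewers found in the discard sample

precision = l1_responsive / l1_graded     # share of tagged docs truly responsive
elusion_rate = l2_responsive / l2_graded  # share of discarded docs actually responsive

print(f"Precision:    {precision:.1%}")     # 93.0%
print(f"Elusion rate: {elusion_rate:.1%}")  # 1.5%
```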
Hintyr supports two validation levels you can run independently or together. Both use the same grading workflow: reviewers mark each sampled document as Responsive or Not Responsive.
Control Set and Elusion Test validation levels
- L1 - Control Set - Samples randomly from documents tagged as responsive. Reviewers grade each one to measure precision (the share of tagged documents that are truly responsive) and recall (the share of truly responsive documents your review found).
- L2 - Elusion Test - Samples randomly from documents not tagged as responsive (the discard pile). Reviewers grade each one to measure the elusion rate: the share of discarded documents that are actually responsive, and therefore an estimate of how many responsive documents the review missed. This confirms whether you've met your recall target.
You can choose to run L1 only, L2 only, or both together in a single validation test. Both levels share the same statistical configuration and are graded through the same interface.
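One reason to run both levels together: with precision from L1 and the elusion rate from L2, you can estimate recall across the whole population. The formula below is the standard elusion-based recall estimate and the numbers are hypothetical; Hintyr's exact calculation may differ:

```python
# Hypothetical population counts and metrics -- illustrative only.
tagged = 25_000         # documents tagged as responsive
discarded = 175_000     # documents not tagged as responsive
precision = 0.93        # from the L1 Control Set
elusion_rate = 0.015    # from the L2 Elusion Test

found = tagged * precision          # estimated truly responsive docs found
missed = discarded * elusion_rate   # estimated responsive docs left behind
recall = found / (found + missed)

print(f"Estimated recall: {recall:.1%}")  # ~89.9% with these inputs
```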
How TAR validation works in Hintyr
The workflow has three stages:

1. Create a validation test by selecting a tag, naming the test, choosing L1 and/or L2, and setting your statistical parameters (see the sketch after these steps).
2. Hintyr draws a random sample and presents documents one at a time in the Grading Panel, where reviewers mark each as Responsive or Not Responsive.
3. Once all samples are graded, Hintyr calculates final statistics and reports whether the validation passed or failed against your targets.
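Assuming the statistical parameters are the usual confidence level and margin of error, sample size typically follows the standard normal-approximation formula with a finite-population correction. A rough sketch for intuition only; Hintyr's actual method may differ:

```python
import math
from statistics import NormalDist

def sample_size(population: int, confidence: float = 0.95,
                margin_of_error: float = 0.02, p: float = 0.5) -> int:
    """Documents to grade at the given confidence level and margin of error."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # z-score, e.g. 1.96 at 95%
    n0 = z ** 2 * p * (1 - p) / margin_of_error ** 2    # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite-population correction

print(sample_size(175_000))  # ~2,369 documents at 95% confidence, +/-2% margin
```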
You'll find validation tests under TAR Validation in the Case Menu. The dialog has two tabs: Create Validation Test for starting new tests, and Continue Test for resuming or reviewing existing ones.