Ever since Relativity announced their new GenAI aiR suite of tools based on OpenAI's GPT-4 model, there has been plenty of speculation on how these new tools will work and how we can best incorporate them into our everyday eDiscovery practices. Today we will be looking at some of the emerging use cases for the first of this suite of tools - Relativity's aiR for Review.
How can aiR for Review be used?
- Boosting efficiencies in review
  - Responsiveness review (AI reviews, humans QC)
  - QC human review (human reviews, AI QC)
- Training your Active Learning model
- Replacing issue coding
Boosting efficiencies in review:
aiR for Review takes a 'bottom-up' approach, meaning it analyses each document individually rather than skimming over a group of documents as a whole. This makes the tool ideally suited to supporting reviewers in your document review processes. You can choose how you use the tool, but the two most common approaches are outlined below.
Responsiveness review (AI reviews, humans QC)
The first and most common way we see the aiR for Review tool being used is to supplement human reviewers at the first-pass review stage. This is often the most time-consuming and tedious part of the review process and is therefore perfect for machine intervention. aiR for Review can work alongside your reviewers to speed up their review time and reduce the burden on your team.
Based on your review protocol, the tool can be trained on a small number of documents and run across your workspace at the beginning of the review process. The tool will identify likely hot, likely relevant, unlikely hot and unlikely relevant documents that will allow your reviewers to prioritise the documents they look at throughout their review.
The grounding of the tool in quotations also allows the reviewers to see at a glance if the pulled quotation is relevant to the matter. When looking at the document in the viewer, the quotation is highlighted to enable the reviewers to easily locate the most important parts of a document.
Throughout this process, the reviewers are still responsible for monitoring the AI algorithm and ensuring that the documents it identifies as pertinent are relevant to the case.
QC human review (human reviews, AI QC)
Where your client is more sceptical of AI technology, you could use aiR for Review as a QC method instead of having it play an integral role in the review. This allows you to benefit from the technology while introducing it to your clients in a lower-stakes way, helping them build trust in the tool.
You could run the tool over documents marked as "not relevant" as a final check to make sure your reviewers have correctly understood their training material. You could also use the tool to analyse documents remaining in your 'unreviewed' pile in an Active Learning project. Alongside your statistical analysis, you could run aiR for Review over the remaining documents to quickly identify any remaining 'likely relevant' documents. This could save a lot of time that would otherwise be spent running your elusion test and deliberating over whether more potentially relevant documents exist. With aiR for Review, you can know in a matter of minutes how many likely relevant documents you have remaining to review.
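As a rough illustration of that final check, tallying the predictions over the unreviewed pile is a simple count. The field names below ("doc_id", "prediction") are hypothetical stand-ins, not Relativity's actual export schema:

```python
# Hypothetical export of aiR for Review predictions for the unreviewed pile.
# The field names are illustrative only, not Relativity's real schema.
unreviewed_predictions = [
    {"doc_id": "DOC-001", "prediction": "likely relevant"},
    {"doc_id": "DOC-002", "prediction": "unlikely relevant"},
    {"doc_id": "DOC-003", "prediction": "likely relevant"},
    {"doc_id": "DOC-004", "prediction": "unlikely relevant"},
]

# Count how many likely relevant documents remain to be reviewed.
remaining = [
    d for d in unreviewed_predictions
    if d["prediction"] == "likely relevant"
]
print(f"{len(remaining)} likely relevant documents remaining")
```

In practice this figure would sit alongside your elusion test statistics rather than replace them.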

Training your Active Learning model:
The results of the aiR for Review tool make brilliant training material for an Active Learning algorithm. We would typically advise training your algorithm on 100 high-quality relevant documents and 100 definitely not-relevant documents to give it a strong grounding; using the results of aiR for Review to quickly locate these documents makes the process much easier and faster.
By using the results provided by aiR for Review to train your Active Learning algorithm, you are making sure the algorithm is trained on the highest-quality documents and that your review will be extra efficient as you combine the results of both tools. By combining the predictions and supporting reasoning supplied by aiR for Review with the prioritised review function of Active Learning, you can be assured of a swift and efficient review that promotes the most 'likely relevant' documents to your reviewers.
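To make the seeding step above concrete, here is a minimal sketch of selecting 100 documents per class from aiR predictions to seed an Active Learning model. The data shape and field names are assumptions for illustration, not Relativity's API:

```python
# Hypothetical aiR for Review results; field names are illustrative only.
# In a real project these would come from your workspace, not be generated.
results = [
    {"doc_id": f"DOC-{i:03d}",
     "prediction": "relevant" if i % 2 == 0 else "not relevant"}
    for i in range(400)
]

SEED_SIZE = 100  # per-class seed size suggested in the text

# Take the first 100 predicted-relevant and 100 predicted-not-relevant docs.
relevant_seed = [d for d in results if d["prediction"] == "relevant"][:SEED_SIZE]
not_relevant_seed = [d for d in results if d["prediction"] == "not relevant"][:SEED_SIZE]

# This balanced set would then be human-verified and used to train Active Learning.
seed_set = relevant_seed + not_relevant_seed
print(len(seed_set))  # 200
```

The human-verification step matters: the aiR predictions locate candidate seeds quickly, but reviewers should confirm the coding before the documents train the Active Learning model.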
Replacing issue coding:
Issue coding goes beyond responsiveness review by tagging documents more specifically based on their content. This is a nuanced and time-consuming process that requires reviewers to read most, if not all, of a document to understand its contents and tag it appropriately.
With aiR for Review, you can quickly scan the contents of documents and have the tool apply the tag it thinks best matches the content of each document, based on the instructions you have provided to the algorithm. It is then simply up to the reviewers to agree or disagree with the predictions made by the tool. These predictions, like with the responsiveness review, are supported by excerpts from the document and reasoning as to why the tool has designated a document the way it has.
aiR for Review is still in its wider testing phase and yet is already capable of so much. We cannot wait to see how this tool and the rest of the aiR suite continue to develop, and how we can put their capabilities to good use to provide our clients with greater efficiencies and cost savings. If you would like to learn more about aiR for Review, you can book a consultation with one of our expert Project Managers and have a friendly chat about how this tool might support you in your next eDiscovery project.