Hallucinating about Hallucinations: Stanford's Study on Legal AI Tools
A recent Stanford study has the industry up in arms. It is a preliminary research report, and according to the university, additional tools will be reviewed soon.
Why does this study matter? It suggests that legal-focused AI tools may not be significantly more accurate than general-purpose AI. Clear Guidance has been engaged by several law firms for AI work in the last twelve months, and every engagement had one common goal: minimize the risk and damage AI tools pose to the firm.
The results of the study may not be the best or the most thorough, but they do illustrate the risks of AI. Questions about dissents in specific Supreme Court cases were answered inaccurately (i.e., hallucinations), even by legal-focused tools. Most legal professionals assume that a legal-focused tool will return accurate answers in the majority of cases.
There is no takeaway about any specific AI tool, except to proceed with caution. AI tools continue to evolve every day, which can bring new issues and challenges. Given the professional obligations of the legal field, firms should proceed cautiously and make sure their attorneys and staff understand the risks these tools pose.
Did you know? Clear Guidance assists firms with building AI usage policies focused on minimizing risk. Contact us today to discuss minimizing risk to your firm.