Explaining Legal Concepts with Augmented Large Language Models (GPT-4). (arXiv:2306.09525v2 [cs.CL] UPDATED)


By: Jaromir Savelka, Kevin D. Ashley, Morgan A. Gray, Hannes Westermann, Huihui Xu. Posted: June 23, 2023

Interpreting the meaning of legal open-textured terms is a key task of legal
professionals. An important source for this interpretation is how the term was
applied in previous court cases. In this paper, we evaluate the performance of
GPT-4 in generating factually accurate, clear and relevant explanations of
terms in legislation. We compare the performance of a baseline setup, where
GPT-4 is directly asked to explain a legal term, to an augmented approach,
where a legal information retrieval module is used to provide relevant context
to the model, in the form of sentences from case law. We found that the direct
application of GPT-4 yields explanations that appear, on the surface, to be of
very high quality. However, a detailed analysis uncovered limitations in the
factual accuracy of the explanations. Further, we found that the augmentation
improves quality and appears to eliminate hallucination, where the model
invents incorrect statements. These findings open the door to building systems
that can autonomously retrieve relevant sentences from case law and condense
them into a useful explanation for legal scholars, educators, and practicing
lawyers alike.
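
The augmented setup described in the abstract is, in essence, a retrieval-augmented generation pipeline: retrieve sentences from case law that apply the term, then ask GPT-4 to explain the term grounded in those sentences. As a rough illustration only (the paper does not include its prompts or retrieval code here), the sketch below contrasts the baseline and augmented setups. The retrieve_case_law_sentences helper and the prompt wording are hypothetical, and the OpenAI chat API is assumed as the interface to GPT-4.

```python
# Hypothetical sketch of the two setups compared in the paper:
# (1) baseline: ask GPT-4 directly to explain a statutory term;
# (2) augmented: prepend sentences retrieved from case law as context.
# The retrieval helper and prompt wording are illustrative assumptions,
# not the authors' actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def retrieve_case_law_sentences(term: str, provision: str, k: int = 10) -> list[str]:
    """Placeholder for a legal information retrieval module that returns
    sentences from court opinions applying `term` in `provision`."""
    raise NotImplementedError("plug in your own case-law retrieval here")


def explain_term(term: str, provision: str, augmented: bool = False) -> str:
    prompt = (
        f"Explain the meaning of the term '{term}' as used in {provision}. "
        "Be factually accurate, clear, and relevant."
    )
    if augmented:
        sentences = retrieve_case_law_sentences(term, provision)
        context = "\n".join(f"- {s}" for s in sentences)
        prompt = (
            "Relevant sentences from case law:\n"
            f"{context}\n\n{prompt} Ground the explanation in the sentences above."
        )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

In this sketch, the only difference between the two conditions is whether retrieved case-law sentences are prepended to the prompt, which mirrors the baseline-versus-augmented comparison the paper reports.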


DoctorMorDi

Moderator and Editor