SELF-[IN]CORRECT: LLMs Struggle with Discriminating Self-Generated Responses

By: Dongwei Jiang, Jingyu Zhang, Orion Weller, Nathaniel Weir, Benjamin Van Durme, Daniel Khashabi Posted: September 6, 2024
arXiv:2404.04298v2 Announce Type: replace
Abstract: Can LLMs consistently improve their previous outputs for better results? For this to hold, LLMs would need to be better at discriminating among previously generated alternatives than at generating initial responses. We explore the validity of this hypothesis in practice. We first formulate a unified framework that allows us to compare the generative and discriminative capability of any model on any task. In our resulting experimental analysis of several open-source and industrial LLMs, we observe that models are not reliably better at discriminating among previously generated alternatives than at generating initial responses. This finding challenges the notion that LLMs can enhance their performance solely through their own judgment.
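To make the comparison concrete, here is a minimal illustrative sketch (not the authors' code) of what contrasting generative and discriminative capability on the same task instances could look like. The helper callables `generate_answer`, `pick_best`, and `is_correct` are hypothetical placeholders for LLM calls and task-specific scoring, introduced only for illustration.

```python
# Sketch: compare a model's generative accuracy (answering from scratch)
# with its discriminative accuracy (choosing among its own candidates).
# All helper functions below are hypothetical placeholders, not the paper's API.

from typing import Callable, List


def generative_accuracy(questions: List[str], references: List[str],
                        generate_answer: Callable[[str], str],
                        is_correct: Callable[[str, str], bool]) -> float:
    """Fraction of questions the model answers correctly when generating freely."""
    hits = sum(is_correct(generate_answer(q), ref)
               for q, ref in zip(questions, references))
    return hits / len(questions)


def discriminative_accuracy(questions: List[str], references: List[str],
                            generate_answer: Callable[[str], str],
                            pick_best: Callable[[str, List[str]], str],
                            is_correct: Callable[[str, str], bool],
                            n_samples: int = 4) -> float:
    """Fraction of questions where the model selects a correct answer
    from among its own previously generated candidates."""
    hits = 0
    for q, ref in zip(questions, references):
        candidates = [generate_answer(q) for _ in range(n_samples)]
        chosen = pick_best(q, candidates)  # the model acts as its own judge
        hits += is_correct(chosen, ref)
    return hits / len(questions)
```

The paper's central question is whether the second quantity is reliably higher than the first; the authors report that, for the models they evaluated, it generally is not.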
Provided by: DoctorMorDi, Moderator and Editor