Computer Science > Computation and Language

Title: Large Language Models are Inconsistent and Biased Evaluators

Abstract: The zero-shot capability of Large Language Models (LLMs) has enabled highly flexible, reference-free metrics for various tasks, making LLM evaluators common tools in NLP. However, the robustness of these LLM evaluators remains relatively understudied; existing work has mainly pursued optimal performance, measured by how well LLM scores correlate with human expert scores. In this paper, we conduct a series of analyses using the SummEval dataset and confirm that LLMs are biased evaluators: they (1) exhibit familiarity bias, a preference for text with lower perplexity, (2) show skewed and biased rating distributions, and (3) experience anchoring effects in multi-attribute judgments. We also find that LLMs are inconsistent evaluators, showing low "inter-sample" agreement and sensitivity to prompt differences that are insignificant to a human's understanding of text quality. Furthermore, we share recipes for configuring LLM evaluators to mitigate these limitations. Experimental results on the RoSE dataset demonstrate improvements over state-of-the-art LLM evaluators.
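
The familiarity-bias analysis lends itself to a small illustration: given each summary's perplexity and the score an LLM evaluator assigned it, a rank correlation quantifies any systematic preference for low-perplexity text. The sketch below is a minimal example, not the paper's actual evaluation code; the data arrays are hypothetical placeholders, and the choice of Spearman correlation is an illustrative assumption.

    # Minimal sketch of a familiarity-bias check: correlate per-summary
    # perplexity with the rating an LLM evaluator assigned. The numbers
    # below are hypothetical placeholders, not data from the paper.
    from scipy.stats import spearmanr

    perplexity = [12.3, 45.1, 8.7, 30.2, 19.5]  # lower = more "familiar" text
    llm_score = [4.5, 2.0, 4.8, 3.1, 3.9]       # evaluator rating on a 1-5 scale

    rho, p_value = spearmanr(perplexity, llm_score)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
    # A strongly negative rho suggests familiarity bias: the evaluator
    # systematically favors lower-perplexity text.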
Comments: 9 pages, 7 figures
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
MSC classes: 68T50 (Primary) 68T01, 68T37, 91F20 (Secondary)
ACM classes: I.2; I.2.7; I.7
Cite as: arXiv:2405.01724 [cs.CL]
  (or arXiv:2405.01724v1 [cs.CL] for this version)

Submission history

From: Rickard Stureborg
[v1] Thu, 2 May 2024 20:42:28 GMT (7392kb,D)
