
Title: A Study on the Calibration of In-context Learning

Abstract: Accurate uncertainty quantification is crucial for the safe deployment of machine learning models, and prior research has demonstrated improvements in the calibration of modern language models (LMs). We study in-context learning (ICL), a prevalent method for adapting static LMs through tailored prompts, and examine the balance between performance and calibration across a broad spectrum of natural language understanding and reasoning tasks. Through comprehensive experiments, we observe that, as the number of ICL examples increases, models first become more miscalibrated before achieving better calibration; miscalibration thus tends to arise in low-shot settings. Moreover, we find that methods aimed at improving usability, such as fine-tuning and chain-of-thought (CoT) prompting, can lead to miscalibration and unreliable natural language explanations. Finally, we explore recalibration techniques and find that a scaling-binning calibrator can consistently reduce calibration errors.
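The calibration errors discussed in the abstract are conventionally measured with the expected calibration error (ECE): predictions are grouped into confidence bins, and the gap between average confidence and empirical accuracy is averaged across bins, weighted by bin size. As a minimal illustrative sketch (pure Python, not the authors' code; the function name and equal-width binning scheme are this note's assumptions), ECE can be computed as:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE with equal-width bins: bin predictions by confidence, then
    average |accuracy - mean confidence| per bin, weighted by bin mass."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # Map a confidence in [0, 1] to a bin; clamp conf == 1.0 into the last bin.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))

    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(accuracy - avg_conf)
    return ece

# Example: two high-confidence hits, one mid-confidence hit, one mid-confidence miss.
print(expected_calibration_error([0.95, 0.95, 0.6, 0.6], [1, 1, 1, 0]))  # 0.075
```

A perfectly calibrated model (e.g. 90% confidence with 90% accuracy) scores an ECE of zero; the scaling-binning recalibrator studied in the paper reduces this metric by fitting a scaling function and then discretizing its outputs into bins.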
Comments: NAACL 2024
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2312.04021 [cs.CL]
  (or arXiv:2312.04021v4 [cs.CL] for this version)

Submission history

From: Hanlin Zhang
[v1] Thu, 7 Dec 2023 03:37:39 GMT (205kb,D)
[v2] Mon, 11 Dec 2023 06:05:05 GMT (213kb,D)
[v3] Tue, 2 Jan 2024 05:10:27 GMT (217kb,D)
[v4] Thu, 28 Mar 2024 03:01:45 GMT (223kb,D)
