Computer Science > Computation and Language

Title: A Study on the Calibration of In-context Learning

Abstract: Modern auto-regressive language models are trained to minimize log loss on broad data by predicting the next token, so they are expected to produce calibrated answers on next-token prediction tasks. We study this for in-context learning (ICL), a widely used way to adapt frozen large language models (LLMs) by crafting prompts, and investigate the trade-offs between performance and calibration on a wide range of natural language understanding and reasoning tasks. We conduct extensive experiments showing that such trade-offs may worsen as we increase model size, incorporate more ICL examples, and fine-tune models using instruction tuning, dialog tuning, or reinforcement learning from human feedback (RLHF) on carefully curated datasets. Furthermore, we find that common recalibration techniques such as temperature scaling, though widely effective in other settings, provide limited reductions in calibration error, suggesting that new methods may be required for settings where models are expected to be reliable.
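The abstract refers to calibration error and temperature scaling. As a minimal illustrative sketch (not the paper's implementation), expected calibration error (ECE) bins predictions by confidence and averages the gap between accuracy and confidence within each bin, while temperature scaling divides logits by a scalar T before the softmax:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence and average |accuracy - confidence|,
    weighting each bin by the fraction of samples it contains."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
    return ece

def temperature_scale(logits, T):
    """Softmax of logits / T; T > 1 softens (lowers confidence), T < 1 sharpens."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)
```

For example, a model that predicts class 0 with 0.99 confidence on every input but is right only half the time has an ECE near 0.49; fitting T > 1 on held-out data lowers the confidences toward the true accuracy without changing the predicted labels.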
Comments: Spotlight Talk at NeurIPS 2023 Workshop on Failure Modes in the Age of Foundation Models
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2312.04021 [cs.CL]
  (or arXiv:2312.04021v2 [cs.CL] for this version)

Submission history

From: Hanlin Zhang [view email]
[v1] Thu, 7 Dec 2023 03:37:39 GMT (205kb,D)
[v2] Mon, 11 Dec 2023 06:05:05 GMT (213kb,D)
[v3] Tue, 2 Jan 2024 05:10:27 GMT (217kb,D)
[v4] Thu, 28 Mar 2024 03:01:45 GMT (223kb,D)
