
Title: Exploring Internal Numeracy in Language Models: A Case Study on ALBERT

Abstract: Transformer-based language models have been found to perform basic quantitative reasoning. In this paper, we propose a method for studying how these models internally represent numerical data, and apply it to the ALBERT family of language models. Specifically, we extract the learned embeddings these models use to represent tokens corresponding to numbers and ordinals, and subject these embeddings to Principal Component Analysis (PCA). The PCA results reveal that ALBERT models of different sizes, trained and initialized separately, consistently learn to use the axes of greatest variation to represent the approximate ordering of various numerical concepts. Numerals and their textual counterparts are represented in separate clusters, but increase along the same direction in 2D space. Our findings illustrate that language models, trained purely to model text, can intuit basic mathematical concepts, opening avenues for NLP applications that intersect with quantitative reasoning.
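
The kind of analysis described above can be illustrated with a short sketch (not the authors' released code): assuming the Hugging Face transformers ALBERT checkpoint "albert-base-v2" and scikit-learn's PCA, one can pull the model's static input embeddings for numeral and number-word tokens and project them onto their first two principal components. The token list below is an illustrative subset.

# Minimal sketch of the embedding + PCA analysis; assumes the HuggingFace
# "albert-base-v2" checkpoint and scikit-learn. Token choices are illustrative.
from sklearn.decomposition import PCA
from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")

# Numerals and their textual counterparts (illustrative subset).
numerals = [str(i) for i in range(10)]
words = ["zero", "one", "two", "three", "four",
         "five", "six", "seven", "eight", "nine"]
tokens = numerals + words

# Look up the static (non-contextual) input embedding for each token.
embedding_matrix = model.get_input_embeddings().weight.detach().numpy()
ids = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(t)[0]) for t in tokens]
vectors = embedding_matrix[ids]

# Project onto the two axes of greatest variation and print the coordinates.
pca = PCA(n_components=2)
coords = pca.fit_transform(vectors)
for token, (x, y) in zip(tokens, coords):
    print(f"{token:>6s}  PC1={x:+.3f}  PC2={y:+.3f}")

Plotting the resulting 2D coordinates is what reveals the clustering and ordering behavior the abstract describes: numeral tokens and number words fall in separate clusters while increasing along a shared direction.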
Comments: 4 pages + references, 4 figures. Accepted for publication at the MathNLP Workshop at LREC-COLING 2024
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2404.16574 [cs.CL]
  (or arXiv:2404.16574v1 [cs.CL] for this version)

Submission history

From: Ulme Wennberg
[v1] Thu, 25 Apr 2024 12:36:19 GMT (156kb,D)