Computer Science > Computation and Language

Title: Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics

Abstract: Transformers have supplanted recurrent neural networks as the dominant architecture both for natural language processing tasks and, despite criticisms of cognitive implausibility, for modeling the effect of predictability on online human language comprehension. However, two recently developed recurrent neural network architectures, RWKV and Mamba, appear to perform natural language tasks comparably to or better than transformers of equivalent scale. In this paper, we show that contemporary recurrent models are now also able to match, and in some cases exceed, the performance of comparably sized transformers at modeling online human language comprehension. This suggests that transformer language models are not uniquely suited to this task, and opens up new directions for debates about the extent to which architectural features of language models make them better or worse models of human language comprehension.
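The abstract refers to the common paradigm in which a language model's word-by-word surprisal is used as a predictor of human comprehension measures such as reading times. Below is a minimal sketch of the surprisal-computation step of that pipeline, assuming HuggingFace Transformers checkpoints; the model names (gpt2, state-spaces/mamba-130m-hf) and the stimulus sentence are illustrative assumptions, not the paper's actual experimental setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_surprisals(model_name: str, text: str):
    """Return (token, surprisal in bits) pairs from a causal language model."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    ids = tok(text, return_tensors="pt").input_ids          # (1, seq_len)
    with torch.no_grad():
        logits = model(ids).logits                           # (1, seq_len, vocab)

    # Surprisal of token t is -log2 P(token_t | tokens_<t);
    # the first token has no preceding context, so it is skipped.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    nats = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    bits = nats / torch.log(torch.tensor(2.0))
    tokens = tok.convert_ids_to_tokens(targets[0].tolist())
    return list(zip(tokens, bits[0].tolist()))

# Compare a transformer and a recurrent (Mamba) checkpoint of roughly similar scale.
# These checkpoint names are illustrative, not necessarily those evaluated in the paper.
for name in ["gpt2", "state-spaces/mamba-130m-hf"]:
    print(name, token_surprisals(name, "The cat sat on the mat.")[:3])
```

In the broader paradigm, per-word surprisals like these are then entered into regression models of behavioral or neural comprehension metrics; how the paper aggregates tokens into words and which human datasets it uses are detailed in the full text, not reproduced here.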
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2404.19178 [cs.CL]
  (or arXiv:2404.19178v1 [cs.CL] for this version)

Submission history

From: James Michaelov
[v1] Tue, 30 Apr 2024 01:02:15 GMT (98kb,D)
