Title: The Topos of Transformer Networks

Abstract: The transformer neural network has significantly outshone all other neural network architectures as the engine behind large language models. We provide a theoretical analysis of the expressivity of the transformer architecture through the lens of topos theory. From this viewpoint, we show that many common neural network architectures, such as convolutional, recurrent and graph convolutional networks, can be embedded in a pretopos of piecewise-linear functions, but that the transformer necessarily lives in its topos completion. In particular, this suggests that the two network families instantiate different fragments of logic: the former are first-order, whereas transformers are higher-order reasoners. Furthermore, we draw parallels with architecture search and gradient descent, integrating our analysis in the framework of cybernetic agents.
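
To make the contrast in the abstract concrete, here is a minimal illustrative sketch (not from the paper; function and parameter names are our own): for fixed weights, a ReLU layer is piecewise-linear in its input, whereas softmax self-attention computes its mixing weights from the input itself, so the resulting map is no longer piecewise-linear.

    import numpy as np

    def relu_layer(x, W, b):
        # Affine map followed by ReLU: for fixed W and b, this is a
        # piecewise-linear function of x.
        return np.maximum(W @ x + b, 0.0)

    def self_attention(X, Wq, Wk, Wv):
        # Single-head softmax attention. The attention weights are a smooth,
        # non-linear function of X, and the output multiplies those
        # input-dependent weights against V (also a function of X), so the
        # map X -> output is not piecewise-linear.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

This input-dependence of the mixing weights is one elementary way to see why attention escapes the piecewise-linear function class in which the other architectures embed.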
Subjects: Machine Learning (cs.LG); Category Theory (math.CT)
Cite as: arXiv:2403.18415 [cs.LG]
  (or arXiv:2403.18415v2 [cs.LG] for this version)

Submission history

From: Mattia Jacopo Villani
[v1] Wed, 27 Mar 2024 10:06:33 GMT (59kb)
[v2] Wed, 10 Apr 2024 09:24:16 GMT (59kb)
