
Title: Towards smaller, faster decoder-only transformers: Architectural variants and their implications

Abstract: Research on Large Language Models (LLMs) has recently seen exponential growth, largely focused on transformer-based architectures as introduced by [1] and further advanced by the decoder-only variants in [2]. Contemporary studies typically aim to improve model capabilities by increasing both architectural complexity and the volume of training data. However, research on reducing model size while maintaining performance remains limited. This study introduces three modifications to the decoder-only transformer architecture: ParallelGPT (p-gpt), LinearlyCompressedGPT (lc-gpt), and ConvCompressedGPT (cc-gpt). On code generation tasks, these variants achieve performance comparable to the conventional architecture while offering smaller model sizes and faster training times. We open-source the model weights and codebase to support future research and development in this domain.
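The abstract names the three decoder-only variants but does not describe their internals. As a point of reference only, below is a minimal PyTorch sketch of the standard decoder-only transformer block that such variants start from; the `parallel` flag, layer sizes, and overall layout are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a standard decoder-only transformer block (the baseline
# that variants such as p-gpt, lc-gpt, and cc-gpt would modify).
# The "parallel" option below is an illustrative assumption, not taken from the paper.
import torch
import torch.nn as nn


class DecoderBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4, parallel: bool = False):
        super().__init__()
        self.parallel = parallel  # hypothetical flag for illustration
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Causal mask: each position may attend only to itself and earlier positions.
        T = x.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        if self.parallel:
            # Assumed "parallel" layout: attention and MLP read the same normalized
            # input and both outputs are added to the residual stream.
            return x + attn_out + self.mlp(h)
        # Standard sequential pre-norm layout.
        x = x + attn_out
        return x + self.mlp(self.ln2(x))


if __name__ == "__main__":
    block = DecoderBlock()
    tokens = torch.randn(2, 16, 256)  # (batch, sequence, d_model)
    print(block(tokens).shape)        # torch.Size([2, 16, 256])
```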
Comments: 8 pages, 6 figures
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2404.14462 [cs.LG]
  (or arXiv:2404.14462v2 [cs.LG] for this version)

Submission history

From: Sathya Krishnan Suresh
[v1] Mon, 22 Apr 2024 06:19:46 GMT (832kb,D)
[v2] Wed, 24 Apr 2024 03:52:49 GMT (833kb,D)
