
Title: Accelerating Production LLMs with Combined Token/Embedding Speculators

Abstract: This technical report describes the design and training of novel speculative decoding draft models for accelerating the inference speed of large language models in a production environment. By conditioning draft predictions on both context vectors and sampled tokens, we can train our speculators to efficiently predict high-quality n-grams, which the base model then accepts or rejects. This allows us to effectively predict multiple tokens per inference forward pass, accelerating the wall-clock inference speed of highly optimized base model implementations by a factor of 2-3x. We explore these initial results and describe next steps for further improvements.
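The speculate-then-verify loop the abstract describes can be illustrated with a minimal sketch. The snippet below is a toy, not the report's implementation: base_forward, draft_step, the weight matrices, and the greedy accept/reject rule are all hypothetical stand-ins, chosen only to mirror the abstract's description of a speculator that conditions on both the base model's context vector and the most recently sampled tokens, with the base model verifying the drafted n-gram in a single forward pass.

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB, HIDDEN, K = 32, 16, 3  # toy vocab size, hidden dim, draft length

    # Toy stand-ins for the real networks (hypothetical, for illustration).
    W_base = rng.normal(size=(HIDDEN, VOCAB))
    W_embed = rng.normal(size=(VOCAB, HIDDEN))
    W_draft = rng.normal(size=(2 * HIDDEN, VOCAB))

    def base_forward(tokens):
        """Pretend base LLM: returns contextual hidden states and greedy
        next-token logits for every position in one 'forward pass'."""
        h = np.tanh(W_embed[tokens].cumsum(axis=0))  # fake contextual states
        return h, h @ W_base                         # hidden states, logits

    def draft_step(context_vec, last_token):
        """Speculator head: conditions on BOTH the base model's context
        vector and the embedding of the last sampled token."""
        x = np.concatenate([context_vec, W_embed[last_token]])
        return int(np.argmax(x @ W_draft))           # greedy draft token

    def speculative_step(tokens):
        """One speculate-then-verify cycle with a greedy acceptance rule."""
        h, _ = base_forward(tokens)
        ctx, last = h[-1], tokens[-1]

        # 1) Cheap loop: the speculator proposes a K-token draft n-gram,
        #    feeding each drafted token back in while the context is fixed.
        draft = []
        for _ in range(K):
            t = draft_step(ctx, last)
            draft.append(t)
            last = t

        # 2) One base-model pass over tokens + draft verifies every
        #    candidate; accept the longest prefix the base model agrees with.
        _, logits = base_forward(tokens + draft)
        accepted = []
        for i, t in enumerate(draft):
            if int(np.argmax(logits[len(tokens) - 1 + i])) == t:
                accepted.append(t)                   # base model agrees
            else:
                break                                # reject rest of n-gram
        # Always gain at least one token: the base model's own prediction
        # at the first point of disagreement (or after a full acceptance).
        accepted.append(int(np.argmax(logits[len(tokens) - 1 + len(accepted)])))
        return tokens + accepted

    seq = [1, 2, 3]
    for _ in range(5):
        seq = speculative_step(seq)
    print(seq)  # several tokens may be emitted per verification pass

Each cycle emits between 1 and K+1 tokens for the cost of one expensive base-model pass plus K cheap speculator steps, so the wall-clock gain depends on how often the drafted n-grams are accepted; the report's 2-3x figure reflects that acceptance rate in production.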
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2404.19124 [cs.CL]
  (or arXiv:2404.19124v1 [cs.CL] for this version)

Submission history

From: Davis Wertheimer
[v1] Mon, 29 Apr 2024 21:59:07 GMT (2,397 KB)
