Computer Science > Computation and Language

Title: Strategic Data Ordering: Enhancing Large Language Model Performance through Curriculum Learning

Abstract: The rapid advancement of Large Language Models (LLMs) has improved text understanding and generation, but it poses growing computational resource challenges. This study proposes a curriculum learning-inspired, data-centric training strategy that begins with simpler tasks and progresses to more complex ones, using criteria such as prompt length, attention scores, and loss values to structure the training data. Experiments with the Mistral-7B (Jiang et al., 2023) and Gemma-7B (Team et al., 2024) models demonstrate that curriculum learning slightly improves performance over traditional random data shuffling. Notably, sorting data by our proposed attention criteria generally led to better performance. This approach offers a sustainable way to enhance LLM performance without increasing model size or dataset volume, addressing scalability challenges in LLM training.
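The abstract's core idea, sorting training examples from easy to hard by a difficulty proxy before training, can be sketched briefly. The snippet below is a minimal illustration using the simplest criterion the abstract names, prompt length; the function and field names (`prompt_length_difficulty`, `curriculum_order`, `"prompt"`) are hypothetical placeholders, not the authors' code.

```python
# Minimal sketch of length-based curriculum ordering (illustrative only;
# the paper's attention- and loss-based criteria are not reproduced here).

def prompt_length_difficulty(example: dict) -> int:
    # Difficulty proxy: longer prompts are treated as harder examples.
    return len(example["prompt"].split())

def curriculum_order(dataset: list[dict]) -> list[dict]:
    # Sort easy-to-hard instead of randomly shuffling before training.
    return sorted(dataset, key=prompt_length_difficulty)

if __name__ == "__main__":
    toy_data = [
        {"prompt": "Summarize this long passage about scaling laws in two sentences.", "response": "..."},
        {"prompt": "Translate 'hello' to French.", "response": "..."},
    ]
    for ex in curriculum_order(toy_data):
        print(ex["prompt"])  # the shorter (easier) prompt prints first
```

Under the same assumptions, loss-based ordering would replace the difficulty function with each example's loss under a reference model, and attention-based ordering with an aggregate attention score; the sorting step itself stays unchanged.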
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Report number: DQ-2024-05
Cite as: arXiv:2405.07490 [cs.CL]
  (or arXiv:2405.07490v1 [cs.CL] for this version)

Submission history

From: Jisu Kim
[v1] Mon, 13 May 2024 06:09:10 GMT (6887kb,D)
