Computer Science > Computation and Language

Title: Walia-LLM: Enhancing Amharic-LLaMA by Integrating Task-Specific and Generative Datasets

Abstract: Large language models (LLMs) have received considerable attention in natural language processing (NLP) research because of their strong performance in understanding and generating human language. However, low-resource languages are left behind due to the scarcity of available resources. In this work, we focus on enhancing the LLaMA-2-Amharic model by integrating task-specific and generative datasets to improve its performance on Amharic. We compile an Amharic instruction fine-tuning dataset and fine-tune the LLaMA-2-Amharic model on it. The fine-tuned model shows promising results across different NLP tasks. We open-source our dataset creation pipeline, instruction datasets, trained models, and evaluation outputs to promote language-specific studies on these models.
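The abstract mentions compiling an Amharic instruction fine-tuning dataset. As a minimal sketch, assuming the common Alpaca-style JSONL format for instruction data (the paper's actual schema may differ, and the field names here are illustrative):

```python
import json

# Hypothetical instruction fine-tuning record in the widely used
# Alpaca-style format: an instruction, optional input, and target output.
# The actual Walia-LLM dataset schema may differ.
record = {
    "instruction": "Translate the following English sentence to Amharic.",
    "input": "Hello, how are you?",
    "output": "ሰላም፣ እንዴት ነህ?",
}

# Instruction datasets are commonly stored one JSON object per line (JSONL);
# ensure_ascii=False preserves the Amharic (Ge'ez) script characters.
line = json.dumps(record, ensure_ascii=False)
print(line)
```

During supervised fine-tuning, each such record is typically rendered into a single prompt/response training example via a fixed template.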
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2402.08015 [cs.CL]
  (or arXiv:2402.08015v5 [cs.CL] for this version)

Submission history

From: Israel Abebe Azime [view email]
[v1] Mon, 12 Feb 2024 19:25:11 GMT (206kb,D)
[v2] Wed, 20 Mar 2024 13:33:19 GMT (1166kb,D)
[v3] Mon, 22 Apr 2024 11:29:23 GMT (3504kb,D)
[v4] Fri, 26 Apr 2024 10:20:04 GMT (3504kb,D)
[v5] Mon, 29 Apr 2024 07:14:51 GMT (3504kb,D)
