Computer Science > Computation and Language
Title: Walia-LLM: Enhancing Amharic-LLaMA by Integrating Task-Specific and Generative Datasets
(Submitted on 12 Feb 2024 (v1), last revised 29 Apr 2024 (this version, v5))
Abstract: Large language models (LLMs) have received much attention in natural language processing (NLP) research because of their exceptional performance in understanding and generating human languages. However, low-resource languages are left behind due to the unavailability of resources. In this work, we focus on enhancing the LLaMA-2-Amharic model by integrating task-specific and generative datasets to improve language model performance for Amharic. We compile an Amharic instruction fine-tuning dataset and fine-tune the LLaMA-2-Amharic model on it. The fine-tuned model shows promising results across different NLP tasks. We open-source our dataset creation pipeline, instruction datasets, trained models, and evaluation outputs to promote language-specific studies on these models.
Submission history
From: Israel Abebe Azime [view email][v1] Mon, 12 Feb 2024 19:25:11 GMT (206kb,D)
[v2] Wed, 20 Mar 2024 13:33:19 GMT (1166kb,D)
[v3] Mon, 22 Apr 2024 11:29:23 GMT (3504kb,D)
[v4] Fri, 26 Apr 2024 10:20:04 GMT (3504kb,D)
[v5] Mon, 29 Apr 2024 07:14:51 GMT (3504kb,D)