
Title: L3Cube-MahaNews: News-based Short Text and Long Document Classification Datasets in Marathi

Abstract: The availability of text or topic classification datasets in the low-resource Marathi language is limited, and existing datasets typically consist of fewer than 4 target labels, with some achieving nearly perfect accuracy. In this work, we introduce L3Cube-MahaNews, a Marathi text classification corpus that focuses on news headlines and articles. It stands out as the largest supervised Marathi corpus, containing over 1.05L (105,000) records classified into a diverse range of 12 categories. To accommodate different document lengths, MahaNews comprises three supervised datasets specifically designed for short texts, long documents, and medium paragraphs. The consistent labeling across these datasets facilitates document-length-based analysis. We provide detailed data statistics and baseline results on these datasets using state-of-the-art pre-trained BERT models. We conduct a comparative analysis between monolingual and multilingual BERT models, including MahaBERT, IndicBERT, and MuRIL. The monolingual MahaBERT model outperforms all others on every dataset. These datasets and models also serve as resources for Marathi topic classification and are publicly available at this https URL .
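
The baseline setup described in the abstract is standard sequence classification with a pre-trained BERT encoder. The following is a minimal sketch (not the authors' released code) of how a MahaBERT-style model could be applied to 12-way Marathi news topic classification using Hugging Face Transformers; the model id "l3cube-pune/marathi-bert-v2", the 128-token truncation length, and the example headline are illustrative assumptions, and the classification head below is untrained, so it would need fine-tuning on MahaNews before its predictions are meaningful.

    # Sketch only: assumed MahaBERT checkpoint and label count; see the paper's
    # linked repository for the released MahaNews datasets and models.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL_ID = "l3cube-pune/marathi-bert-v2"   # assumed monolingual Marathi BERT
    NUM_LABELS = 12                            # MahaNews defines 12 news categories

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_ID, num_labels=NUM_LABELS
    )  # the classification head is randomly initialized; fine-tune before use

    headline = "उदाहरण मराठी बातमी शीर्षक"  # placeholder Marathi headline
    inputs = tokenizer(headline, truncation=True, max_length=128, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_label_id = logits.argmax(dim=-1).item()
    print(predicted_label_id)

The same pipeline applies to the short-text, medium-paragraph, and long-document splits; only the truncation length would typically change with document length.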
Comments: Accepted at SPELLL 2023
Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
DOI: 10.1007/978-3-031-58495-4_4
Cite as: arXiv:2404.18216 [cs.CL]
  (or arXiv:2404.18216v1 [cs.CL] for this version)

Submission history

From: Raviraj Joshi [view email]
[v1] Sun, 28 Apr 2024 15:20:45 GMT (7613kb,D)
