Computer Science > Information Retrieval

Title: Distillation for Multilingual Information Retrieval

Abstract: Recent work in cross-language information retrieval (CLIR), where queries and documents are in different languages, has shown the benefit of the Translate-Distill framework that trains a cross-language neural dual-encoder model using translation and distillation. However, Translate-Distill only supports a single document language. Multilingual information retrieval (MLIR), which ranks a multilingual document collection, is harder to train than CLIR because the model must assign comparable relevance scores to documents in different languages. This work extends Translate-Distill and proposes Multilingual Translate-Distill (MTD) for MLIR. We show that ColBERT-X models trained with MTD outperform their counterparts trained with Multilingual Translate-Train, which is the previous state-of-the-art training approach, by 5% to 25% in nDCG@20 and 15% to 45% in MAP. We also show that the model is robust to the way languages are mixed in training batches. Our implementation is available on GitHub.
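The core of the MTD objective is score distillation from a cross-language teacher into the dual-encoder student over candidate passages that may come from different document languages. Below is a minimal sketch of such a distillation loss in PyTorch; the tensor names and the toy scores are illustrative assumptions, not the paper's released code.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_scores: torch.Tensor,
                          teacher_scores: torch.Tensor) -> torch.Tensor:
        # KL divergence between teacher and student score distributions
        # over the candidate passages sampled for one query.
        return F.kl_div(
            F.log_softmax(student_scores, dim=-1),
            F.softmax(teacher_scores, dim=-1),
            reduction="batchmean",
        )

    # Toy example: one query with four candidate passages, which under MTD
    # may be drawn from different document languages (the paper reports the
    # model is robust to how languages are mixed within a batch).
    teacher_scores = torch.tensor([[8.1, 5.3, 2.0, 0.5]])  # from a cross-encoder teacher
    student_scores = torch.tensor([[7.0, 6.0, 1.0, 1.5]], requires_grad=True)
    loss = distillation_loss(student_scores, teacher_scores)
    loss.backward()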
Comments: 6 pages, 1 figure, accepted at SIGIR 2024 as short paper
Subjects: Information Retrieval (cs.IR); Computation and Language (cs.CL)
DOI: 10.1145/3626772.3657955
Cite as: arXiv:2405.00977 [cs.IR]
  (or arXiv:2405.00977v1 [cs.IR] for this version)

Submission history

From: Eugene Yang
[v1] Thu, 2 May 2024 03:30:03 GMT (108kb,D)
