Computer Science > Computation and Language

Title: Fine-tuning the SwissBERT Encoder Model for Embedding Sentences and Documents

Abstract: Encoder models trained to embed sentences or short documents have proven useful for tasks such as semantic search and topic modeling. In this paper, we present a version of the SwissBERT encoder model that we fine-tuned specifically for this purpose. SwissBERT contains language adapters for the four national languages of Switzerland -- German, French, Italian, and Romansh -- and has been pre-trained on a large number of news articles in those languages. Using contrastive learning on a subset of these articles, we fine-tuned the model into a version we call SentenceSwissBERT. Multilingual experiments on document retrieval and text classification in a Switzerland-specific setting show that SentenceSwissBERT surpasses the accuracy of the original SwissBERT model and of a comparable baseline. The model is openly available for research use.
Comments: SwissText 2024
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2405.07513 [cs.CL]
  (or arXiv:2405.07513v1 [cs.CL] for this version)

Submission history

From: Jannis Vamvas
[v1] Mon, 13 May 2024 07:20:21 GMT (133kb,D)
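Usage sketch: the abstract describes an X-MOD-style encoder with per-language adapters whose token embeddings are pooled into sentence vectors for retrieval. The sketch below shows one plausible way to compute such embeddings with the Hugging Face transformers library. The model identifier, the adapter name "de_CH", and the use of mean pooling are assumptions for illustration; they are not specified in the text above.

import torch
from transformers import AutoModel, AutoTokenizer

# Assumed model id -- the paper says the model is openly available,
# but the exact Hugging Face identifier is not given in this abstract.
MODEL_ID = "ZurichNLP/sentence-swissbert"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

# SwissBERT builds on X-MOD, whose transformers implementation exposes
# set_default_language() to select a language adapter. The adapter name
# "de_CH" (Swiss Standard German) is assumed here.
model.set_default_language("de_CH")

sentences = [
    "Der Bundesrat hat heute eine neue Verordnung verabschiedet.",
    "Die Regierung beschloss neue Regeln.",
]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state

# Mean pooling over non-padding tokens, a common (assumed) choice for
# turning token embeddings into one fixed-size vector per sentence.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two sentence vectors, as one would use
# for semantic search or document retrieval.
similarity = torch.nn.functional.cosine_similarity(
    embeddings[0:1], embeddings[1:2]
).item()
print(f"cosine similarity: {similarity:.3f}")

For retrieval over a corpus, the same pooling step would be applied to every document once, with queries embedded on the fly and ranked by cosine similarity against the stored vectors.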
