
Title: Experimenting with Additive Margins for Contrastive Self-Supervised Speaker Verification

Abstract: Most state-of-the-art self-supervised speaker verification systems rely on a contrastive-based objective function to learn speaker representations from unlabeled speech data. We explore different ways to improve the performance of these methods by: (1) revisiting how positive and negative pairs are sampled through a "symmetric" formulation of the contrastive loss; (2) introducing margins similar to AM-Softmax and AAM-Softmax that have been widely adopted in the supervised setting. We demonstrate the effectiveness of the symmetric contrastive loss, which provides more supervision for the self-supervised task. Moreover, we show that Additive Margin and Additive Angular Margin reduce the overall number of false negatives and false positives by improving speaker separability. Finally, by combining both techniques and training a larger model, we achieve 7.50% EER and 0.5804 minDCF on the VoxCeleb1 test set, which outperforms other contrastive self-supervised methods on speaker verification.
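The two ideas the abstract names can be illustrated together: a "symmetric" contrastive loss averages the two matching directions (view A against view B and vice versa), and an additive margin (AM-Softmax style) is subtracted from the positive-pair similarities before the softmax, penalizing positives that are not confidently separated from negatives. The sketch below is a minimal NumPy illustration under assumed defaults (margin 0.1, temperature 0.07, positives on the diagonal of the cosine-similarity matrix); it is not the paper's exact implementation.

```python
import numpy as np

def symmetric_am_contrastive_loss(z_a, z_b, margin=0.1, temperature=0.07):
    """Symmetric contrastive loss with an additive margin (illustrative sketch).

    z_a, z_b : (N, D) embeddings of two augmented views of the same N utterances.
    Positive pairs lie on the diagonal of the (N, N) cosine-similarity matrix;
    the margin is subtracted from positive similarities only, AM-Softmax style.
    """
    # L2-normalize so dot products are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = z_a @ z_b.T                        # (N, N) cosine similarities
    n = sim.shape[0]
    sim = sim - margin * np.eye(n)           # additive margin on positives only
    logits = sim / temperature

    def xent_diag(l):
        # cross-entropy with the diagonal (the true positive) as the target
        l = l - l.max(axis=1, keepdims=True)             # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_p))

    # "symmetric" formulation: average the A->B and B->A matching directions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

For the Additive Angular Margin (AAM-Softmax) variant, the margin would instead be added inside the angle, i.e. the positive similarity `cos(theta)` is replaced by `cos(theta + m)` before scaling. Since the margin lowers the positive logit, the loss with a margin is strictly larger than without one on the same embeddings, which is what forces tighter speaker separability during training.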
Comments: accepted at INTERSPEECH 2023, 20th-24th August 2023, Dublin, Ireland
Subjects: Audio and Speech Processing (eess.AS); Machine Learning (cs.LG)
Journal reference: Proc. INTERSPEECH 2023
DOI: 10.21437/Interspeech.2023-1479
Cite as: arXiv:2306.03664 [eess.AS]
  (or arXiv:2306.03664v1 [eess.AS] for this version)

Submission history

From: Reda Dehak [view email]
[v1] Tue, 6 Jun 2023 13:26:43 GMT (252kb,D)
