
Sound

New submissions

[ total of 4 entries: 1-4 ]

New submissions for Fri, 10 May 24

[1]  arXiv:2405.05467 [pdf, other]
Title: AFEN: Respiratory Disease Classification using Ensemble Learning
Comments: Under Review Process for MLForHC 2024
Subjects: Sound (cs.SD); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)

We present AFEN (Audio Feature Ensemble Learning), a model that leverages Convolutional Neural Networks (CNN) and XGBoost in an ensemble learning fashion to perform state-of-the-art audio classification for a range of respiratory diseases. We use a meticulously selected mix of audio features that capture the salient attributes of the data and allow for accurate classification. The extracted features are then used as input to two separate classifiers: 1) a multi-feature CNN classifier and 2) an XGBoost classifier. The outputs of the two models are then fused using soft voting. By exploiting ensemble learning, we achieve increased robustness and accuracy. We evaluate the model on a database of 920 respiratory sounds, to which we apply data augmentation to increase the diversity of the data and the generalizability of the model. We empirically verify that AFEN sets a new state of the art on Precision and Recall, while decreasing training time by 60%.
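The soft-voting fusion described in the abstract can be sketched as a weighted average of the two classifiers' class-probability outputs. A minimal illustration (the equal weights and three-class probabilities below are illustrative, not taken from the paper):

```python
def soft_vote(cnn_probs, xgb_probs, w_cnn=0.5, w_xgb=0.5):
    """Fuse two classifiers' per-class probabilities by weighted averaging,
    then renormalize so the result sums to 1."""
    fused = [w_cnn * p + w_xgb * q for p, q in zip(cnn_probs, xgb_probs)]
    total = sum(fused)
    return [p / total for p in fused]

# Hypothetical probabilities for three respiratory-disease classes
cnn = [0.7, 0.2, 0.1]   # from the multi-feature CNN
xgb = [0.4, 0.5, 0.1]   # from the XGBoost classifier
fused = soft_vote(cnn, xgb)                              # [0.55, 0.35, 0.1]
pred = max(range(len(fused)), key=fused.__getitem__)     # predicted class index: 0
```

With equal weights, soft voting simply averages the distributions, so a class that both models rank highly wins even if neither ranks it first.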

[2]  arXiv:2405.05498 [pdf, other]
Title: The RoyalFlush Automatic Speech Diarization and Recognition System for In-Car Multi-Channel Automatic Speech Recognition Challenge
Subjects: Sound (cs.SD); Audio and Speech Processing (eess.AS)

This paper presents our system submission for the In-Car Multi-Channel Automatic Speech Recognition (ICMC-ASR) Challenge, which focuses on speaker diarization and speech recognition in complex multi-speaker scenarios. To address these challenges, we develop end-to-end speaker diarization models that notably decrease the diarization error rate (DER) by 49.58% compared to the official baseline on the development set. For speech recognition, we utilize self-supervised learning representations to train end-to-end ASR models. By integrating these models, we achieve a character error rate (CER) of 16.93% on the track 1 evaluation set, and a concatenated minimum permutation character error rate (cpCER) of 25.88% on the track 2 evaluation set.
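The character error rate (CER) reported above is conventionally the character-level edit distance between reference and hypothesis, divided by the reference length. A minimal sketch of the standard metric (not the challenge's official scoring tool):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein edit distance between the
    character sequences, divided by the reference length."""
    h = list(hypothesis)
    # One-row dynamic-programming edit distance
    d = list(range(len(h) + 1))
    for i, rc in enumerate(reference, 1):
        prev, d[0] = d[0], i
        for j, hc in enumerate(h, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,          # deletion
                       d[j - 1] + 1,      # insertion
                       prev + (rc != hc)) # substitution (or match)
            prev = cur
    return d[len(h)] / len(reference)

print(cer("hello", "helo"))  # one deleted character over 5 -> 0.2
```

The cpCER used for track 2 additionally searches over permutations of speaker-attributed segments before concatenating and scoring; the core per-pair computation is the same edit-distance ratio.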

Replacements for Fri, 10 May 24

[3]  arXiv:2309.10674 (replaced) [pdf, other]
Title: USED: Universal Speaker Extraction and Diarization
Subjects: Sound (cs.SD); Audio and Speech Processing (eess.AS)
[4]  arXiv:2405.02991 (replaced) [pdf, other]
Title: Steered Response Power for Sound Source Localization: A Tutorial Review
Subjects: Sound (cs.SD); Audio and Speech Processing (eess.AS)

