Computer Science > Computer Vision and Pattern Recognition
Title: MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition
(Submitted on 28 Apr 2024 (v1), last revised 16 May 2024 (this version, v2))
Abstract: This paper presents a novel approach to processing multimodal data for dynamic emotion recognition, named the Multimodal Masked Autoencoder for Dynamic Emotion Recognition (MultiMAE-DER). MultiMAE-DER leverages the closely correlated representation information within spatiotemporal sequences across the visual and audio modalities. By building on a pre-trained masked autoencoder model, MultiMAE-DER is realized through simple, straightforward fine-tuning. Its performance is enhanced by optimizing six fusion strategies for multimodal input sequences. These strategies address dynamic feature correlations within cross-domain data across spatial, temporal, and spatiotemporal sequences. In comparison with state-of-the-art multimodal supervised learning models for dynamic emotion recognition, MultiMAE-DER improves the weighted average recall (WAR) by 4.41% on the RAVDESS dataset and by 2.06% on CREMA-D. Furthermore, compared with the state-of-the-art multimodal self-supervised learning model, MultiMAE-DER achieves a 1.86% higher WAR on the IEMOCAP dataset.
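The abstract mentions six fusion strategies for combining visual and audio input sequences but does not spell them out. As a minimal illustrative sketch only, assuming token sequences are temporally aligned and that per-step interleaving and whole-sequence concatenation are two plausible such strategies (the function names, feature shapes, and strategies here are assumptions, not the paper's actual method):

```python
# Illustrative sketch, NOT the paper's code: two plausible ways to fuse
# temporally aligned visual and audio token sequences before feeding a
# single masked-autoencoder encoder. Feature vectors are dummy values.

def interleave_fusion(visual_tokens, audio_tokens):
    """Fuse two equal-length temporal sequences by alternating time steps.

    visual_tokens, audio_tokens: lists of per-frame feature vectors.
    Returns one fused sequence; within each time step the visual token
    precedes its matching audio token.
    """
    if len(visual_tokens) != len(audio_tokens):
        raise ValueError("sequences must be temporally aligned")
    fused = []
    for v, a in zip(visual_tokens, audio_tokens):
        fused.append(v)  # visual token for this time step
        fused.append(a)  # matching audio token
    return fused

def concat_fusion(visual_tokens, audio_tokens):
    """Alternative strategy: append the full audio sequence after the visual one."""
    return list(visual_tokens) + list(audio_tokens)

# Example: 3 time steps, 2-dim dummy features per modality.
vis = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
aud = [[1.0, 1.1], [1.2, 1.3], [1.4, 1.5]]
fused = interleave_fusion(vis, aud)
```

Either fused sequence would then be masked and passed through the shared encoder, with a classification head fine-tuned for emotion labels.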
Submission history
From: Peihao Xiang
[v1] Sun, 28 Apr 2024 21:53:42 GMT (909kb,D)
[v2] Thu, 16 May 2024 13:54:39 GMT (1078kb,D)