
Multimedia

New submissions

[ total of 7 entries: 1-7 ]

New submissions for Mon, 10 Jun 24

[1]  arXiv:2406.04632 [pdf, other]
Title: StreamOptix: A Cross-layer Adaptive Video Delivery Scheme
Comments: under review in Transactions on Multimedia (TMM)
Subjects: Multimedia (cs.MM)

This paper presents StreamOptix, a cross-layer video delivery scheme, and proposes a joint optimization algorithm for video delivery that leverages the characteristics of the physical (PHY), medium access control (MAC), and application (APP) layers. Most existing methods optimize video transmission at each layer individually. Realizing a cross-layer design has always been a significant challenge, mainly due to the complex interactions and mismatched timescales between layers, as well as the distinct objectives pursued at different layers. To address these complications, we take a divide-and-conquer approach and break the formulated cross-layer optimization problem for video delivery into three sub-problems. We then propose a three-stage closed-loop optimization framework, which consists of 1) an adaptive bitrate (ABR) strategy based on link capacity information from the PHY layer, 2) a video-aware resource allocation scheme that accounts for the APP bitrate constraint, and 3) a link adaptation technique utilizing soft acknowledgment feedback (soft-ACK). The proposed framework also supports collecting the distorted bitstreams transmitted across the link, which allows a more faithful assessment of video quality than many existing ABR methods that simply neglect the distortions occurring in the PHY layer. Experiments conducted under various network settings demonstrate the effectiveness and superiority of the new cross-layer optimization strategy. A byproduct of this study is a set of more comprehensive performance metrics for video delivery, which lays the foundation for extending our system to multimodal communications in the future. Code for reproducing the experimental results is available at https://github.com/Evan-sudo/StreamOptix.
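To make the three-stage loop concrete, here is a minimal, self-contained Python sketch of such a closed-loop controller. It is not the authors' implementation: the bitrate ladder, MCS table, resource-allocation rule, and soft-ACK thresholds are all hypothetical placeholders chosen only to illustrate how the ABR, resource-allocation, and link-adaptation stages feed back into one another.

import random

# Illustrative sketch only: the values and update rules below are hypothetical.
BITRATE_LADDER = [1.0, 2.5, 5.0, 8.0]   # candidate APP bitrates (Mbps)
MCS_TABLE = [2.0, 4.0, 6.0, 9.0]        # throughput per MCS index (Mbps)

def abr_select(link_capacity_mbps):
    """Stage 1: pick the highest APP bitrate the reported PHY capacity sustains."""
    feasible = [b for b in BITRATE_LADDER if b <= link_capacity_mbps]
    return feasible[-1] if feasible else BITRATE_LADDER[0]

def allocate_resources(app_bitrate, total_prbs=100):
    """Stage 2: video-aware MAC allocation under the APP bitrate constraint (toy rule)."""
    return min(total_prbs, int(total_prbs * app_bitrate / max(BITRATE_LADDER)))

def adapt_link(mcs_idx, soft_ack):
    """Stage 3: link adaptation driven by soft-ACK feedback in [0, 1]."""
    if soft_ack > 0.9 and mcs_idx < len(MCS_TABLE) - 1:
        return mcs_idx + 1      # channel looks good: use a more aggressive MCS
    if soft_ack < 0.7 and mcs_idx > 0:
        return mcs_idx - 1      # too many soft errors: back off
    return mcs_idx

mcs = 1
for step in range(5):
    capacity = MCS_TABLE[mcs] * random.uniform(0.7, 1.0)   # noisy PHY capacity estimate
    bitrate = abr_select(capacity)                         # APP-layer decision
    prbs = allocate_resources(bitrate)                     # MAC-layer decision
    soft_ack = random.uniform(0.5, 1.0)                    # simulated soft-ACK feedback
    mcs = adapt_link(mcs, soft_ack)                        # PHY-layer decision
    print(f"step {step}: capacity={capacity:.1f} Mbps, bitrate={bitrate} Mbps, "
          f"PRBs={prbs}, next MCS={mcs}")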

Cross-lists for Mon, 10 Jun 24

[2]  arXiv:2406.04429 (cross-list from eess.AS) [pdf, other]
Title: InaGVAD: A Challenging French TV and Radio Corpus Annotated for Speech Activity Detection and Speaker Gender Segmentation
Comments: Voice Activity Detection (VAD), Speaker Gender Segmentation, Audiovisual Speech Resource, Speaker Traits, Speech Overlap, Benchmark, X-vector, Gender Representation in the Media, Dataset
Journal-ref: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 8963-8974, Torino, Italia. ELRA and ICCL
Subjects: Audio and Speech Processing (eess.AS); Digital Libraries (cs.DL); Multimedia (cs.MM); Sound (cs.SD)

InaGVAD is an audio corpus collected from 10 French radio and 18 TV channels categorized into four groups: generalist radio, music radio, news TV, and generalist TV. It contains 277 one-minute annotated recordings aimed at representing the acoustic diversity of French audiovisual programs and was primarily designed for building systems able to monitor men's and women's speaking time in media. inaGVAD is provided with Voice Activity Detection (VAD) and Speaker Gender Segmentation (SGS) annotations extended with overlap, speaker traits (gender, age, voice quality), and 10 non-speech event categories. Annotation distributions are detailed for each channel category. The dataset is partitioned into a 1h development subset and a 3h37 test subset, allowing fair and reproducible system evaluation. A benchmark of six freely available VAD tools is presented, showing diverse abilities across channel and non-speech event categories. Two existing SGS systems are evaluated on the corpus and compared against a baseline X-vector transfer learning strategy trained on the development subset. Results demonstrate that our proposal, trained on a single, but diverse, hour of data, achieves competitive SGS results. The entire inaGVAD package, including corpus, annotations, evaluation scripts, and baseline training code, is made freely accessible, fostering future advancement in the domain.
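As an illustration of the monitoring task the corpus was designed for, the short Python sketch below aggregates speaking time per gender from segment-level annotations. It is not part of the inaGVAD distribution; the (start, end, label) segment format and the example values are assumptions made purely for demonstration.

from collections import defaultdict

# Hypothetical annotated segments in seconds: (start, end, label).
segments = [
    (0.0, 12.4, "male"),
    (12.4, 13.1, "noise"),
    (13.1, 30.0, "female"),
    (30.0, 41.5, "male"),
]

def speaking_time_by_gender(segs):
    """Sum speech duration per gender label, ignoring non-speech events."""
    totals = defaultdict(float)
    for start, end, label in segs:
        if label in ("male", "female"):
            totals[label] += end - start
    return dict(totals)

totals = speaking_time_by_gender(segments)
speech_total = sum(totals.values())
for gender, seconds in totals.items():
    print(f"{gender}: {seconds:.1f}s ({100 * seconds / speech_total:.1f}% of speech)")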

[3]  arXiv:2406.04629 (cross-list from cs.CV) [pdf, other]
Title: STAR: Skeleton-aware Text-based 4D Avatar Generation with In-Network Motion Retargeting
Comments: Tech report
Subjects: Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Multimedia (cs.MM)

The creation of 4D avatars (i.e., animated 3D avatars) from text descriptions typically uses text-to-image (T2I) diffusion models to synthesize 3D avatars in the canonical space and subsequently applies animation with target motions. However, such an optimization-by-animation paradigm has several drawbacks. (1) For pose-agnostic optimization, the rendered images in canonical pose for naive Score Distillation Sampling (SDS) exhibit a domain gap and cannot preserve view consistency using only T2I priors; and (2) for post hoc animation, simply applying the source motions to target 3D avatars yields translation artifacts and misalignment. To address these issues, we propose Skeleton-aware Text-based 4D Avatar generation with in-network motion Retargeting (STAR). STAR considers the geometry and skeleton differences between the template mesh and the target avatar, and corrects the mismatched source motion by resorting to pretrained motion retargeting techniques. With the informatively retargeted and occlusion-aware skeleton, we embrace skeleton-conditioned T2I and text-to-video (T2V) priors and propose a hybrid SDS module to coherently provide multi-view and frame-consistent supervision signals. Hence, STAR can progressively optimize the geometry, texture, and motion in an end-to-end manner. Quantitative and qualitative experiments demonstrate that STAR can synthesize high-quality 4D avatars with vivid animations that align well with the text description. Additional ablation studies show the contribution of each component in STAR. The source code and demos are available at: https://star-avatar.github.io.
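For readers unfamiliar with Score Distillation Sampling, the PyTorch sketch below shows the generic SDS gradient that hybrid-SDS schemes of this kind build on. It is not STAR's code: the noise predictor is a random placeholder standing in for a frozen, skeleton-conditioned T2I or T2V prior, and the noise schedule and weighting are common but assumed choices.

import torch

def sds_grad(rendered, noise_pred_fn, alphas_cumprod, t):
    """Generic SDS gradient w(t) * (eps_hat - eps) for a batch of rendered views."""
    eps = torch.randn_like(rendered)
    a_t = alphas_cumprod[t]
    noisy = a_t.sqrt() * rendered + (1 - a_t).sqrt() * eps   # forward diffusion step
    eps_hat = noise_pred_fn(noisy, t)                        # frozen prior's noise estimate
    w = 1 - a_t                                              # common weighting choice
    return w * (eps_hat - eps)

def dummy_noise_pred(x, t):
    # Placeholder: in practice a frozen, skeleton-conditioned diffusion prior.
    return torch.randn_like(x)

alphas_cumprod = torch.linspace(0.999, 0.01, 1000)           # assumed schedule
rendered = torch.rand(2, 3, 64, 64, requires_grad=True)      # rendered avatar views
t = torch.randint(0, 1000, (1,)).item()

grad = sds_grad(rendered, dummy_noise_pred, alphas_cumprod, t)
# Standard SDS trick: a surrogate loss whose gradient w.r.t. `rendered` equals `grad`.
loss = (grad.detach() * rendered).sum()
loss.backward()
print(rendered.grad.shape)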

[4]  arXiv:2406.04673 (cross-list from cs.CV) [pdf, other]
Title: MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models
Comments: Accepted at CVPR 2024 as Highlight paper. Webpage: this https URL
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Multimedia (cs.MM); Audio and Speech Processing (eess.AS)

Music is a universal language that can communicate emotions and feelings. It forms an essential part of the whole spectrum of creative media, ranging from movies to social media posts. Machine learning models that can synthesize music are predominantly conditioned on textual descriptions of the desired music. Inspired by how musicians compose music not just from a movie script but also through visualizations, we propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music. MeLFusion is a text-to-music diffusion model with a novel "visual synapse", which effectively infuses semantics from the visual modality into the generated music. To facilitate research in this area, we introduce a new dataset, MeLBench, and propose a new evaluation metric, IMSM. Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music, measured both objectively and subjectively, with a relative gain of up to 67.98% on the FAD score. We hope that our work will draw attention to this pragmatic, yet relatively under-explored, research area.
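One common way to infuse visual semantics into a diffusion backbone is cross-attention over image features; the PyTorch sketch below shows that pattern in the spirit of the "visual synapse" described above, without claiming to reproduce MeLFusion's architecture. The module name, dimensions, and residual wiring are assumptions.

import torch
import torch.nn as nn

class VisualCrossAttention(nn.Module):
    """Illustrative conditioning layer: music tokens attend to projected image tokens."""
    def __init__(self, music_dim=256, visual_dim=512, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(visual_dim, music_dim)   # map image features into the music latent space
        self.attn = nn.MultiheadAttention(music_dim, n_heads, batch_first=True)

    def forward(self, music_tokens, image_tokens):
        # music_tokens: (B, T, music_dim) latents inside the diffusion backbone
        # image_tokens: (B, N, visual_dim) features from a frozen image encoder
        ctx = self.proj(image_tokens)
        fused, _ = self.attn(query=music_tokens, key=ctx, value=ctx)
        return music_tokens + fused                    # residual visual conditioning

layer = VisualCrossAttention()
music = torch.randn(2, 128, 256)
image = torch.randn(2, 50, 512)
print(layer(music, image).shape)                       # torch.Size([2, 128, 256])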

[5]  arXiv:2406.04765 (cross-list from cs.CV) [pdf, other]
Title: SMC++: Masked Learning of Unsupervised Video Semantic Compression
Subjects: Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)

Most video compression methods focus on human visual perception and neglect semantic preservation. This leads to severe semantic loss during compression, hampering downstream video analysis tasks. In this paper, we propose a Masked Video Modeling (MVM)-powered compression framework that specifically preserves video semantics by jointly mining and compressing them in a self-supervised manner. While MVM is proficient at learning generalizable semantics through the masked patch prediction task, it may also encode non-semantic information such as trivial textural details, wasting bit cost and introducing semantic noise. To suppress this, we explicitly regularize the non-semantic entropy of the compressed video in the MVM token space. The proposed framework is instantiated as a simple Semantic-Mining-then-Compression (SMC) model. Furthermore, we extend SMC into an advanced SMC++ model in several respects. First, we equip it with a masked motion prediction objective, leading to better temporal semantic learning. Second, we introduce a Transformer-based compression module to improve semantic compression efficacy. Considering that directly mining the complex redundancy among heterogeneous features in different coding stages is non-trivial, we introduce a compact blueprint semantic representation to align these features into a similar form, fully unleashing the power of the Transformer-based compression module. Extensive results demonstrate that the proposed SMC and SMC++ models show remarkable superiority over previous traditional, learnable, and perceptual quality-oriented video codecs on three video analysis tasks and seven datasets. Code and models are available at: https://github.com/tianyuan168326/VideoSemanticCompression-Pytorch.
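The PyTorch sketch below illustrates the two ingredients described above, a masked-token prediction loss and an entropy penalty serving as a proxy for the bit cost of the compressed tokens. It is not the released implementation; the shapes, the 75% mask ratio, and the softmax-entropy proxy are assumptions made only for illustration.

import torch
import torch.nn.functional as F

def masked_prediction_loss(pred_tokens, target_tokens, mask):
    """MVM-style objective: reconstruct only the masked token positions."""
    per_token = ((pred_tokens - target_tokens) ** 2).mean(dim=-1)   # (B, N)
    return (per_token * mask).sum() / mask.sum().clamp(min=1)

def entropy_penalty(code_logits):
    """Entropy of the per-token code distribution, used as a bit-cost proxy."""
    p = F.softmax(code_logits, dim=-1)
    return -(p * p.clamp(min=1e-9).log()).sum(dim=-1).mean()

B, N, D, K = 2, 196, 128, 512                  # batch, tokens, feature dim, codebook size
pred = torch.randn(B, N, D)
target = torch.randn(B, N, D)
mask = (torch.rand(B, N) < 0.75).float()       # 75% of tokens masked (assumed ratio)
code_logits = torch.randn(B, N, K)             # logits over a hypothetical codebook

loss = masked_prediction_loss(pred, target, mask) + 0.01 * entropy_penalty(code_logits)
print(float(loss))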

[6]  arXiv:2406.04930 (cross-list from cs.CV) [pdf, other]
Title: MA-AVT: Modality Alignment for Parameter-Efficient Audio-Visual Transformers
Comments: Accepted in Efficient Deep Learning for Computer Vision CVPR Workshop 2024
Subjects: Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM); Sound (cs.SD); Audio and Speech Processing (eess.AS)

Recent advances in pre-trained vision transformers have shown promise for parameter-efficient audio-visual learning without audio pre-training. However, few studies have investigated effective methods for aligning multimodal features in parameter-efficient audio-visual transformers. In this paper, we propose MA-AVT, a new parameter-efficient audio-visual transformer that employs deep modality alignment of corresponding multimodal semantic features. Specifically, we introduce joint unimodal and multimodal token learning for aligning the two modalities with a frozen modality-shared transformer. This allows the model to learn separate representations for each modality while also attending to the cross-modal relationships between them. In addition, unlike prior work that only aligns coarse features from the output of unimodal encoders, we introduce blockwise contrastive learning to align coarse-to-fine-grained hierarchical features throughout the encoding phase. Furthermore, to suppress background features in each modality and isolate the matched foreground audio-visual features, we introduce a robust discriminative foreground mining scheme. Through extensive experiments on the benchmark AVE, VGGSound, and CREMA-D datasets, we achieve considerable performance improvements over state-of-the-art methods.
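As a rough illustration of blockwise contrastive alignment, the PyTorch sketch below applies a symmetric InfoNCE loss to pooled audio and visual features at every transformer block. It is not MA-AVT's code; the feature dimensions, number of blocks, and temperature are assumed values.

import torch
import torch.nn.functional as F

def info_nce(audio_feats, visual_feats, temperature=0.07):
    """Symmetric InfoNCE between pooled audio and visual features of one block."""
    a = F.normalize(audio_feats, dim=-1)               # (B, D)
    v = F.normalize(visual_feats, dim=-1)              # (B, D)
    logits = a @ v.t() / temperature                   # (B, B) pairwise similarities
    labels = torch.arange(a.size(0))                   # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def blockwise_contrastive_loss(audio_blocks, visual_blocks):
    """Apply the contrastive objective at every block, from coarse to fine features."""
    return sum(info_nce(a, v) for a, v in zip(audio_blocks, visual_blocks)) / len(audio_blocks)

B, D, n_blocks = 8, 256, 4
audio_blocks = [torch.randn(B, D) for _ in range(n_blocks)]
visual_blocks = [torch.randn(B, D) for _ in range(n_blocks)]
print(float(blockwise_contrastive_loss(audio_blocks, visual_blocks)))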

Replacements for Mon, 10 Jun 24

[7]  arXiv:2405.11093 (replaced) [pdf, other]
Title: AudioSetMix: Enhancing Audio-Language Datasets with LLM-Assisted Augmentations
Authors: David Xu
Comments: typos corrected
Subjects: Audio and Speech Processing (eess.AS); Computation and Language (cs.CL); Multimedia (cs.MM); Sound (cs.SD)
