Authors
- Yuanhong Chen
- Kazuki Shimada
- Christian Simon
- Yukara Ikemiya
- Takashi Shibuya
- Yuki Mitsufuji
Venue
- ACMMM-25
Date
- 2025
CCStereo: Audio-Visual Contextual and Contrastive Learning for Binaural Audio Generation
Abstract
Binaural audio generation (BAG) aims to convert monaural audio to stereo audio using visual prompts, requiring a deep understanding of spatial and semantic information. The success of BAG systems depends on the effectiveness of cross-modal reasoning and spatial understanding. Current methods have explored the use of visual information as guidance for binaural audio generation. However, they rely solely on cross-attention mechanisms to guide the generation process and under-utilise the temporal and spatial information in video data during training and inference. These limitations cause the loss of fine-grained spatial details and risk overfitting to specific environments, ultimately constraining model performance. In this paper, we address these issues with a new audio-visual binaural generation model. It incorporates an audio-visual conditional normalisation layer that dynamically aligns the mean and variance of the target difference-audio features using visual context, together with a new contrastive learning method that enhances spatial sensitivity by mining negative samples from shuffled visual features. We also introduce a cost-efficient test-time augmentation scheme for video data that further improves performance. Our approach achieves state-of-the-art generation accuracy on the FAIR-Play, MUSIC-Stereo, and YT-MUSIC benchmarks. Code will be released.
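The abstract describes two mechanisms without implementation details, so the PyTorch sketch below is only an illustration of how they might look, not the authors' released code. All module names, tensor shapes, and the loss form are assumptions; in particular, the abstract leaves open whether "shuffled" visual features are shuffled across the batch or across spatial positions, and the sketch uses batch-level shuffling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVConditionalNorm(nn.Module):
    """FiLM/AdaIN-style conditional normalisation (an assumption about the
    paper's layer): the visual context predicts the affine parameters that
    re-normalise the difference-audio feature map."""
    def __init__(self, audio_channels: int, visual_dim: int):
        super().__init__()
        # Normalise without learned affine parameters; scale and shift
        # are predicted from the visual context instead.
        self.norm = nn.InstanceNorm2d(audio_channels, affine=False)
        self.to_scale = nn.Linear(visual_dim, audio_channels)
        self.to_shift = nn.Linear(visual_dim, audio_channels)

    def forward(self, audio_feat, visual_ctx):
        # audio_feat: (B, C, F, T) features of the left-right difference signal
        # visual_ctx: (B, D) pooled visual context vector
        gamma = self.to_scale(visual_ctx)[:, :, None, None]  # (B, C, 1, 1)
        beta = self.to_shift(visual_ctx)[:, :, None, None]   # (B, C, 1, 1)
        return self.norm(audio_feat) * (1.0 + gamma) + beta

def shuffled_negative_loss(audio_emb, visual_emb, margin=0.2):
    """Triplet-style contrastive loss (a hypothetical form): matched
    audio-visual pairs are positives; pairings with visual features
    shuffled across the batch serve as negatives."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    pos_sim = (a * v).sum(dim=-1)               # similarity of matched pairs
    shuffled = torch.roll(v, shifts=1, dims=0)  # shift-by-one avoids self-pairing
    neg_sim = (a * shuffled).sum(dim=-1)        # mismatched (negative) pairs
    return F.relu(margin - pos_sim + neg_sim).mean()
```

As a usage sketch, the normalisation layer would sit inside the decoder of a U-Net-style generator (audio_feat being an intermediate spectrogram feature map), while the contrastive loss would be added to the reconstruction objective during training.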