Authors
- Michele Mancusi
- Yurii Halychanskyi
- Kin Wai Cheuk
- Eloi Moliner
- Chieh-Hsin Lai
- Stefan Uhlich*
- Junghyun Koo*
- Marco A. Martínez-Ramírez
- Wei-Hsiang Liao
- Giorgio Fabbro*
- Yuki Mitsufuji
* External authors
Venue
- ICASSP-25
Date
- 2025
Latent Diffusion Bridges for Unsupervised Musical Audio Timbre Transfer
Abstract
Music timbre transfer is a challenging task that involves modifying the timbral characteristics of an audio signal while preserving its melodic structure. In this paper, we propose a novel method based on dual diffusion bridges, trained on the CocoChorales dataset, which consists of unpaired monophonic single-instrument audio data. Each diffusion model is trained on a specific instrument with a Gaussian prior. During inference, one model is designated as the source model and maps the input audio to its corresponding Gaussian prior, while another is designated as the target model and reconstructs the target audio from this Gaussian prior, thereby facilitating timbre transfer. We compare our approach against existing unsupervised timbre transfer models such as VAEGAN and Gaussian Flow Bridges (GFB). Experimental results demonstrate that our method achieves both a better Fréchet Audio Distance (FAD) and stronger melody preservation, reflected in lower pitch distances (DPD), compared to VAEGAN and GFB. Additionally, we find that the noise level of the Gaussian prior, σ, can be adjusted to trade off melody preservation against the amount of timbre transferred.
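To make the dual-bridge inference described in the abstract concrete, here is a minimal Python sketch that couples two independently trained diffusion models through their shared Gaussian prior. Everything here is illustrative rather than the paper's actual code: the names (`timbre_transfer`, `source_model`, `target_model`, `sigma_max`) are hypothetical, the denoiser parameterization D(x, σ) with probability-flow-ODE drift (x − D(x, σ))/σ follows the common Karras-style convention rather than the paper's exact formulation, and the latent autoencoder stage of the latent-diffusion pipeline is omitted.

```python
import torch

@torch.no_grad()
def timbre_transfer(x_src, source_model, target_model, sigma_max=1.0, n_steps=100):
    """Sketch of dual-diffusion-bridge inference (illustrative, not the paper's code).

    `source_model` and `target_model` are assumed to be denoisers D(x, sigma),
    trained independently on the source and target instruments but sharing the
    same Gaussian prior N(0, sigma_max^2 I).
    """
    # Noise schedule running from (near) zero up to the shared prior level.
    sigmas = torch.linspace(1e-3, sigma_max, n_steps)

    # Encoding: deterministically integrate the probability-flow ODE of the
    # SOURCE model from the input audio towards the Gaussian prior.
    x = x_src
    for i in range(n_steps - 1):
        sig, sig_next = sigmas[i], sigmas[i + 1]
        d = (x - source_model(x, sig)) / sig   # ODE drift dx/dsigma
        x = x + d * (sig_next - sig)           # Euler step towards the prior

    # Decoding: integrate the TARGET model's ODE back down from the shared
    # latent, producing audio with the target instrument's timbre.
    for i in reversed(range(n_steps - 1)):
        sig, sig_next = sigmas[i + 1], sigmas[i]
        d = (x - target_model(x, sig)) / sig
        x = x + d * (sig_next - sig)
    return x
```

In this reading, `sigma_max` plays the role of σ in the abstract: encoding only part-way up the noise schedule preserves more of the source melody, while pushing all the way to the prior transfers more of the target timbre.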