
Reverse Engineering of Music Mixing Graphs With Differentiable Processors and Iterative Pruning

Sungho Lee*

Marco A. Martínez-Ramírez

Wei-Hsiang Liao

Stefan Uhlich*

Giorgio Fabbro*

Kyogu Lee*

Yuki Mitsufuji

* External authors

JAES

2025

Abstract

Reverse engineering of music mixes aims to uncover how dry source signals are processed and combined to produce a final mix. In this paper, prior works are extended to reflect the compositional nature of mixing and to search for a graph of audio processors. First, a mixing console is constructed, applying all available processors to every track and subgroup. With differentiable processor implementations, their parameters are optimized with gradient descent. Next, the process of removing negligible processors and fine-tuning the remaining ones is repeated. This way, the quality of the full mixing console can be preserved while removing approximately two-thirds of the processors. The proposed method can be used not only to analyze individual music mixes but also to collect large-scale graph data for downstream tasks such as automatic mixing. Especially for the latter purpose, efficient implementation of the search is crucial. To this end, an efficient batch-processing method that computes multiple processors in parallel is presented. Also, the “dry/wet” parameter of each processor is exploited to accelerate the search. Extensive quantitative and qualitative analyses are conducted to evaluate the proposed method’s performance, behavior, and computational cost.
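The prune-and-fine-tune loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Processor` representation, the `fine_tune` callback, the pruning `threshold`, and the use of the "dry/wet" value as the pruning criterion are all simplifying assumptions for this sketch.

```python
# Hypothetical sketch of iterative pruning: processors whose "dry/wet"
# mix is close to fully dry are treated as negligible and removed, then
# the survivors are fine-tuned. Names and the threshold are illustrative.

def prune_and_fine_tune(processors, fine_tune, threshold=0.05, max_rounds=3):
    """Repeatedly drop near-dry processors, then re-optimize the rest.

    processors: list of dicts with a 'wet' value in [0, 1], where
                wet == 0 means the processor passes audio through dry.
    fine_tune:  callback that re-optimizes the surviving processors
                (stands in for gradient descent on the differentiable graph).
    """
    for _ in range(max_rounds):
        kept = [p for p in processors if p["wet"] > threshold]
        if len(kept) == len(processors):
            break  # nothing left to prune; stop early
        processors = kept
        fine_tune(processors)
    return processors

# Toy usage: three of five processors are essentially dry and get pruned.
console = [{"wet": w} for w in (0.9, 0.02, 0.4, 0.01, 0.03)]
survivors = prune_and_fine_tune(console, fine_tune=lambda ps: None)
print(len(survivors))  # → 2
```

In the paper's setting, the fine-tuning step would re-run gradient descent on the remaining differentiable processors so that the pruned graph recovers the quality of the full mixing console.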

