Manifold Preserving Guided Diffusion

Yutong He, Naoki Murata, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Dongjun Kim, Wei-Hsiang Liao, Yuki Mitsufuji, J. Zico Kolter*, Ruslan Salakhutdinov*, Stefano Ermon*

* External authors

ICLR 2024

Abstract

Despite recent advancements, conditional image generation still faces the challenges of cost, generalizability, and the need for task-specific training. In this paper, we propose Manifold Preserving Guided Diffusion (MPGD), a training-free conditional generation framework that leverages pretrained diffusion models and off-the-shelf neural networks, with minimal additional inference cost, for a broad range of tasks. Specifically, we leverage the manifold hypothesis to refine the guided diffusion steps and introduce a shortcut algorithm in the process. We then propose two methods for on-manifold training-free guidance using pretrained autoencoders and demonstrate that our shortcut inherently preserves the manifolds when applied to latent diffusion models. Our experiments show that MPGD is efficient and effective for solving a variety of conditional generation applications in low-compute settings, and that it consistently offers up to 3.8x speed-ups over the baselines with the same number of diffusion steps while maintaining high sample quality.
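To make the abstract's idea concrete, the following is a minimal toy sketch (not the paper's code) of one manifold-preserving guided step: the clean sample is estimated from the noisy one, the guidance gradient is applied to that clean estimate (the "shortcut"), and the result is projected back toward the data manifold with a pretrained autoencoder before re-noising DDIM-style. The `encode`/`decode` pair, the quadratic guidance loss, and all schedule values here are illustrative stand-ins, not quantities from the paper.

```python
import numpy as np

# Toy "pretrained autoencoder": projects onto the linear subspace spanned by
# the first K coordinates, standing in for the learned data manifold.
K = 2
def encode(x):                     # R^4 -> R^K
    return x[:K]

def decode(z):                     # R^K -> R^4
    out = np.zeros(4)
    out[:K] = z
    return out

def guidance_grad(x0, target):
    # Gradient of the illustrative guidance loss 0.5 * ||x0 - target||^2.
    return x0 - target

def mpgd_style_step(xt, alpha_bar_t, eps_pred, target, lr=0.1):
    """One manifold-preserving guided step (hedged sketch, illustrative only).

    1. Estimate the clean sample x0 from xt via Tweedie's formula.
    2. Apply the guidance gradient to x0 directly (the shortcut).
    3. Project the updated x0 back onto the manifold via the autoencoder.
    4. Deterministically re-noise to the next (less noisy) level, DDIM-style.
    """
    sqrt_ab = np.sqrt(alpha_bar_t)
    x0_hat = (xt - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / sqrt_ab
    x0_hat = x0_hat - lr * guidance_grad(x0_hat, target)  # guidance on x0, not xt
    x0_hat = decode(encode(x0_hat))                       # manifold projection
    alpha_bar_prev = min(1.0, alpha_bar_t + 0.1)          # toy noise schedule
    return np.sqrt(alpha_bar_prev) * x0_hat + np.sqrt(1.0 - alpha_bar_prev) * eps_pred
```

In a real pipeline, `eps_pred` would come from the pretrained diffusion model and `guidance_grad` from differentiating an off-the-shelf network's loss; applying the gradient on the clean estimate rather than on `xt` is what avoids back-propagating through the diffusion network at every step.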

Related Publications

On the Language Encoder of Contrastive Cross-modal Models

ACL, 2024
Mengjie Zhao*, Junya Ono*, Zhi Zhong*, Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Takashi Shibuya, Hiromi Wakaki*, Yuki Mitsufuji, Wei-Hsiang Liao

Contrastive cross-modal models such as CLIP and CLAP aid various vision-language (VL) and audio-language (AL) tasks. However, there has been limited investigation of and improvement in their language encoder, which is the central component of encoding natural language descri…

DiffuCOMET: Contextual Commonsense Knowledge Diffusion

ACL, 2024
Silin Gao*, Mete Ismayilzada*, Mengjie Zhao*, Hiromi Wakaki*, Yuki Mitsufuji, Antoine Bosselut*

Inferring contextually-relevant and diverse commonsense to understand narratives remains challenging for knowledge models. In this work, we develop a series of knowledge models, DiffuCOMET, that leverage diffusion to learn to reconstruct the implicit semantic connections bet…

SpecMaskGIT: Masked Generative Modeling of Audio Spectrograms for Efficient Audio Synthesis and Beyond

ISMIR, 2024
Marco Comunità*, Zhi Zhong*, Akira Takahashi, Shiqi Yang*, Mengjie Zhao*, Koichi Saito, Yukara Ikemiya, Takashi Shibuya, Shusuke Takahashi*, Yuki Mitsufuji

Recent advances in generative models that iteratively synthesize audio clips sparked great success to text-to-audio synthesis (TTA), but with the cost of slow synthesis speed and heavy computation. Although there have been attempts to accelerate the iterative procedure, high…
