Training-free Diffusion Acceleration
with Bottleneck Sampling

1Peking University, 2ByteDance
*Equal Contribution, Project Leader, Corresponding Authors

Bottleneck Sampling is a training-free framework that leverages low-resolution priors to reduce computational overhead while preserving output fidelity. It achieves a 2.5-3× acceleration while maintaining comparable output quality, without any retraining.

Abstract

Diffusion models have demonstrated remarkable capabilities in visual content generation but remain challenging to deploy due to their high computational cost during inference. This computational burden primarily arises from the quadratic complexity of self-attention with respect to image or video resolution. While existing acceleration methods often compromise output quality or necessitate costly retraining, we observe that most diffusion models are pre-trained at lower resolutions, presenting an opportunity to exploit these low-resolution priors for more efficient inference without degrading performance. In this work, we introduce Bottleneck Sampling, a training-free framework that leverages low-resolution priors to reduce computational overhead while preserving output fidelity. Bottleneck Sampling follows a high-low-high denoising workflow: it performs high-resolution denoising in the initial and final stages while operating at lower resolutions in intermediate steps. To mitigate aliasing and blurring artifacts, we further refine the resolution transition points and adaptively shift the denoising timesteps at each stage. We evaluate Bottleneck Sampling on both image and video generation tasks, where extensive experiments demonstrate that it accelerates inference by up to 3× for image generation and 2.5× for video generation, all while maintaining output quality comparable to the standard full-resolution sampling process across multiple evaluation metrics.

Method

Overall pipeline of our Bottleneck Sampling. The process consists of three stages: (i) High-Resolution Denoising to preserve semantic information, (ii) Low-Resolution Denoising to improve efficiency, and (iii) High-Resolution Denoising to restore fine details. Images generated by FLUX.1-dev using the prompt: "2D cartoon,Diagonal composition, Medium close-up, a whole body of a classical doll being held by a hand, the doll of a young boy with white hair dressed in purple, He has pale skin and white eyes."
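The sketch below illustrates this high-low-high workflow in plain PyTorch. It is a minimal illustration under stated assumptions, not the released implementation: the velocity_model callable, the stage configuration (resolutions, step counts, starting timesteps, shift factors), the re-noising rule at each resolution transition, and the timestep-shifting function are all placeholders chosen to mirror common flow-matching samplers such as the one used by FLUX.

import torch
import torch.nn.functional as F


def shifted_timesteps(num_steps, t_start, shift):
    # Uniform timesteps from t_start down to 0, warped by the kind of
    # resolution-dependent shift used by flow-matching schedulers
    # (t' = shift * t / (1 + (shift - 1) * t)).
    t = torch.linspace(t_start, 0.0, num_steps + 1)
    return shift * t / (1.0 + (shift - 1.0) * t)


@torch.no_grad()
def bottleneck_sampling(velocity_model, cond, stages, latent_channels=16, device="cuda"):
    # stages: list of dicts with keys "size", "steps", "t_start", "shift",
    # ordered high -> low -> high resolution. All values are illustrative.
    h, w = stages[0]["size"]
    x = torch.randn(1, latent_channels, h, w, device=device)

    for stage in stages:
        h, w = stage["size"]
        t = shifted_timesteps(stage["steps"], stage["t_start"], stage["shift"]).to(device)

        if x.shape[-2:] != (h, w):
            # Resolution transition: resample the current latent, then re-noise
            # it to the stage's starting noise level so the model sees an input
            # consistent with that timestep (this is what counters the aliasing
            # and blurring a naive resize alone would introduce).
            x = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
            x = (1.0 - t[0]) * x + t[0] * torch.randn_like(x)

        for k in range(stage["steps"]):
            # Plain Euler step of the probability-flow ODE, with t decreasing to 0.
            v = velocity_model(x, t[k], cond)
            x = x + (t[k + 1] - t[k]) * v

    return x

Under these assumptions, a typical schedule would spend a few steps at the target resolution, run most steps at roughly half resolution, and finish with a short pass back at the target resolution, e.g. stages = [{"size": (128, 128), "steps": 6, "t_start": 1.0, "shift": 3.0}, {"size": (64, 64), "steps": 20, "t_start": 0.7, "shift": 2.0}, {"size": (128, 128), "steps": 6, "t_start": 0.35, "shift": 3.0}]; these concrete values are illustrative, not the paper's.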

Results on Image Generation

Qualitative comparison of our Bottleneck Sampling with FLUX.1-dev. Our method achieves up to 3× speedup while maintaining or improving visual fidelity. Incorrect text rendering and anatomical inconsistencies are highlighted with different colors.

Results on Video Generation

Comparison: HunyuanVideo vs. HunyuanVideo (40% steps) vs. Bottleneck Sampling.
Prompt: "A cat is walking on the grass. At dusk, an elderly couple holding hands stroll along the beach, with the waves gently lapping at the shore and the afterglow of the setting sun shining on them. The entire scene creates a tranquil atmosphere."
Comparison: HunyuanVideo vs. HunyuanVideo (40% steps) vs. Bottleneck Sampling.
Prompt: "A wizard is waving a magic wand, chanting spells, controlling an apple to fly into the air. The color of the apple changes from green to red. There is a mirror on the pink dressing table, and below the mirror, an ant is slowly crawling on the desktop."

BibTeX

@article{tian2025bottlenecksampling,
      title={Training-free Diffusion Acceleration with Bottleneck Sampling},
      author={Tian, Ye and Xia, Xin and Ren, Yuxi and Lin, Shanchuan and Wang, Xing and Xiao, Xuefeng and Tong, Yunhai and Yang, Ling and Cui, Bin},
      journal={arXiv preprint arXiv:2503.18940},
      year={2025}
}