IntMeanFlow: Few-step Speech Generation with Integral Velocity Distillation

[Paper]


Wei Wang, Rong Cao, Yi Guo, Zhengyang Chen, Kuan Chen, Yuanyuan Huo

ByteDance

Abstract. Flow-based generative models have greatly improved text-to-speech (TTS) synthesis quality, but inference speed remains limited by the iterative sampling process and its large number of function evaluations (NFE). The recent MeanFlow model accelerates generation by modeling the average velocity instead of the instantaneous velocity. However, applying it directly to TTS raises challenges, including GPU memory overhead from Jacobian-vector products (JVP) and training instability caused by the self-bootstrap process. To address these issues, we introduce IntMeanFlow, a framework for few-step speech generation with integral velocity distillation. By approximating the average velocity with the teacher's instantaneous velocity integrated over a temporal interval, IntMeanFlow eliminates the need for JVPs and self-bootstrapping, improving stability and reducing GPU memory usage. We also propose the Optimal Step Sampling Search (O3S) algorithm, which identifies model-specific optimal sampling steps, improving synthesis quality without additional inference overhead. Experiments show that IntMeanFlow achieves 1-NFE inference for the token-to-spectrogram task and 3-NFE for the text-to-spectrogram task while maintaining high-quality synthesis.
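As a concrete illustration of the integral velocity distillation objective described above, here is a minimal PyTorch-style sketch. It assumes a linear flow-matching path (noise at t=0, data at t=1), a simple Euler quadrature for the teacher integral, and illustrative names (`distill_step`, `student`, `teacher`); none of these specifics are taken from the paper.

```python
# Minimal sketch of integral velocity distillation, assuming a linear
# flow-matching path with noise at t=0 and data at t=1. The student learns
# the average velocity over an interval [s, t]; its target is a numerical
# approximation of the teacher's instantaneous velocity integrated over that
# interval, so no Jacobian-vector product (JVP) or self-bootstrap target is
# needed. Names and the quadrature scheme are illustrative assumptions.
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x1, n_quad=4):
    """One training step; x1 is a batch of clean targets (e.g., mel spectrograms)."""
    b = x1.shape[0]
    x0 = torch.randn_like(x1)                # noise endpoint of the path
    shape = (b,) + (1,) * (x1.dim() - 1)     # broadcastable time shape
    t = torch.rand(shape)
    s = t * torch.rand(shape)                # 0 <= s < t <= 1
    xs = (1 - s) * x0 + s * x1               # point on the path at time s
    # Approximate (1 / (t - s)) * \int_s^t v_teacher(x_tau, tau) d tau by
    # rolling the teacher ODE forward from s to t with Euler steps.
    with torch.no_grad():
        x, tau = xs, s.clone()
        dt = (t - s) / n_quad
        target = torch.zeros_like(x1)
        for _ in range(n_quad):
            v = teacher(x, tau)              # teacher's instantaneous velocity
            target = target + v / n_quad     # running average over [s, t]
            x = x + dt * v                   # Euler step toward t
            tau = tau + dt
    # The student predicts the average velocity over [s, t] in one evaluation.
    pred = student(xs, s, t)
    return F.mse_loss(pred, target)
```

With a distilled student, a single evaluation over the whole interval [0, 1] replaces the teacher's multi-step ODE solve, which is the mechanism behind the 1-NFE inference reported in the abstract.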


Method Overview

Figure 1. Conceptual Comparison of Flow Matching and IntMeanFlow
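To make the comparison in Figure 1 concrete, the sketch below contrasts flow-matching sampling (many small Euler steps along the instantaneous velocity) with few-step sampling along the average velocity, together with a toy brute-force version of O3S that scores candidate timestep schedules and keeps the best one. The model interfaces (`v_model`, `u_model`), the schedule grid, and `quality_score` are hypothetical stand-ins, not the paper's exact algorithm.

```python
# Illustrative contrast behind Figure 1, plus a toy version of O3S.
# `v_model(x, t)` is an instantaneous-velocity (flow matching) network and
# `u_model(x, s, t)` an average-velocity (IntMeanFlow) network; both
# interfaces, the brute-force grid, and `quality_score` are assumptions.
import itertools
import torch

def _times(x, value):
    """Broadcastable per-batch time tensor filled with `value`."""
    return torch.full((x.shape[0],) + (1,) * (x.dim() - 1), value)

def sample_flow_matching(v_model, x0, nfe=32):
    """Many small Euler steps along the instantaneous velocity (NFE = nfe)."""
    x, dt = x0, 1.0 / nfe
    for i in range(nfe):
        x = x + dt * v_model(x, _times(x, i * dt))
    return x

def sample_meanflow(u_model, x0, schedule):
    """A few large jumps along the average velocity; NFE = len(schedule) - 1.
    `schedule` is an increasing list of times, e.g. [0.0, 0.7, 1.0]."""
    x = x0
    for s, t in zip(schedule[:-1], schedule[1:]):
        x = x + (t - s) * u_model(x, _times(x, s), _times(x, t))
    return x

def o3s_search(u_model, x0, quality_score, nfe=3, grid=5):
    """Toy O3S: brute-force the interior timesteps of an nfe-step schedule
    and keep the one whose samples score best under `quality_score`, a
    stand-in for whatever quality metric is actually optimized."""
    points = [k / (grid + 1) for k in range(1, grid + 1)]
    best, best_score = None, float("-inf")
    for interior in itertools.combinations(points, nfe - 1):
        schedule = [0.0, *interior, 1.0]
        score = quality_score(sample_meanflow(u_model, x0, schedule))
        if score > best_score:
            best, best_score = schedule, score
    return best
```

Because the schedule is searched once per model offline and then reused, this kind of search adds no inference-time cost, consistent with the abstract's claim about O3S.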

IntMeanFlow x F5-TTS

[Audio sample table: rows EN and ZH; columns Prompt, F5 Base (NFE=32), F5 Base x IMF (NFE=3), F5 Base x IMF (NFE=2), F5 Small x IMF (NFE=3). One ZH entry is N/A; the audio players are not reproducible in text.]

IntMeanFlow x CosyVoice2

[Audio sample table: columns Prompt, CosyVoice2 (NFE=32), CosyVoice2 x IMF (NFE=1), CosyVoice2 x MF (NFE=1). The audio players are not reproducible in text.]