
Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model

1University of Chinese Academy of Sciences,  2Alibaba Group
3Institute of Automation, Chinese Academy of Sciences
4Fudan University,  5Nanyang Technological University
(* Work was done during internship at Alibaba Group. † Corresponding author.)
If you like our project, please give us a star ⭐ on GitHub for the latest updates.


Introduction

We introduce Timestep Embedding Aware Cache (TeaCache), a training-free caching approach that estimates and leverages the fluctuating differences among model outputs across timesteps, thereby accelerating inference. TeaCache works well for Video, Image, and Audio Diffusion Models. For more details and results, please visit our project page.
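For intuition, the core idea can be sketched as follows: at each denoising step, the change in the timestep-embedding-modulated input is used as a cheap proxy for the change in the model output; while the accumulated (rescaled) change stays below a threshold, the expensive transformer blocks are skipped and a cached residual is reused. The sketch below is illustrative only and does not match the repository's API.

import torch

class TeaCacheState:
    """Illustrative state for a TeaCache-style cache (names do not match the repo's API)."""
    def __init__(self, threshold=0.1, rescale=lambda x: x):
        self.threshold = threshold        # accumulated-change budget before a full recompute
        self.rescale = rescale            # fitted polynomial: input change -> estimated output change
        self.accum = 0.0                  # accumulated rescaled change since the last full pass
        self.prev_modulated = None        # timestep-embedding-modulated input at the previous step
        self.cached_residual = None       # (output - input) from the last full pass

def transformer_step(blocks, hidden, modulated, state):
    """Run the transformer blocks, or reuse the cached residual when the modulated
    input has changed too little since the last fully computed step."""
    can_skip = state.prev_modulated is not None and state.cached_residual is not None
    if can_skip:
        rel_change = ((modulated - state.prev_modulated).abs().mean()
                      / state.prev_modulated.abs().mean()).item()
        state.accum += state.rescale(rel_change)
        can_skip = state.accum < state.threshold
    state.prev_modulated = modulated
    if can_skip:
        return hidden + state.cached_residual   # cheap step: reuse the cached residual
    out = hidden
    for blk in blocks:                          # expensive step: full forward through all blocks
        out = blk(out)
    state.cached_residual = out - hidden
    state.accum = 0.0
    return out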

Latest News 🔥

PRs to support other models are welcome. Please star ⭐ our project and stay tuned.

  • [2025/01/07] 🔥 Support TangoFlux. TeaCache works well for Audio Diffusion Models! Rescaling coefficients for FLUX can be directly applied to TangoFlux (see the rescaling sketch after this list).
  • [2024/12/30] 🔥 Support Mochi and LTX-Video for Video Diffusion Models. Support Lumina-T2X for Image Diffusion Models.
  • [2024/12/27] 🔥 Support FLUX. TeaCache works well for Image Diffusion Models!
  • [2024/12/26] 🔥 Support ConsisID. Thanks @SHYuanBest. Rescaling coefficients for CogVideoX can be directly applied to ConsisID.
  • [2024/12/24] 🔥 Support HunyuanVideo.
  • [2024/12/19] 🔥 Support CogVideoX.
  • [2024/12/06] 🎉 Release the code of TeaCache. Support Open-Sora, Open-Sora-Plan and Latte.
  • [2024/11/28] 🎉 Release the paper of TeaCache.
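The "rescaling coefficients" mentioned above are the coefficients of a polynomial, fitted offline per model, that maps the raw change of the modulated input to an estimated change of the model output; models sharing a backbone (e.g. FLUX and TangoFlux, or CogVideoX and ConsisID) can reuse the same fit. A minimal sketch with placeholder coefficients (the actual values live in the per-model scripts of this repo):

import numpy as np

# Placeholder coefficients, highest degree first; NOT the values shipped with TeaCache.
coefficients = [4.0, -2.0, 1.0, 0.0]
rescale = np.poly1d(coefficients)           # evaluates the fitted polynomial

raw_rel_change = 0.03                       # relative L1 change of the modulated input at this step
estimated_output_change = float(rescale(raw_rel_change))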

Community Contributions 🧩

If you develop or use TeaCache in your projects, please let us know.

TeaCache for HunyuanVideo

Please refer to TeaCache4HunyuanVideo.

TeaCache for ConsisID

Please refer to TeaCache4ConsisID.

TeaCache for FLUX

Please refer to TeaCache4FLUX.

TeaCache for Mochi

Please refer to TeaCache4Mochi.

TeaCache for LTX-Video

Please refer to TeaCache4LTX-Video.

TeaCache for Lumina-T2X

Please refer to TeaCache4Lumina-T2X.

TeaCache for TangoFlux

Please refer to TeaCache4TangoFlux.

Installation

Prerequisites:

  • Python >= 3.10
  • PyTorch >= 1.13 (we recommend version 2.0 or later)
  • CUDA >= 11.6

We strongly recommend using Anaconda to create a new environment (Python >= 3.10) to run our examples:

conda create -n teacache python=3.10 -y
conda activate teacache

Install TeaCache:

git clone https://github.com/LiewFeng/TeaCache
cd TeaCache
pip install -e .

Evaluation of TeaCache

We first generate videos according to VBench's prompts, and then calculate the VBench, PSNR, LPIPS and SSIM metrics on the generated videos.

  1. Generate video
cd eval/teacache
python experiments/latte.py
python experiments/opensora.py
python experiments/open_sora_plan.py
python experiments/cogvideox.py
  2. Calculate VBench score
# vbench is calculated independently
# get scores for all metrics
python vbench/run_vbench.py --video_path aaa --save_path bbb
# calculate final score
python vbench/cal_vbench.py --score_dir bbb
  3. Calculate other metrics
# these metrics are computed by comparison with the original model
# gt video is the output of the original model
# generated video is our method's result
python common_metrics/eval.py --gt_video_dir aa --generated_video_dir bb
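PSNR here measures how closely the accelerated generation matches the original model's output, frame by frame. A minimal NumPy sketch of that comparison (frame loading omitted; this is only an illustration, not the repository's common_metrics/eval.py):

import numpy as np

def video_psnr(gt_frames: np.ndarray, gen_frames: np.ndarray) -> float:
    """Mean per-frame PSNR between two uint8 videos shaped (T, H, W, C)."""
    gt = gt_frames.astype(np.float64)
    gen = gen_frames.astype(np.float64)
    mse = ((gt - gen) ** 2).mean(axis=(1, 2, 3))   # per-frame mean squared error
    mse = np.maximum(mse, 1e-10)                    # guard against identical frames
    return float(np.mean(10.0 * np.log10(255.0 ** 2 / mse)))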

Acknowledgement

This repository is built on top of VideoSys, Diffusers, Open-Sora, Open-Sora-Plan, Latte, CogVideoX, HunyuanVideo, ConsisID, FLUX, Mochi, LTX-Video, Lumina-T2X and TangoFlux. Thanks for their contributions!

License

Citation

If you find TeaCache useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

@article{liu2024timestep,
  title={Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model},
  author={Liu, Feng and Zhang, Shiwei and Wang, Xiaofeng and Wei, Yujie and Qiu, Haonan and Zhao, Yuzhong and Zhang, Yingya and Ye, Qixiang and Wan, Fang},
  journal={arXiv preprint arXiv:2411.19108},
  year={2024}
}