Awesome-Motion-Diffusion-Models

We collect papers on human motion diffusion models published at prominent conferences and in journals.

This paper list will be continuously updated.

Table of Contents

  • Datasets
  • Survey
  • Papers
  • Other Resources

Datasets

Text to Motion

  • Generating Diverse and Natural 3D Human Motions from Text (HumanML3D) (CVPR 2022) [project] [paper] [code]

  • BABEL: Bodies, Action and Behavior with English Labels (CVPR 2021) [project] [paper] [code]

  • The KIT Motion-Language Dataset (Big Data 2016) [project] [paper]

Audio to Motion

  • AI Choreographer: Music Conditioned 3D Dance Generation with AIST++ (ICCV 2021) [project] [paper] [code]

Survey

  • Human Motion Generation: A Survey [paper]

Papers

2024

ECCV

  • Motion Mamba: Efficient and Long Sequence Motion Generation [project] [paper] [code]
  • BAMM: Bidirectional Autoregressive Motion Model [project] [paper] [code]
  • ParCo: Part-Coordinating Text-to-Motion Synthesis [paper] [code]

TPAMI

  • MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model [project] [paper] [code]

CVPR

  • MoMask: Generative Masked Modeling of 3D Human Motions [project] [paper] [code]
  • MMM: Generative Masked Motion Model [project] [paper] [code]
  • FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models [project] [paper] [code]
  • AAMDM: Accelerated Auto-regressive Motion Diffusion Model [paper]
  • FlowMDM: Seamless Human Motion Composition with Blended Positional Encodings [project] [paper] [code]
  • OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers [project] [paper] [code]
  • MAS: Multi-view Ancestral Sampling for 3D motion generation using 2D diffusion [project] [paper] [code]
  • AnySkill: Learning Open-Vocabulary Physical Skill for Interactive Agents [project] [paper] [video] [code]
  • Scaling Up Dynamic Human-Scene Interaction Modeling [project] [paper] [demo] [code]
  • Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Scene Affordance [project] [paper] [video] [code]

IJCV

  • InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions [project] [paper] [code]

ICML

  • HumanTOMATO: Text-aligned Whole-body Motion Generation [project] [paper] [code]

SIGGRAPH

  • LGTM: Local-to-Global Text-Driven Human Motion Diffusion Model [paper] [code]
  • Flexible Motion In-betweening with Diffusion Models [project] [paper] [code]

arXiv papers

  • TAAT: Think and Act from Arbitrary Texts in Text to Motion [paper]
  • BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation [project] [paper] [code]
  • GUESS: GradUally Enriching SyntheSis for Text-Driven Human Motion Generation [paper] [code]
  • Off-the-shelf ChatGPT is a Good Few-shot Human Motion Predictor [paper]
  • MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model [project] [paper] [code]
  • MotionFix: Text-Driven 3D Human Motion Editing [paper]

2023

ICCV

  • PhysDiff: Physics-Guided Human Motion Diffusion Model [project] [paper]
  • GMD: Guided Motion Diffusion for Controllable Human Motion Synthesis [project] [paper] [code]
  • ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model [project] [paper] [code]
  • HumanMAC: Masked Motion Completion for Human Motion Prediction [project] [paper] [code]
  • Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model [paper]
  • BeLFusion: Latent Diffusion for Behavior-Driven Human Motion Prediction [project] [paper] [code]
  • Social Diffusion: Long-term Multiple Human Motion Anticipation [paper] [code]

AAAI

  • Human Joint Kinematics Diffusion-Refinement for Stochastic Motion Prediction [paper]

CVPR

  • MLD: Executing your Commands via Motion Diffusion in Latent Space [project] [paper] [code]
  • UDE: A Unified Driving Engine for Human Motion Generation [project] [paper] [code]
  • EDGE: Editable Dance Generation From Music [project] [paper] [code]
  • MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis [project] [paper]
  • T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations [project] [paper] [code]

MMM (International Conference on Multimedia Modeling)

  • DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model [project] [paper] [code]

ICASSP

  • Diffusion Motion: Generate Text-Guided 3D Human Motion by Diffusion Model [paper]

TOG

  • Listen, denoise, action! Audio-driven motion synthesis with diffusion models [project] [paper] [code]

arXiv papers

  • DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion [paper] [code]
  • Text2Performer: Text-Driven Human Video Generation [project] [paper] [code]

2022

ECCV

  • MotionCLIP: Exposing Human Motion Generation to CLIP Space [project] [paper] [code]
  • TEMOS: Generating diverse human motions from textual descriptions [project] [paper] [code]

arXiv papers

  • FLAME: Free-form Language-based Motion Synthesis & Editing [project] [paper] [code]

Other Resources

Feel free to contact me if you find that an interesting paper is missing.