Paper | Code and Model | Supplementary Material | Trajectory100 Test Set

* The videos are compressed for quick loading, at the expense of reduced quality.


🤗 Please scroll down ⬇️ or right ➡️ to view over 100 example videos!!! 🤗
🤗 Thank you for waiting patiently while the videos load. 🤗







Abstract

Generating human videos with realistic and controllable motions is a challenging task. While existing methods can generate visually compelling videos, they lack separate control over four key video elements: foreground subject, background video, human trajectory and action patterns. In this paper, we propose a decomposed human motion control and video generation framework that explicitly decouples motion from appearance, subject from background, and action from trajectory, enabling flexible mix-and-match composition of these elements. Concretely, we first build a ground-aware 3D world coordinate system and perform motion editing directly in the 3D space. Trajectory control is implemented by unprojecting edited 2D trajectories into 3D with focal-length calibration and coordinate transformation, followed by speed alignment and orientation adjustment; actions are supplied by a motion bank or generated via text-to-motion methods. Then, based on modern text-to-video diffusion transformer models, we inject the subject as tokens for full attention, concatenate the background along the channel dimension, and add motion (trajectory and action) control signals by addition. Such a design opens up the possibility for us to generate realistic videos of anyone doing anything anywhere. Extensive experiments on benchmark datasets and real-world cases demonstrate that our method achieves state-of-the-art performance on both element-wise controllability and overall video quality.
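The trajectory-control steps described above (unprojecting an edited 2D trajectory onto the ground plane, followed by speed alignment) can be sketched as below. This is a minimal illustration assuming a simple pinhole camera at a fixed height above a flat ground plane; the function names, camera model, and parameters are our assumptions, not the paper's actual implementation.

```python
import numpy as np

def unproject_to_ground(traj_2d, focal, cx, cy, cam_height=1.6):
    """Unproject 2D pixel trajectory points onto the ground plane y = 0.

    Assumes a pinhole camera at (0, cam_height, 0) looking along +z in a
    world frame with y pointing up (illustrative setup, not from the paper).
    """
    pts = []
    for u, v in traj_2d:
        # Ray direction in world coordinates (image v grows downward, so flip y).
        d = np.array([(u - cx) / focal, -(v - cy) / focal, 1.0])
        # Intersect the ray with the ground plane y = 0 (requires v > cy,
        # i.e. the pixel lies below the horizon).
        t = cam_height / ((v - cy) / focal)
        pts.append(np.array([0.0, cam_height, 0.0]) + t * d)
    return np.stack(pts)

def align_speed(pts_3d, step=0.05):
    """Crude speed alignment: resample the 3D path to uniform spacing."""
    out = [np.asarray(pts_3d[0], dtype=float)]
    for p in pts_3d[1:]:
        p = np.asarray(p, dtype=float)
        while np.linalg.norm(p - out[-1]) >= step:
            d = (p - out[-1]) / np.linalg.norm(p - out[-1])
            out.append(out[-1] + step * d)
    return np.stack(out)
```

For example, with a focal length of 1000 px and a principal point at (320, 240), the pixel (320, 440) unprojects to a ground point 8 m in front of the camera; `align_speed` then yields points spaced at a constant step along that path.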


Method Overview

RealisMotion has two stages:

  1. We first build a ground-aware 3D world coordinate system for the human motion and perform trajectory and action editing separately within this 3D space.

  2. We then generate human videos conditioned on the foreground subject image, the background video, and the rendered motion-guidance videos.
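The three conditioning pathways named in the abstract (subject as extra tokens for full attention, background concatenated along the channel dimension, motion added element-wise) can be wired up as in the following NumPy sketch. All tensor names, shapes, and the projection matrix are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def condition_inputs(video_latents, bg_latents, motion_feats, subject_tokens, w_proj):
    """Sketch of the three conditioning pathways (names are assumptions).

    Shapes: video_latents, bg_latents, motion_feats are (N, C);
    subject_tokens is (M, C); w_proj maps 2C -> C.
    """
    # 1) Background: concatenate along the channel axis, project back to width C.
    x = np.concatenate([video_latents, bg_latents], axis=-1)  # (N, 2C)
    x = x @ w_proj                                            # (N, C)
    # 2) Motion (trajectory + action) guidance: injected by addition.
    x = x + motion_feats                                      # (N, C)
    # 3) Subject: appended as tokens so full attention sees it with the video.
    return np.concatenate([x, subject_tokens], axis=0)        # (N + M, C)
```

The design choice mirrors how each condition is used: the background is spatially aligned with the output (hence channel concatenation), motion guidance is a per-token offset (hence addition), while the subject has no spatial alignment and is therefore best attended to as extra tokens.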



Visual Comparison on Trajectory and Global Orientation Control

Trajectory and Orientation
Wan-2.1-I2V
Tora
RealisDance-DiT
RealisMotion (ours)
Ground-Truth


Visual Comparison on Action Control

Action
Animate-X
ControlNeXt
MimicMotion
Moore-AnimateAnyone
MusePose
RealisDance-DiT
RealisMotion (ours)
Ground-Truth


Visual Results of Subject Control

Subject
RealisMotion (ours)



Bibtex

    

@article{liang2025realismotion,
  title={RealisMotion: Decomposed Human Motion Control and Video Generation in the World Space},
  author={Liang, Jingyun and Zhou, Jingkai and Li, Shikai and Cao, Chenjie and Sun, Lei and Qian, Yichen and Chen, Weihua and Wang, Fan},
  journal={arXiv preprint arXiv:2508.08588},
  year={2025}
}