Paper Supplementary Material

For each text prompt below, we show the generated image-to-video optical flows, the generated video-to-image optical flows, the calculated occlusion maps, the generated depth maps, and the generated video.

  • A dog is chasing a ball on a sofa

  • a car is moving on the snow

  • a fox flying over the rainbow in the sky

  • lightning and storm

  • a man in front of a waterfall


Abstract

While recent years have witnessed great progress in using diffusion models for video generation, most of them are simple extensions of image generation frameworks that fail to explicitly consider one of the key differences between videos and images: motion. In this paper, we propose a novel motion-aware video generation (MoVideo) framework that takes motion into consideration from two aspects: video depth and optical flow. The former regulates motion through per-frame object distances and spatial layouts, while the latter describes motion through cross-frame correspondences that help preserve fine details and improve temporal consistency. More specifically, given a key frame that is provided or generated from a text prompt, we first design a diffusion model with spatio-temporal modules to generate the video depth and the corresponding optical flows. Then, the video is generated in the latent space by another spatio-temporal diffusion model under the guidance of the depth, the optical flow-based warped latent video and the calculated occlusion mask. Lastly, we use the optical flows again to align and refine different frames for better video decoding from the latent space to the pixel space. In experiments, MoVideo achieves state-of-the-art results in both text-to-video and image-to-video generation, showing promising prompt consistency, frame consistency and visual quality.
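The abstract describes the occlusion mask as calculated rather than generated; a common way to obtain such a mask from bidirectional optical flows is a forward-backward consistency check. Below is a minimal PyTorch sketch of that standard technique; the function names and the thresholds alpha and beta are illustrative assumptions, not the authors' code.

    import torch
    import torch.nn.functional as F

    def warp(x, flow):
        """Backward-warp x (B, C, H, W) with flow (B, 2, H, W) in pixel units."""
        _, _, h, w = x.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=x.dtype, device=x.device),
            torch.arange(w, dtype=x.dtype, device=x.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow   # sampling positions
        grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0           # normalize to [-1, 1]
        grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
        grid = torch.stack((grid_x, grid_y), dim=-1)              # (B, H, W, 2)
        return F.grid_sample(x, grid, align_corners=True)

    def occlusion_map(flow_fw, flow_bw, alpha=0.01, beta=0.5):
        """Mark pixels failing the forward-backward consistency check as occluded."""
        flow_bw_warped = warp(flow_bw, flow_fw)  # backward flow at forward targets
        diff = flow_fw + flow_bw_warped          # cancels out where both flows agree
        mag = (flow_fw ** 2).sum(1) + (flow_bw_warped ** 2).sum(1)
        occ = (diff ** 2).sum(1) > alpha * mag + beta
        return occ.float()                       # (B, H, W); 1 = occluded

    f_fw, f_bw = torch.randn(1, 2, 64, 64), torch.randn(1, 2, 64, 64)
    print(occlusion_map(f_fw, f_bw).shape)       # torch.Size([1, 64, 64])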


Method Overview

Schematic illustration of the proposed motion-aware video generation (MoVideo) framework. Given a text prompt, we first generate the key frame with a latent diffusion model. Then, we generate the video depth and optical flows conditioned on the CLIP image embedding and the frames per second. Next, we add extra conditions, including the depth, the flow-based warped latent video and the calculated occlusion mask, to generate the video in the latent space. Finally, the video is decoded with flow-based alignment and feature refinement.
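To make the data flow concrete, here is a schematic, runnable sketch of the four stages described above, with every network replaced by a stub that returns random tensors of plausible shape. All names, shapes and signatures are illustrative assumptions, not the released implementation.

    import torch

    B, T, H, W = 1, 16, 256, 256      # batch, frames, pixel resolution
    h, w, c = H // 8, W // 8, 4       # latent resolution of an 8x-downsampling VAE

    def latent_diffusion_key_frame(prompt):
        """Stage 1: generate a key frame from the text prompt (stub)."""
        return torch.rand(B, 3, H, W)

    def depth_flow_model(img_embed, fps):
        """Stage 2: jointly generate video depth and bidirectional flows (stub)."""
        depth = torch.rand(B, T, 1, H, W)
        flow_i2v = torch.rand(B, T, 2, H, W)   # key frame -> each video frame
        flow_v2i = torch.rand(B, T, 2, H, W)   # each video frame -> key frame
        return depth, flow_i2v, flow_v2i

    def occlusion_from_flows(flow_i2v, flow_v2i):
        """Forward-backward consistency check (stub; see the sketch above)."""
        return torch.rand(B, T, 1, H, W).round()

    def warp_latent(key_latent, flow_i2v):
        """Warp the key-frame latent to every frame with the generated flows (stub)."""
        return key_latent.unsqueeze(1).expand(B, T, c, h, w).clone()

    def latent_video_diffusion(img_embed, depth, warped_latent, occlusion):
        """Stage 3: generate the latent video under all extra conditions (stub)."""
        return torch.rand(B, T, c, h, w)

    def decode_with_flow_alignment(latent_video, flow_v2i):
        """Stage 4: decode to pixels with flow-based alignment and refinement (stub)."""
        return torch.rand(B, T, 3, H, W)

    key_frame = latent_diffusion_key_frame("a car is moving on the snow")
    img_embed = torch.rand(B, 768)             # CLIP image embedding of the key frame (stub)
    depth, flow_i2v, flow_v2i = depth_flow_model(img_embed, fps=10)
    occlusion = occlusion_from_flows(flow_i2v, flow_v2i)
    key_latent = torch.rand(B, c, h, w)        # VAE-encoded key frame (stub)
    warped = warp_latent(key_latent, flow_i2v)
    latent_video = latent_video_diffusion(img_embed, depth, warped, occlusion)
    video = decode_with_flow_alignment(latent_video, flow_v2i)
    print(video.shape)                         # torch.Size([1, 16, 3, 256, 256])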



Comparison on Text-to-Video Generation

Each example compares VideoDiffusion with MoVideo (ours) on the following text prompts:

  • A dog is chasing a ball on a sofa
  • a car is moving on the snow
  • beautiful fireworks over the lake
  • a cow is running on the moon
  • a man is walking on the street, high resolution
  • a fox flying over the rainbow in the sky


More Results on Text-to-Video Generation

Results generated with three random seeds (0, 1 and 2) for each of the following text prompts:

  • A panda playing on a swing set
  • beach view
  • lightning and storm


Comparison on Image-to-Video Generation

Given a key frame, we compare Gen-1 and MoVideo (ours) against the ground truth (GT).


Generating Videos with Different FPS

Videos generated at four frame rates (fps = 3, 10, 17 and 24) for the following text prompts:

  • An aircraft is moving on the water
  • A rocket is flying to the sky
  • a boy in purple is skiing on the snow
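The frame-rate control above reflects that fps is an explicit condition of the depth/flow diffusion model. One common way to inject such a scalar condition into a diffusion UNet is to embed it like a timestep, with a sinusoidal embedding followed by an MLP; the snippet below is a hedged illustration of that idea, not necessarily how MoVideo implements it.

    import math
    import torch
    import torch.nn as nn

    def sinusoidal_embedding(value, dim=256):
        """Map a scalar condition (e.g. fps) to a sinusoidal feature vector."""
        half = dim // 2
        freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / (half - 1))
        args = value.float().unsqueeze(-1) * freqs
        return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)

    # project to the model width; this feature is added to the timestep embedding
    fps_mlp = nn.Sequential(nn.Linear(256, 1024), nn.SiLU(), nn.Linear(1024, 1024))
    fps = torch.tensor([3, 10, 17, 24])            # the frame rates shown above
    fps_feat = fps_mlp(sinusoidal_embedding(fps))  # (4, 1024)
    print(fps_feat.shape)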


Bibtex


    @inproceedings{liang2024movideo,
        author = {Liang, Jingyun and Fan, Yuchen and Zhang, Kai and Timofte, Radu and Van Gool, Luc and Ranjan, Rakesh},
        title = {MoVideo: Motion-Aware Video Generation with Diffusion Models},
        booktitle = {European Conference on Computer Vision},
        pages = {0000--0000},
        year = {2024}
    }