StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2

CVPR 2022

Ivan Skorokhodov, Sergey Tulyakov, Mohamed Elhoseiny

Abstract

Videos show continuous events, yet most, if not all, video synthesis frameworks treat them discretely in time. In this work, we think of videos as what they should be, namely time-continuous signals, and extend the paradigm of neural representations to build a continuous-time video generator. For this, we first design continuous motion representations through the lens of positional embeddings. Then, we explore the question of training on very sparse videos and demonstrate that a good generator can be learned from as few as 2 frames per clip. After that, we rethink the traditional pair of image and video discriminators and propose to use a single hypernetwork-based discriminator instead. This decreases the training cost and provides a richer learning signal to the generator, making it possible to train directly on 1024x1024 videos for the first time. We build our model on top of StyleGAN2, and it is just 10% more expensive to train at the same resolution while achieving almost the same image quality. Moreover, our latent space features similar properties, enabling spatial manipulations that our method can propagate in time. We can generate arbitrarily long videos at an arbitrarily high frame rate, while prior work struggles to generate even 64 frames at a fixed rate. Our model achieves state-of-the-art results on four modern 256x256 video synthesis benchmarks and on one 1024x1024-resolution benchmark.
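As a minimal illustration of what "continuous motion representations through the lens of positional embeddings" buys, the sketch below (PyTorch; the function name, dimensions, and frequencies are illustrative assumptions, not the actual StyleGAN-V code, which uses learnable, motion-conditioned embeddings) shows how a sinusoidal embedding of a continuous timestamp t lets a generator be conditioned on any point in time rather than on an integer frame index:

import math
import torch

def time_embedding(t: torch.Tensor, dim: int = 256, max_period: float = 1e4) -> torch.Tensor:
    # Sinusoidal embedding of continuous timestamps t of shape [batch].
    half = dim // 2
    freqs = torch.exp(-math.log(max_period) * torch.arange(half, dtype=torch.float32) / half)
    args = t[:, None].float() * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)  # [batch, dim]

# Because t is continuous, frames can be requested at any timestamp
# (e.g. t = 3.25 s), not only on a fixed integer frame grid.
emb = time_embedding(torch.tensor([0.0, 3.25, 10.5]))
print(emb.shape)  # torch.Size([3, 256])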

Note: please use the latest version of Chrome/Chromium or Safari to watch the videos (alternatively, you can download a video and watch it offline). Some videos may be displayed incorrectly in other web browsers (e.g., Firefox).


Random videos on FaceForensics 256x256

StyleGAN-V (ours)

MoCoGAN + StyleGAN2 backbone

MoCoGAN-HD

VideoGPT

DIGAN

StyleGAN-V with continuous LSTM codes (δz = 1) instead of our positional embeddings

StyleGAN-V with continuous LSTM codes (δz = 16) instead of our positional embeddings


Random videos on SkyTimelapse 256x256

StyleGAN-V (ours)

MoCoGAN + StyleGAN2 backbone

MoCoGAN-HD

VideoGPT

DIGAN

StyleGAN-V with continuous LSTM codes (δz = 1) instead of our positional embeddings

StyleGAN-V with continuous LSTM codes (δz = 16) instead of our positional embeddings


Random videos on RainbowJelly 256x256

StyleGAN-V (ours)

MoCoGAN + StyleGAN2 backbone

MoCoGAN-HD

VideoGPT

DIGAN


Random videos on UCF101 256x256

StyleGAN-V (ours)

MoCoGAN-HD

VideoGPT

DIGAN


Random videos on MEAD 1024x1024

MoCoGAN-HD

StyleGAN-V (ours)


Projecting off-the-shelf images into our model.

Note: FaceForensics is a much more limited dataset than FFHQ (only ~700 identities!), which is why our projection results are inferior to those of StyleGAN2.
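A minimal sketch of how such a projection can be implemented, assuming a hypothetical interface in which G.mapping(z) produces a latent w and G.synthesis(w, t) renders one frame at timestamp t (the released code differs, and a perceptual LPIPS term is typically added to the pixel loss):

import torch
import torch.nn.functional as F

def project_image(G, target, num_steps=500, lr=0.05, device="cuda"):
    # Optimize a latent w so that the frame rendered at t=0 reconstructs `target`.
    # `target` is a [1, 3, H, W] image tensor scaled to [-1, 1].
    target = target.to(device)
    z = torch.randn(1, G.z_dim, device=device)
    w = G.mapping(z).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    t0 = torch.zeros(1, device=device)  # project onto the first frame of the video

    for _ in range(num_steps):
        img = G.synthesis(w, t0)
        loss = F.mse_loss(img, target)  # in practice, add an LPIPS perceptual term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

Once w is found, rendering G.synthesis(w, t) at other timestamps animates the projected image with motion sampled from the model.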


Video editing using CLIP

Text prompts used (left-to-right, top-to-bottom): «An old person», «A person with makeup», «A person with a purple t-shirt»

Left: original video. Right: edited with «A person with a beard»

Left: original video. Right: edited with «A person with blue eyes»

Left: original video. Right: edited with «An old person»

Left: original video. Right: edited with «Bright sun»

Left: original video. Right: edited with «Very cloudy day»

Left: original video. Right: edited with «Aurora»
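The edits above follow the usual CLIP-guided latent-optimization recipe. A hedged sketch is given below, reusing the same hypothetical G.synthesis(w, t) interface; motion codes are implicit in this interface, and CLIP's input normalization is omitted for brevity:

import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

def clip_edit(G, w, prompt, num_steps=200, lr=0.02, device="cuda"):
    # Nudge a projected latent `w` towards a text prompt with a CLIP similarity loss.
    model, _ = clip.load("ViT-B/32", device=device)
    model = model.float()  # keep fp32 so gradients flow to w without dtype issues
    for p in model.parameters():
        p.requires_grad_(False)

    with torch.no_grad():
        text_feat = model.encode_text(clip.tokenize([prompt]).to(device))
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    w = w.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(num_steps):
        t = torch.rand(1, device=device) * 10.0  # random timestamp so the edit holds over time
        img = G.synthesis(w, t)                  # [1, 3, H, W] in [-1, 1]
        img = F.interpolate((img + 1) / 2, size=224, mode="bilinear", align_corners=False)
        img_feat = model.encode_image(img)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        loss = 1.0 - (img_feat * text_feat).sum()  # cosine distance to the prompt
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

In this sketch only the content latent is optimized while the motion codes are left untouched, which is what lets an edit stay consistent across all frames of the video.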


Increasing the frame rate 5x on random samples from FaceForensics 256x256

Increasing the frame rate 5x
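Because the generator is conditioned on a continuous timestamp, increasing the frame rate requires no re-training or frame interpolation: the same latent is simply sampled on a denser time grid. A hedged sketch with the same hypothetical G.synthesis(w, t) interface:

import torch

def render_clip(G, w, duration_sec=4.0, fps=25, device="cuda"):
    # Render a clip at an arbitrary frame rate by evaluating the continuous
    # generator on an evenly spaced time grid.
    num_frames = int(duration_sec * fps)
    timestamps = torch.linspace(0.0, duration_sec, num_frames, device=device)
    frames = [G.synthesis(w, t[None]) for t in timestamps]
    return torch.stack(frames)  # [num_frames, 1, 3, H, W]

# The same latent rendered at 25 fps and at 125 fps gives the 5x frame-rate increase shown above:
# clip_25  = render_clip(G, w, fps=25)
# clip_125 = render_clip(G, w, fps=125)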


Content/motion decomposition

Content/motion decomposition for FaceForensics 256x256 (left) and SkyTimelapse 256x256 (right). Each row is a different content code, while each column is a different set of motion codes. Note that our method captures temporal patterns not only in terms of motion, but also appearance changes, like time of day.
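A hedged sketch of how such a grid can be assembled, assuming hypothetical helpers G.mapping(z) for content latents, G.sample_motion(seed) for motion-code trajectories, and G.synthesis(w, t, motion) for rendering (the released code differs):

import torch

def decomposition_grid(G, num_contents=3, num_motions=3, num_frames=16, device="cuda"):
    # Rows share a content code, columns share a motion trajectory.
    contents = [G.mapping(torch.randn(1, G.z_dim, device=device)) for _ in range(num_contents)]
    motions = [G.sample_motion(seed=j) for j in range(num_motions)]
    timestamps = torch.linspace(0.0, 1.0, num_frames, device=device)

    grid = []
    for w in contents:                      # one row per content code
        row = []
        for m in motions:                   # one column per motion trajectory
            frames = torch.stack([G.synthesis(w, t[None], m) for t in timestamps])
            row.append(frames)              # [num_frames, 1, 3, H, W]
        grid.append(row)
    return grid                             # grid[i][j] shares content i and motion j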


Real videos

Real videos for RainbowJelly 256x256

Real videos for MEAD 256x256. Note that the heads stay in static positions

BibTeX

@inproceedings{stylegan-v,
    title={{StyleGAN-V}: A Continuous Video Generator with the Price, Image Quality and Perks of {StyleGAN2}},
    author={Skorokhodov, Ivan and Tulyakov, Sergey and Elhoseiny, Mohamed},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={3626--3636},
    year={2022}
}
@inproceedings{digan,
    title={Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks},
    author={Yu, Sihyun and Tack, Jihoon and Mo, Sangwoo and Kim, Hyunsu and Kim, Junho and Ha, Jung-Woo and Shin, Jinwoo},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=Czsdv-S4-w9}
}