
Abstract

Character image animation, which generates high-quality videos from a reference image and a target pose sequence, has seen significant progress in recent years. However, most existing methods apply only to human figures and usually do not generalize well to anthropomorphic characters commonly used in industries such as gaming and entertainment. Our in-depth analysis attributes this limitation to their insufficient modeling of motion: they cannot comprehend the movement pattern of the driving video and instead impose a pose sequence rigidly onto the target character. To this end, this paper proposes \(\texttt{Animate-X}\), a universal animation framework based on LDM for various character types (collectively named \(\texttt{X}\)), including anthropomorphic characters. To enhance motion representation, we introduce the Pose Indicator, which captures comprehensive motion patterns from the driving video in both implicit and explicit manners. The former leverages CLIP visual features of the driving video to extract the gist of its motion, such as the overall movement pattern and the temporal relations among motions, while the latter strengthens the generalization of the LDM by simulating in advance possible inputs that may arise during inference. Moreover, we introduce a new Animated Anthropomorphic Benchmark (\(\texttt{$A^2$Bench}\)) to evaluate the performance of \(\texttt{Animate-X}\) on universal and widely applicable animation images. Extensive experiments demonstrate the superiority and effectiveness of \(\texttt{Animate-X}\) compared to state-of-the-art methods.
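To make the explicit side of the Pose Indicator concrete, the sketch below illustrates one plausible form of "simulating possible inputs in advance": the driving pose is randomly rescaled and shifted during training so the model encounters pose-character misalignments of the kind anthropomorphic references produce at inference. The specific transformations, value ranges, and the perturb_pose helper are illustrative assumptions, not the paper's exact augmentation recipe.

# Hedged illustration of simulating misaligned driving poses during training.
# The transforms below (anisotropic rescale + global shift of 2D keypoints)
# are assumptions for illustration, not the authors' exact procedure.
import numpy as np

def perturb_pose(keypoints: np.ndarray,
                 scale_range=(0.7, 1.3),
                 offset_range=(-0.1, 0.1)) -> np.ndarray:
    """Rescale and shift normalized 2D keypoints of shape (J, 2) around their center."""
    center = keypoints.mean(axis=0, keepdims=True)
    scale = np.random.uniform(*scale_range, size=(1, 2))    # stretch body proportions
    offset = np.random.uniform(*offset_range, size=(1, 2))  # global translation
    return (keypoints - center) * scale + center + offset

# Example: a dummy 18-joint pose in [0, 1] coordinates.
pose = np.random.rand(18, 2)
augmented = perturb_pose(pose)  # fed to the pose branch in place of the clean pose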

Overall Framework of Animate-X


The overview of our Animate-X. Given a reference image \(I^r\), we first extract the CLIP image feature \(f^r_{\varphi}\) and the latent feature \(f^r_{e}\) via the CLIP image encoder \(\Phi\) and the VAE encoder \(\mathcal{E}\). The proposed Implicit Pose Indicator (IPI) and Explicit Pose Indicator (EPI) produce the motion feature \(f_i\) and the pose feature \(f_e\), respectively. \(f_e\) is concatenated with the noised input \(\epsilon\) along the channel dimension and then further concatenated with \(f^r_{e}\) along the temporal dimension. This serves as the input to the diffusion model \(\epsilon_\theta\) for progressive denoising. During the denoising process, \(f^r_{\varphi}\) and \(f_i\) provide appearance information from \(I^r\) and motion information from \(I^d_{1:F}\), respectively. Finally, a VAE decoder \(\mathcal{D}\) maps the generated latent representation \(z_0\) to the animation video.
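As a minimal, hedged sketch of how the pieces in this caption fit together, the PyTorch-style pseudocode below wires up toy stand-ins for \(\Phi\), \(\mathcal{E}\)/\(\mathcal{D}\), IPI, EPI, and \(\epsilon_\theta\). All tensor shapes, the zero-padding of the reference latent, and the conditioning mechanism are illustrative assumptions, not the actual Animate-X implementation.

import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not taken from the paper).
B, F, C_LAT, H, W = 1, 8, 4, 32, 32   # batch, frames, latent channels, latent height/width
D_CLIP = 512                          # assumed CLIP feature width

class Stub(nn.Module):
    """Stand-in for a real sub-network (VAE E/D, EPI, or the denoiser eps_theta)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv3d(in_ch, out_ch, kernel_size=1)
    def forward(self, x):
        return self.proj(x)

clip_encoder = nn.Linear(3 * 224 * 224, D_CLIP)  # Phi: reference image -> f_r_phi (toy)
vae_encoder  = Stub(3, C_LAT)                    # E: reference image -> latent f_r_e
epi          = Stub(3, C_LAT)                    # EPI: pose sequence -> pose feature f_e
ipi          = nn.Linear(D_CLIP, D_CLIP)         # IPI: driving-video CLIP feats -> motion feature f_i
eps_theta    = Stub(2 * C_LAT, C_LAT)            # denoiser over the concatenated input
vae_decoder  = Stub(C_LAT, 3)                    # D: latent video -> RGB animation video

# Dummy inputs standing in for I_r, the pose sequence of I_d_{1:F}, and driving CLIP features.
ref_image   = torch.randn(B, 3, 1, H, W)
pose_frames = torch.randn(B, 3, F, H, W)
drive_clip  = torch.randn(B, D_CLIP)

# 1) Reference features from I_r.
f_r_phi = clip_encoder(torch.randn(B, 3 * 224 * 224))  # flattened image as a toy CLIP input
f_r_e   = vae_encoder(ref_image)                        # (B, C_LAT, 1, H, W)

# 2) Pose Indicator outputs.
f_i = ipi(drive_clip)      # implicit motion feature
f_e = epi(pose_frames)     # explicit pose feature, (B, C_LAT, F, H, W)

# 3) Assemble the denoiser input in the order the caption states:
#    channel-wise concat of f_e with the noised input, then temporal concat with f_r_e.
noise   = torch.randn(B, C_LAT, F, H, W)                      # epsilon
x       = torch.cat([noise, f_e], dim=1)                      # (B, 2*C_LAT, F, H, W)
ref_pad = torch.cat([f_r_e, torch.zeros(B, C_LAT, 1, H, W)],  # zero-pad channels so the reference
                    dim=1)                                    # slot matches in width (assumption)
x       = torch.cat([ref_pad, x], dim=2)                      # (B, 2*C_LAT, F + 1, H, W)

# 4) One pass through the stand-in denoiser; in the real model, f_r_phi and f_i would
#    condition eps_theta (e.g., via cross-attention) across many denoising steps to yield z_0.
z_0 = eps_theta(x)

# 5) Decode the latent video (dropping the reference slot) into the animation video.
video = vae_decoder(z_0[:, :, 1:])
print(video.shape)  # torch.Size([1, 3, 8, 32, 32]), i.e. (B, 3, F, H, W)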

Overview of Animate-X

Detailed pipeline of training and inference (This video contains audio)

Comparison with SOTA methods

Animating anthropomorphic characters in fancy posters

Animating anthropomorphic characters in games and cartoons

Animating anthropomorphic characters in \(A^2\)Bench

Animating human-like characters

Animating long videos

References

      
@article{tan2024AnimateX,
  title={Animate-X: Universal Character Image Animation with Enhanced Motion Representation},
  author={Tan, Shuai and Gong, Biao and Wang, Xiang and Zhang, Shiwei and Zheng, Dandan and Zheng, Ruobin and Zheng, Kecheng and Chen, Jingdong and Yang, Ming},
  journal={arXiv preprint arXiv:2410.10306},
  year={2024}
}
        
@article{wang2024Unianimate,
  title={UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation},
  author={Wang, Xiang and Zhang, Shiwei and Gao, Changxin and Wang, Jiayu and Zhou, Xiaoqiang and Zhang, Yingya and Yan, Luxin and Sang, Nong},
  journal={arXiv preprint arXiv:2406.01188},
  year={2024}
}