
AniX: Animating Characters in 3DGS Scenes

Updated 23 December 2025
  • AniX is a system for animating characters in static 3DGS scenes using conditional autoregressive video generation, ensuring temporal coherence and visual fidelity.
  • It employs a conditional framework that integrates multi-view character images, scene masks, and textual instructions with a Transformer enhanced by LoRA modules.
  • The approach accelerates inference via diffusion model distillation, reducing a 30-step process to a 4-step schedule with minimal quality loss.

AniX is a system for animating any user-specified character within any static 3D Gaussian Splatting (3DGS) scene under natural language instruction, synthesizing temporally coherent video clips while preserving the visual fidelity and structural grounding of the input scene and character. The core innovation of AniX is the conditional, autoregressive generation of video conditioned on scene, character, mask, and text instruction, enabling open-ended character actions and object-centric interactions that generalize beyond simple locomotion and limited controllability (Wang et al., 18 Dec 2025).

1. Conditional Autoregressive Video Generation

AniX formulates the character animation task as a conditional autoregressive video generation problem. At interaction step $i$, the system generates a new video clip $V^i$ conditioned on (i) the previous clip $V^{i-1}$ (if any), (ii) the given static 3DGS scene $S$, (iii) multi-view character images $C$, (iv) a per-frame character anchor mask $M$, and (v) the current text instruction $T$. In latent token space (after VAE encoding), this conditional generation is represented as:

$p(V^i \mid V^{i-1}, S, C, M, T)$

Decomposed autoregressively over frames $v_1, \ldots, v_K$:

$p(V^i \mid \cdots) = \prod_k p(v_k \mid v_{<k}, V^{i-1}, S, C, M, T)$

Instead of maximizing log-likelihood, AniX minimizes a continuous-time velocity-matching loss (“Flow Matching”). Defining:

  • $\mathcal{X}_0$ as noise $\sim \mathcal{N}(0, I)$,
  • $\mathcal{X}_1$ as the ground-truth token sequence $\mathcal{T}_V$,
  • For $t \in [0,1]$, $\mathcal{X}_t = (1-t)\mathcal{X}_0 + t\,\mathcal{T}_V$,
  • Ground-truth “velocity”: $u_t := d\mathcal{X}_t/dt = \mathcal{T}_V - \mathcal{X}_0$,

the model $f_\theta(\mathcal{X}_t, t; S, C, M, T, V^{i-1})$ is trained to match $u_t$ via MSE:

$\mathcal{L}(\theta) = \mathbb{E}_{t,\mathcal{X}_0,\mathcal{T}_V} \Big\| f_\theta(\mathcal{X}_t, t; S, C, M, T, V^{i-1}) - (\mathcal{T}_V - \mathcal{X}_0) \Big\|^2$
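The training objective above can be sketched in a few lines. The snippet below is an illustrative NumPy version for a single example, with all conditioning inputs ($S, C, M, T$, the previous clip) folded into the model callable for brevity.

```python
import numpy as np

def flow_matching_loss(f_theta, x1, rng):
    """One training example of the velocity-matching (Flow Matching) loss.

    x1      : ground-truth token sequence T_V, shape (seq_len, dim)
    f_theta : model mapping (x_t, t) -> predicted velocity; in AniX the
              conditions S, C, M, T and the previous clip would also be fed in
    """
    x0 = rng.standard_normal(x1.shape)   # noise sample X_0 ~ N(0, I)
    t = rng.uniform(0.0, 1.0)            # continuous time t in [0, 1]
    xt = (1.0 - t) * x0 + t * x1         # linear interpolant X_t
    u_t = x1 - x0                        # ground-truth velocity dX_t/dt
    pred = f_theta(xt, t)
    return np.mean((pred - u_t) ** 2)    # MSE against the velocity target
```

Note that the velocity target $\mathcal{T}_V - \mathcal{X}_0$ is constant in $t$ for this linear interpolant, which is what makes few-step sampling (Section 5) tractable.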

For conditioning, the scene and mask tokens are projected into the token feature space, and the token embeddings for text, character views, the previous video, and the input tokens are concatenated before being processed by the Transformer.

2. Model Backbone and Conditional Encoding

AniX is built on a pre-trained “HunyuanCustom” video generator, consisting of a VQ-VAE and a Multimodal Diffusion Transformer (MMDiT). The VQ-VAE encoder downsamples the input by 8× spatially and 4× temporally. The decoder and the full-attention Transformer stack (≈13B parameters) are frozen; only rank-64 LoRA modules inserted into each attention and feed-forward layer are trainable.
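The LoRA scheme described above can be sketched as a low-rank update on a frozen linear layer. The rank-64 shape matches the text, but the initialization and scaling below follow generic LoRA conventions, not confirmed HunyuanCustom details.

```python
import numpy as np

class LoRALinear:
    """Frozen linear layer W plus a trainable low-rank update B @ A (rank r).

    Only A and B would receive gradients during AniX's Stage-2 fine-tuning;
    the base weight W stays frozen.
    """
    def __init__(self, w, rank=64, alpha=64, rng=None):
        rng = rng or np.random.default_rng(0)
        d_out, d_in = w.shape
        self.w = w                                          # frozen pretrained weight
        self.a = rng.standard_normal((rank, d_in)) * 0.01   # trainable down-projection
        self.b = np.zeros((d_out, rank))                    # trainable up-projection, zero init
        self.scale = alpha / rank                           # standard LoRA scaling

    def __call__(self, x):
        # y = W x + (alpha / r) * B A x
        return x @ self.w.T + self.scale * (x @ self.a.T) @ self.b.T
```

With the zero-initialized up-projection, the adapted layer starts out exactly equal to the frozen base layer, so fine-tuning begins from the pre-trained model's behavior.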

The system encodes its conditions as follows:

  • Scene ($S$): Rendered as a “scene video” by splatting the input 3DGS along a predefined camera path, then VAE-encoded into the token sequence $\mathcal{T}_S$.
  • Character ($C$): Represented by four canonical multi-view images (front, left, right, back); each view is VAE-encoded into tokens $\mathcal{T}_{C_F}, \ldots, \mathcal{T}_{C_B}$.
  • Mask ($M$): Per-frame binary mask around the character, VAE-encoded into tokens $\mathcal{T}_M$ to help delineate “dynamic” from “static” regions.
  • Text ($T$): Encoded with a frozen LLaVA multi-modal encoder, using both the instruction and the set of character views to produce text-token embeddings $\mathcal{T}_T$.

The fusion strategy projects $\mathcal{T}_S$ and $\mathcal{T}_M$ and sums them with $\mathcal{X}_t$, then concatenates all condition tokens along the sequence dimension for Transformer input.
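A minimal sketch of this fusion, with placeholder projection matrices and token shapes (not the real AniX dimensions):

```python
import numpy as np

def fuse_conditions(x_t, scene_tok, mask_tok, text_tok, char_toks, prev_tok,
                    proj_s, proj_m):
    """Additive fusion of scene/mask tokens with the noisy video tokens X_t,
    followed by sequence-axis concatenation of the remaining conditions.
    proj_s and proj_m stand in for the learned projections into token space.
    """
    x = x_t + scene_tok @ proj_s + mask_tok @ proj_m  # scene/mask summed into X_t
    # concatenate text, character views, previous clip, and fused input tokens
    return np.concatenate([text_tok, *char_toks, prev_tok, x], axis=0)
```

The additive path keeps the scene and mask spatially aligned with the video tokens, while text, character views, and the previous clip join through cross-token attention over the concatenated sequence.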

3. Temporal Coherence and Autoregressive Conditioning

To support long-horizon character behaviors and robust temporal coherence, AniX employs an autoregressive mode. Each clip’s target tokens $\mathcal{T}_V$ are split temporally: the first quarter ($\mathcal{T}_{V,1}$) and the remaining three quarters ($\mathcal{T}_{V,2-4}$). During training, $\mathcal{T}_{V,1}$ (with Gaussian jitter) is used as extra conditioning for predicting $\mathcal{T}_{V,2-4}$, in conjunction with $S, C, M, T$. At inference, $\mathcal{T}_{V,1}$ is taken from the previously generated output, enforcing inter-clip consistency.
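The quarter-split with jittered conditioning can be sketched as follows; the jitter scale `sigma` is an assumption, as the text does not state its value.

```python
import numpy as np

def split_and_jitter(tv, sigma=0.1, rng=None):
    """Split a clip's target tokens into the first quarter (conditioning)
    and the remaining three quarters (prediction target), adding Gaussian
    jitter to the conditioning part as done during training.

    tv    : token sequence T_V, shape (num_frames_latent, dim)
    sigma : jitter scale (illustrative; not specified in the source)
    """
    rng = rng or np.random.default_rng(0)
    k = tv.shape[0] // 4
    tv1 = tv[:k] + sigma * rng.standard_normal(tv[:k].shape)  # jittered T_{V,1}
    tv24 = tv[k:]                                             # target T_{V,2-4}
    return tv1, tv24
```

The jitter makes the model robust to imperfections in its own previously generated tokens, which is what it actually conditions on at inference time.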

Standard 3D rotary positional embeddings (3D-RoPE; time×height×width) are applied to video tokens. For character-view sequences, “shifted” 3D-RoPE prevents positional embedding collisions between views. No positional embeddings are applied to text tokens.
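The shifted scheme amounts to giving each character-view sequence a distinct offset in the position-id grid so its positions never coincide with the video tokens or other views. A minimal sketch, with the shift values purely illustrative:

```python
def rope_positions(t_len, h_len, w_len, t_shift=0):
    """Grid of (time, height, width) position ids for 3D-RoPE.

    Each character view would get its own t_shift so its positions do not
    collide with video tokens or other views (the 'shifted' scheme above).
    """
    return [(t + t_shift, h, w)
            for t in range(t_len)
            for h in range(h_len)
            for w in range(w_len)]
```

For example, video tokens could use `t_shift=0` while the four character views use large distinct offsets, keeping all position ids disjoint before the rotary embedding is applied.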

4. Training Workflow and Data

AniX is trained in two distinct stages:

  • Stage 1: The base model ("HunyuanCustom") is pre-trained on large-scale, broad-coverage text-to-video data.
  • Stage 2: Fine-tuning is conducted only on LoRA modules using a curated GTA-V dataset (“locomotion-and-camera” post-training), sharpening motion dynamics and camera tracking without compromising generalization.

The training data pipeline involves:

  • 2,084 GTA-V gameplay clips (129 frames per clip, five characters, four locomotion and two camera-motion patterns).
  • For each clip: character segmentation to create the mask $M$, background inpainting to obtain an isolated scene video $S$, labeling with a short action text $T$, and rendering the 3DGS character model as four multi-view images $C$.

Key training strategies include scene-condition dropout ($p=0.3$), Gaussian jitter on preceding video tokens for robust autoregressive conditioning, and minimal regularization due to reliance on the frozen foundation model’s priors.
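Scene-condition dropout can be sketched as below; replacing dropped tokens with zeros is an assumption, since the source does not specify the null-condition representation.

```python
import numpy as np

def maybe_drop_scene(scene_tok, p=0.3, rng=None):
    """Scene-condition dropout: with probability p, replace the scene tokens
    with a null condition (zeros here, as an assumption) so the model does
    not over-rely on the rendered scene video."""
    rng = rng or np.random.default_rng(0)
    if rng.uniform() < p:
        return np.zeros_like(scene_tok)
    return scene_tok
```

This is the standard condition-dropout pattern used to preserve generalization when fine-tuning a conditional generator on a narrow dataset.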

5. Acceleration and Inference Optimization

After full model training, AniX applies diffusion model distillation (DMD2) to convert the original 30-step diffusion schedule into a 4-step process, resulting in approximately $7\times$ faster inference with minimal visual or temporal fidelity loss. Only LoRA modules within the student and fake-score networks are fine-tuned during distillation.
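A few-step sampler for the flow ODE $d\mathcal{X}/dt = f_\theta(\mathcal{X}, t)$ can be sketched with plain Euler steps; the actual DMD2 student may use a different solver and timestep placement.

```python
import numpy as np

def sample_few_step(f_theta, shape, steps=4, rng=None):
    """Few-step sampling of the flow ODE dX/dt = f_theta(X, t) with uniform
    Euler steps, as a stand-in for the distilled 4-step schedule."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(shape)              # start from noise X_0
    for k in range(steps):
        t = k / steps
        x = x + (1.0 / steps) * f_theta(x, t)   # Euler update along the velocity
    return x
```

Because the flow-matching velocity target is constant along each interpolation path, a well-trained model's ODE trajectories are nearly straight, which is why aggressive step reduction (30 → 4) loses little quality.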

6. Evaluation and Capabilities

AniX is evaluated on visual quality, character consistency, action controllability, and long-horizon coherence. The system is designed to generalize across actions and characters, providing user-driven, text-conditioned animation in complex 3DGS environments. Users can direct a character across a 3D scene to perform diverse actions—ranging from basic locomotion to object-centric behaviors—over arbitrary time horizons, with each clip seamlessly building on prior context while maintaining structural integrity and visual continuity throughout (Wang et al., 18 Dec 2025).
