A$^2$TG: Adaptive Anisotropic Textured Gaussians for Efficient 3D Scene Representation

Published 14 Jan 2026 in cs.CV | (2601.09243v1)

Abstract: Gaussian Splatting has emerged as a powerful representation for high-quality, real-time 3D scene rendering. While recent works extend Gaussians with learnable textures to enrich visual appearance, existing approaches allocate a fixed square texture per primitive, leading to inefficient memory usage and limited adaptability to scene variability. In this paper, we introduce adaptive anisotropic textured Gaussians (A$^2$TG), a novel representation that generalizes textured Gaussians by equipping each primitive with an anisotropic texture. Our method employs a gradient-guided adaptive rule to jointly determine texture resolution and aspect ratio, enabling non-uniform, detail-aware allocation that aligns with the anisotropic nature of Gaussian splats. This design significantly improves texture efficiency, reducing memory consumption while enhancing image quality. Experiments on multiple benchmark datasets demonstrate that A$^2$TG consistently outperforms fixed-texture Gaussian Splatting methods, achieving comparable rendering fidelity with substantially lower memory requirements.

Summary

  • The paper introduces a gradient-guided adaptive anisotropic textured Gaussian framework that optimizes memory usage and image quality.
  • It employs dynamic texture allocation based on per-primitive gradients to upscale resolution only in high-frequency, anisotropic regions.
  • Results show higher PSNR and SSIM with reduced memory overhead, enabling efficient, real-time 3D scene reconstruction.

Adaptive Anisotropic Textured Gaussians (A$^2$TG): Memory-Efficient High-Fidelity 3D Scene Representation

Introduction

The paper introduces Adaptive Anisotropic Textured Gaussians (A$^2$TG), a new framework for efficient 3D scene representation using anisotropic textured Gaussian primitives. The approach extends the Gaussian Splatting paradigm by equipping each splat with an adaptively selected anisotropic texture, determined via gradient-guided rules aligned to the geometric properties of each primitive. This generalization addresses critical memory overheads and texture inefficiencies in prior Gaussian splatting methods, especially those that attach uniform, fixed-size square textures to primitives regardless of their spatial or frequency content.

Methodology

A$^2$TG builds on 2D Gaussian Splatting (2DGS), leveraging its benefits in explicit geometry and local UV parameterization. The core innovation lies in per-primitive texture adaptation:

  • Gradient-Guided Texture Allocation: The system tracks image gradients and geometry for each splat, dynamically upscaling texture resolution and aspect ratio only where high-frequency appearance or directional content is detected. This fine-grained control avoids superfluous parameter allocation, focusing texture capacity where reconstruction quality demands it.
  • Anisotropic Texture Mapping: Textures are not constrained to squares. The framework computes semi-axis ratios for each Gaussian and assigns rectangular textures with dimensions $T_u \times T_v$ chosen to match the anisotropy of each primitive’s footprint in screen space.
  • Iterative Upscaling and Optimization: An MCMC densification process is used during pretraining to control Gaussian count. Following this, texture parameters and Gaussian attributes are optimized iteratively, with further texture upscaling driven by accumulated per-pixel gradients. Anisotropic upscaling decisions are applied every 500 iterations, initializing new texture pixels by bilinear interpolation from existing textures.
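The adaptive rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact algorithm: the scalar gradient statistic, the threshold, the doubling policy, and the resolution cap are all assumptions for exposition.

```python
def adapt_texture_shape(grad_accum, semi_axes, tex_shape,
                        grad_thresh=1e-4, max_res=16):
    """Hypothetical sketch of a gradient-guided anisotropic upscaling rule.

    grad_accum : accumulated per-primitive image gradient (scalar statistic)
    semi_axes  : (s_u, s_v) semi-axis lengths of the Gaussian footprint
    tex_shape  : current (T_u, T_v) texture resolution
    Returns the (possibly upscaled) anisotropic texture shape.
    """
    T_u, T_v = tex_shape
    # Low-frequency region: leave the texture at its current (small) size.
    if grad_accum < grad_thresh:
        return T_u, T_v
    s_u, s_v = semi_axes
    # Double resolution along whichever axis is under-resolved relative to
    # the primitive's anisotropy, so T_u/T_v tracks s_u/s_v over iterations.
    if s_u / s_v >= T_u / T_v:
        T_u = min(2 * T_u, max_res)
    else:
        T_v = min(2 * T_v, max_res)
    return T_u, T_v
```

Applied every 500 iterations as the paper describes, such a rule keeps low-gradient splats at 1×1 textures while high-gradient, elongated splats grow rectangular textures aligned to their footprints; newly created texels would then be filled by bilinear interpolation from the existing texture.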

Numerical Results

A$^2$TG demonstrates distinct improvements in memory efficiency and image quality across benchmarks including the Mip-NeRF 360, Tanks and Temples, and DeepBlending datasets. Under fixed memory constraints (e.g., 200MB), A$^2$TG consistently achieves higher PSNR and SSIM and lower LPIPS than competing baselines, particularly fixed-texture Gaussian methods. For instance, at 200MB, A$^2$TG delivers a PSNR of 29.86 on DeepBlending with only 189.42MB memory usage, outperforming Textured Gaussians (PSNR 29.51, 200MB).

When comparing on a fixed Gaussian count, A$^2$TG approaches the best-performing baselines in visual fidelity while using substantially less memory. At 500k Gaussians, A$^2$TG matches the PSNR/SSIM of Textured Gaussians with only a 28–32% memory increase over 2DGS, while Textured Gaussians typically require +110% overhead.
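A back-of-the-envelope calculation shows why skewed texture allocation changes the memory picture. The sketch below compares overhead under a uniform square texture versus a distribution where most primitives keep 1×1 textures; the per-Gaussian attribute count, texel format, and texture sizes here are hypothetical, not the paper's actual layout.

```python
def texture_overhead(n_gauss, base_floats_per_gauss, texel_counts,
                     floats_per_texel=3, bytes_per_float=4):
    """Illustrative memory accounting (assumed sizes, not the paper's
    exact layout). Returns texture overhead relative to the base
    (untextured) representation, in percent."""
    base = n_gauss * base_floats_per_gauss * bytes_per_float
    tex = sum(texel_counts) * floats_per_texel * bytes_per_float
    return 100.0 * tex / base

# Fixed 8x8 square textures on every Gaussian vs. an adaptive allocation
# where most Gaussians keep a 1x1 texture (both distributions hypothetical):
n = 500_000
fixed = texture_overhead(n, 59, [64] * n)
adaptive = texture_overhead(
    n, 59, [1] * int(0.624 * n) + [16] * int(0.376 * n))
```

Even with these made-up constants, the uniform scheme's overhead is roughly an order of magnitude larger than the adaptive one, which is the qualitative effect the reported 28–32% vs. +110% numbers reflect.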

Ablation studies clarify the contributions of adaptive resolution scaling and anisotropy, showing that disabling either increases memory cost or degrades reconstruction quality. Notably, 62.4% of Gaussians retain minimal 1×1 textures, with upscaled textures predominantly assigned to high-gradient, anisotropic regions such as edges.

Practical and Theoretical Implications

A$^2$TG pushes the boundaries of textured Gaussian splatting by demonstrating that dynamic, detail-and-geometry-aware texture allocation is central to scalable scene modeling. By concentrating resources adaptively, the method enables high-fidelity scene reconstructions even under tight memory budgets, a critical requirement for real-time and embedded rendering systems.

The methodology is orthogonal and complementary to prior primitive-count and attribute compression techniques; integrating A$^2$TG with advanced compression pipelines could further improve deployability on resource-constrained hardware. Beyond memory gains, the adaptive approach generalizes well: future extensions could include dynamic texture downscaling, integration with more flexible primitive shapes such as Deformable Radial Kernel Splatting, and 4DGS temporal modeling.

Of particular note is A$^2$TG’s effectiveness in packing variable textures into a dense atlas structure for GPU rendering, sustaining real-time framerates (over 30 FPS) and outperforming fixed-texture baselines in inference throughput under high Gaussian counts.
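One plausible way to pack variable-size rectangular textures into a single dense atlas is shelf packing, sketched below; the paper's actual packing scheme is not specified here, so this is an illustrative baseline rather than the authors' method.

```python
def pack_shelf(tex_shapes, atlas_width=1024):
    """Hypothetical shelf-packing sketch for variable-size rectangular
    textures into one atlas. Returns per-texture (x, y) offsets and the
    total atlas height used.

    tex_shapes: list of (width, height) texel dimensions.
    """
    # Place tall textures first so each shelf wastes little vertical space.
    order = sorted(range(len(tex_shapes)), key=lambda i: -tex_shapes[i][1])
    offsets = [None] * len(tex_shapes)
    x = y = shelf_h = 0
    for i in order:
        w, h = tex_shapes[i]
        if x + w > atlas_width:      # current shelf full: open a new one
            y += shelf_h
            x, shelf_h = 0, 0
        offsets[i] = (x, y)
        x += w
        shelf_h = max(shelf_h, h)
    return offsets, y + shelf_h
```

At render time, each Gaussian would then carry only an atlas offset and its $T_u \times T_v$ extent, letting a single GPU texture fetch serve all primitives regardless of their individual texture sizes.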

Implications for Future AI Research

The gradient-based adaptive mechanism demonstrated in A$^2$TG is emblematic of a more general principle: data-driven, spatially local resource allocation yields efficient and effective scene representations. This could inspire future AI-driven graphics pipelines that place more emphasis on locally adaptive parameterization, not only for textures but for geometry, shading, and semantics.

Moreover, incorporating such adaptive representations into neural rendering or generative view synthesis frameworks (e.g., NeRF variants) may bridge gaps in scalability, memory robustness, and real-time usability. Hybrid models combining 2DGS/3DGS with learned adaptive primitives may offer an attractive avenue for high-quality, memory-efficient 3D modeling suitable for mobile AR/VR, robotics, and simulation environments.

Conclusion

A$^2$TG introduces an impactful advancement in 3D scene representation, generalizing textured Gaussian splatting with anisotropic, adaptively determined textures. The framework achieves a compelling trade-off between memory consumption and rendering fidelity, establishing new standards for memory-efficient, detail-rich novel view synthesis. By adopting principled, gradient-guided texture allocation, A$^2$TG improves upon previous fixed-texture paradigms in both practical deployment and theoretical efficiency, with wide-ranging prospects for further research in adaptive scene modeling and hybrid neural representation.
