Rendering Large Volume Datasets in Unreal Engine 5: A Survey

Published 10 Apr 2025 in cs.GR | arXiv:2504.07485v1

Abstract: In this technical report, we discuss several approaches to in-core rendering of large volumetric datasets in Unreal Engine 5 (UE5). We explore the following methods: the TBRayMarcher Plugin, the Niagara Fluids Plugin, and various approaches using Sparse Volume Textures (SVT), with a particular focus on Heterogeneous Volumes (HV). We found the HV approach to be the most promising. The biggest challenge we encountered with other approaches was the need to chunk datasets so that each fits into volume textures smaller than one gigavoxel. While this enables display of the entire dataset at reasonable frame rates, it introduces noticeable artifacts at chunk borders due to incorrect lighting, as each chunk lacks information about its neighbors. After addressing some (signed) int32 overflows in the Engine's SVT-related source code by converting them to (unsigned) uint32 or int64, the SVT-based HV system allows us to render sparse datasets up to 32k x 32k x 16k voxels, provided the compressed tile data (including MIP data and padding for correct interpolation) does not exceed 4 gigavoxels. In the future, we intend to extend the existing SVT streaming functionality to support out-of-core rendering, in order to eventually overcome VRAM limitations, graphics API constraints, and the performance issues associated with 64-bit arithmetic in GPU shaders.

Summary

  • The paper introduces a modified heterogeneous volume method that fixes integer overflow issues to render large datasets in-core effectively in UE5.
  • It compares multiple rendering approaches, demonstrating that the modified SVT method outperforms alternatives in interactivity, accuracy, and visual fidelity.
  • It outlines future directions for out-of-core rendering to address VRAM and API limitations, paving the way for advanced large-scale visualizations.

This paper explores and evaluates different methods for rendering large volumetric datasets (billions of voxels) directly within Unreal Engine 5 (UE5), primarily for use in a multi-display visualization dome (ARENA2 at GEOMAR). The goal is to find an interactive, accurate, visually appealing, and extensible solution that integrates with existing UE5-based projects and hardware infrastructure.

The authors outline key requirements, including support for large datasets, high display accuracy (minimal lossy compression), compatibility with other tools, interactive explorability (e.g., transfer functions), and sustainability (low maintenance, potential for future updates).

Several approaches were investigated:

  1. Custom Implementation: The authors first built a basic ray marching renderer by following online tutorials. While educational, it suffered from performance bottlenecks due to per-frame lighting calculations and from limited texture resolution, as it relied on pseudo-volume textures.
  2. TBRayMarcher Plugin: An existing UE plugin originally developed for medical data. It uses an efficient illumination cache and true volume textures. While it offers good performance and visuals for smaller datasets (< 1 gigavoxel), it requires DirectX 11 and struggles with large datasets due to VRAM fragmentation and API limits. Chunking the data into smaller volumes allowed larger datasets to be rendered, but introduced significant visual artifacts (incorrect lighting) at chunk borders because each chunk's lighting is computed independently. Sustainability was also a concern due to reliance on a single external developer.
  3. Niagara Fluids Plugin: Adapting UE5's built-in fluid simulation and rendering system. The idea was to leverage its grid-based rendering capabilities, ignoring the simulation aspect. This offered native engine integration and flexibility via the material system. However, it proved impractical due to excessive VRAM consumption caused by the underlying simulation grid allocation (even when unused) and resulted in low effective rendering resolution to avoid crashes.
  4. Sparse Volume Textures (SVT) / Heterogeneous Volumes (HV): An experimental UE5 feature using OpenVDB files as input. SVTs store data sparsely using a tile-based page table approach, potentially saving memory. Heterogeneous Volumes provide the most suitable rendering path for this data within UE5.
    • Initial Problem: Importing large datasets (e.g., the 6 billion voxel Kolumbo seismic dataset, ~2.6 billion non-empty voxels) caused engine crashes. Investigation revealed signed 32-bit integer (int32) overflows in the engine's SVT import and resource handling code, limiting importable compressed data size to ~2 gigavoxels.
    • Solution: The authors modified the UE5 source code, changing problematic int32 types to uint32 or int64. This fix allowed importing and rendering the entire Kolumbo dataset without chunking.
    • Performance & Limits: With the modified engine, the full 6 billion voxel dataset rendered seamlessly at interactive frame rates (~20 fps on an RTX 3500 Ada GPU). The practical limit for this in-core approach was found to be around 4 gigavoxels of compressed tile data (non-empty voxels plus padding and mipmap overhead), constrained by GPU upload buffer addressing (uint32) and DirectX 12 VRAM allocation limits for volume textures (theoretical max payload ~4.9 gigavoxels after accounting for padding/mipmaps).
    • Explorability: The material system allows runtime manipulation of parameters for interactive exploration (e.g., slicing, density scaling, transfer function mapping via Material Parameter Collections and Dynamic Material Instances).

Comparison Table Summary:

| Feature | TBRM | Niagara | Het. Vol. (Vanilla) | Het. Vol. (Modified) |
| --- | --- | --- | --- | --- |
| Max Size / Chunk (GV) | ~1 | ~0.45 (low res) | ~1.5 | >= 6 (sparse limit ~4) |
| Seamless Full Render | No (artifacts) | No (artifacts/limits) | No (workaround slow) | Yes |
| Accuracy | Good | Poor (low res/VRAM) | Good | Good |
| Performance (Full) | Good (but artifacts) | Poor (low res/VRAM) | Very Poor (workaround) | Good (~10-20 fps) |
| Engine Integration | Plugin | Native (Beta) | Native (Exp.) | Modified Engine |
| Sustainability | Medium | Good | Good | Low (requires patch) |

Conclusion:

The modified Heterogeneous Volume approach using Sparse Volume Textures was deemed the most suitable solution for rendering large datasets in-core within UE5 for their specific use case. Fixing engine bugs related to integer overflows was crucial to unlock the potential of SVTs for datasets exceeding 2 gigavoxels of compressed data. While this pushes the boundaries of UE5's current in-core capabilities (up to ~4 gigavoxels compressed sparse data), the authors note that true out-of-core rendering support would be necessary to overcome VRAM and API limitations for even larger datasets. Future work includes contributing their engine patch, exploring domain-specific interactions, and investigating dedicated out-of-core solutions like scenery/sciview or ParaView with NVIDIA IndeX, or potentially extending UE5 itself.
