- The paper introduces a modified heterogeneous volume method that fixes integer overflow issues to render large datasets in-core effectively in UE5.
- It compares multiple rendering approaches, demonstrating that the modified SVT method outperforms alternatives in interactivity, accuracy, and visual fidelity.
- It outlines future directions for out-of-core rendering to address VRAM and API limitations, paving the way for advanced large-scale visualizations.
This paper explores and evaluates different methods for rendering large volumetric datasets (billions of voxels) directly within Unreal Engine 5 (UE5), primarily for use in a multi-display visualization dome (ARENA2 at GEOMAR). The goal is to find an interactive, accurate, visually appealing, and extensible solution that integrates with existing UE5-based projects and hardware infrastructure.
The authors outline key requirements, including support for large datasets, high display accuracy (minimal lossy compression), compatibility with other tools, interactive explorability (e.g., transfer functions), and sustainability (low maintenance, potential for future updates).
Several approaches were investigated:
- Custom Implementation: Following online tutorials, a basic ray marching renderer was built. While educational, it suffered from performance bottlenecks due to per-frame lighting calculations and limitations in texture resolution (using pseudo-volume textures).
- TBRayMarcher Plugin: An existing UE plugin originally developed for medical data. It uses an efficient illumination cache technique and true volume textures. While offering good performance and visuals for smaller datasets (< 1 gigavoxel), it requires DirectX 11 and struggles with large datasets due to VRAM fragmentation and API limits. Chunking the data into smaller volumes allowed rendering larger datasets but introduced significant visual artifacts (incorrect lighting) at the chunk borders, because each chunk's lighting is computed independently. Sustainability was also a concern due to reliance on a single external developer.
- Niagara Fluids Plugin: Adapting UE5's built-in fluid simulation and rendering system. The idea was to leverage its grid-based rendering capabilities, ignoring the simulation aspect. This offered native engine integration and flexibility via the material system. However, it proved impractical due to excessive VRAM consumption caused by the underlying simulation grid allocation (even when unused) and resulted in low effective rendering resolution to avoid crashes.
- Sparse Volume Textures (SVT) / Heterogeneous Volumes (HV): An experimental UE5 feature using OpenVDB files as input. SVTs store data sparsely using a tile-based page table approach, potentially saving memory. Heterogeneous Volumes provide the most suitable rendering path for this data within UE5.
- Initial Problem: Importing large datasets (e.g., the 6 billion voxel Kolumbo seismic dataset, ~2.6 billion non-empty voxels) caused engine crashes. Investigation revealed signed 32-bit integer (`int32`) overflows in the engine's SVT import and resource handling code, limiting importable compressed data size to ~2 gigavoxels.
- Solution: The authors modified the UE5 source code, changing problematic `int32` types to `uint32` or `int64`. This fix allowed importing and rendering the entire Kolumbo dataset without chunking.
- Performance & Limits: With the modified engine, the full 6 billion voxel dataset rendered seamlessly at interactive frame rates (~20 fps on an RTX 3500 Ada GPU). The practical limit for this in-core approach was found to be around 4 gigavoxels of compressed tile data (non-empty voxels plus padding and mipmap overhead), constrained by GPU upload buffer addressing (`uint32`) and DirectX 12 VRAM allocation limits for volume textures (theoretical max payload ~4.9 gigavoxels after accounting for padding/mipmaps).
- Explorability: The material system allows runtime manipulation of parameters for interactive exploration (e.g., slicing, density scaling, transfer function mapping via Material Parameter Collections and Dynamic Material Instances).
Comparison Table Summary:
| Feature | TBRM | Niagara | Het. Vol. (Vanilla) | Het. Vol. (Modified) |
| --- | --- | --- | --- | --- |
| Max Size / Chunk (GV) | ~1 | ~0.45 (low res) | ~1.5 | >= 6 (sparse limit ~4) |
| Seamless Full Render | No (artifacts) | No (artifacts/limits) | No (workaround slow) | Yes |
| Accuracy | Good | Poor (low res/VRAM) | Good | Good |
| Performance (Full) | Good (but artifacts) | Poor (low res/VRAM) | Very Poor (workaround) | Good (~10-20 fps) |
| Engine Integration | Plugin | Native (Beta) | Native (Exp.) | Modified Engine |
| Sustainability | Medium | Good | Good | Low (requires patch) |
Conclusion:
The modified Heterogeneous Volume approach using Sparse Volume Textures was deemed the most suitable solution for rendering large datasets in-core within UE5 for their specific use case. Fixing engine bugs related to integer overflows was crucial to unlock the potential of SVTs for datasets exceeding 2 gigavoxels of compressed data. While this pushes the boundaries of UE5's current in-core capabilities (up to ~4 gigavoxels compressed sparse data), the authors note that true out-of-core rendering support would be necessary to overcome VRAM and API limitations for even larger datasets. Future work includes contributing their engine patch, exploring domain-specific interactions, and investigating dedicated out-of-core solutions like scenery/sciview or ParaView with NVIDIA IndeX, or potentially extending UE5 itself.