High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations (2506.06858v1)

Published 7 Jun 2025 in cs.LG and cs.AI

Abstract: Effective surrogate models are critical for accelerating scientific simulations. Implicit neural representations (INRs) offer a compact and continuous framework for modeling spatially structured data, but they often struggle with complex scientific fields exhibiting localized, high-frequency variations. Recent approaches address this by introducing additional features along rigid geometric structures (e.g., grids), but at the cost of flexibility and increased model size. In this paper, we propose a simple yet effective alternative: Feature-Adaptive INR (FA-INR). FA-INR leverages cross-attention to an augmented memory bank to learn flexible feature representations, enabling adaptive allocation of model capacity based on data characteristics, rather than rigid structural assumptions. To further improve scalability, we introduce a coordinate-guided mixture of experts (MoE) that enhances the specialization and efficiency of feature representations. Experiments on three large-scale ensemble simulation datasets show that FA-INR achieves state-of-the-art fidelity while significantly reducing model size, establishing a new trade-off frontier between accuracy and compactness for INR-based surrogates.

Summary

  • The paper introduces FA-INR, an adaptive framework using cross-attention on an augmented memory bank to enhance surrogate model fidelity in scientific simulations.
  • It incorporates a coordinate-guided mixture of experts to optimize parameter allocation and capture intricate spatial features effectively.
  • Experimental results on large-scale ensemble datasets demonstrate a superior trade-off between accuracy and model compactness compared to existing approaches.

High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations

The paper "High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations" presents a novel framework for enhancing the fidelity of surrogate models in scientific simulations with implicit neural representations (INRs). Acknowledging the computational challenges associated with high-fidelity simulations, the authors propose an innovative approach to improve surrogate modeling while maintaining a compact model architecture.

The proposed framework, Feature-Adaptive INR (FA-INR), leverages adaptive encodings through cross-attention mechanisms on an augmented memory bank. This approach allows the model to learn flexible feature representations and adapt its capacity allocation based on data characteristics rather than relying on rigid geometric structures like grids. The introduction of a coordinate-guided mixture of experts (MoE) mechanism further enhances model specialization, thereby reducing model size without compromising fidelity.
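
To make the mechanism concrete, the sketch below shows single-head cross-attention from an input coordinate to a learnable key-value memory bank in PyTorch. The module and parameter names (`FeatureMemory`, `num_slots`, `feat_dim`) and the single-head formulation are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class FeatureMemory(nn.Module):
    """Cross-attention from query coordinates to a learnable memory bank.

    A simplified, hypothetical sketch of the FA-INR-style feature lookup;
    names and dimensions are assumptions, not the authors' implementation.
    """
    def __init__(self, coord_dim=3, num_slots=256, feat_dim=64):
        super().__init__()
        # Learnable key/value entries, trained jointly with the INR decoder.
        self.keys = nn.Parameter(torch.randn(num_slots, feat_dim))
        self.values = nn.Parameter(torch.randn(num_slots, feat_dim))
        self.to_query = nn.Linear(coord_dim, feat_dim)  # coordinate -> query

    def forward(self, coords):                 # coords: (batch, coord_dim)
        q = self.to_query(coords)              # (batch, feat_dim)
        scores = q @ self.keys.T / self.keys.shape[-1] ** 0.5
        attn = torch.softmax(scores, dim=-1)   # weights over memory slots
        return attn @ self.values              # (batch, feat_dim) feature
```

The attended feature would then be concatenated with the coordinate (and, for ensemble surrogates, the simulation parameters) and decoded by a small MLP into the field value. Because the memory entries are free parameters rather than grid vertices, the optimizer can concentrate capacity wherever the data demand it.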

Key Contributions

  1. Feature-Adaptive INR (FA-INR): The paper introduces FA-INR, which employs cross-attention on an augmented key-value memory bank. This adaptive mechanism improves the model’s ability to represent high-frequency variations and fine-scale structures without relying on pre-defined grids. The approach allows for data-driven allocation of model capacity, enhancing both flexibility and compactness.
  2. Mixture of Experts Integration: The authors incorporate an MoE framework within the FA-INR architecture. It divides the memory bank into multiple specialized expert groups, optimizing feature extraction by routing input coordinates to experts based on their spatial location (see the sketch after this list). This strategy makes more effective use of model parameters and improves efficiency in capturing intricate data structures.
  3. Competitive Model Performance: The model was tested on three large-scale ensemble simulation datasets spanning oceanography, cosmology, and fluid dynamics. FA-INR consistently achieved state-of-the-art performance, significantly improving the trade-off frontier between accuracy and compactness for INR-based surrogates. Notably, FA-INR surpassed other models while considerably reducing model size, highlighting its efficiency and applicability to larger simulations.
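
As a rough illustration of contribution 2, the sketch below splits the memory into per-expert banks and routes each coordinate to a single expert through a learned gate. The hard top-1 gating, the expert count, and the reuse of `FeatureMemory` from the earlier sketch are assumptions made for exposition; the paper's actual routing rule may differ.

```python
import torch
import torch.nn as nn

class CoordinateGuidedMoE(nn.Module):
    """Route each coordinate to one expert memory bank by spatial position."""
    def __init__(self, coord_dim=3, num_experts=4, num_slots=64, feat_dim=64):
        super().__init__()
        self.feat_dim = feat_dim
        # Each expert holds its own memory bank (FeatureMemory from the
        # previous sketch).
        self.experts = nn.ModuleList(
            FeatureMemory(coord_dim, num_slots, feat_dim)
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(coord_dim, num_experts)  # gating by position

    def forward(self, coords):                 # coords: (batch, coord_dim)
        # Hard top-1 routing; in this simplified sketch the gate receives
        # no gradient through argmax.
        expert_idx = self.gate(coords).argmax(dim=-1)
        feats = coords.new_zeros(coords.shape[0], self.feat_dim)
        for e, expert in enumerate(self.experts):
            mask = expert_idx == e
            if mask.any():                     # query only the chosen expert
                feats[mask] = expert(coords[mask])
        return feats
```

Splitting one large bank into smaller per-expert banks keeps each attention lookup cheap while letting experts specialize in different spatial regions, which is the parameter-efficiency argument the paper makes.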

Implications and Future Directions

This paper exemplifies the potential of INRs in advancing surrogate modeling for complex scientific simulations by addressing fidelity and model size, two critical challenges. The introduction of cross-attention mechanisms and specialized expert components in the modeling framework significantly contributes to this endeavor.

In practical terms, this advancement enables more efficient scientific exploration and hypothesis testing, particularly in scenarios where running the full simulations is prohibitively expensive. An adaptable and compact surrogate is especially valuable in fields requiring rapid iteration and exploration, such as climate science, where numerous environmental scenarios must be modeled quickly and accurately.

Looking ahead, the authors suggest several avenues for future research, such as expanding the evaluation to additional datasets and refining the FA-INR framework to improve training efficiency and speed. Addressing these aspects could further enhance the applicability and utility of INR-based surrogates across broader scientific domains.

Conclusion

The paper provides a significant step forward in surrogate model design for scientific simulations by proposing an adaptive and efficient architecture in FA-INR. By integrating cross-attention mechanisms and a MoE framework, the authors successfully enhance model fidelity, representing a substantial contribution to the field of computational modeling and AI-driven scientific exploration.