SetONet: A Deep Set-based Operator Network for Solving PDEs with permutation invariant variable input sampling (2505.04738v1)

Published 7 May 2025 in cs.LG

Abstract: Neural operators, particularly the Deep Operator Network (DeepONet), have shown promise in learning mappings between function spaces for solving differential equations. However, standard DeepONet requires input functions to be sampled at fixed locations, limiting its applicability in scenarios with variable sensor configurations, missing data, or irregular grids. We introduce the Set Operator Network (SetONet), a novel architecture that integrates Deep Sets principles into the DeepONet framework to address this limitation. The core innovation lies in the SetONet branch network, which processes the input function as an unordered \emph{set} of location-value pairs. This design ensures permutation invariance with respect to the input points, making SetONet inherently robust to variations in the number and locations of sensors. SetONet learns richer, spatially-aware input representations by explicitly processing spatial coordinates and function values. We demonstrate SetONet's effectiveness on several benchmark problems, including derivative/anti-derivative operators, 1D Darcy flow, and 2D elasticity. Results show that SetONet successfully learns operators under variable input sampling conditions where standard DeepONet fails. Furthermore, SetONet is architecturally robust to sensor drop-off, unlike standard DeepONet, which requires methods like interpolation to function with missing data. Notably, SetONet can achieve comparable or improved accuracy over DeepONet on fixed grids, particularly for nonlinear problems, likely due to its enhanced input representation. SetONet provides a flexible and robust extension to the neural operator toolkit, significantly broadening the applicability of operator learning to problems with variable or incomplete input data.

Summary

  • The paper introduces SetONet, a neural operator that extends DeepONet using Deep Sets principles to achieve permutation invariance for variable input sensor data when solving PDEs.
  • Experimental results show SetONet maintains accuracy with fixed sensors and performs exceptionally well with variable sensor inputs or drop-off, outperforming DeepONet in robustness.
  • SetONet overcomes the fixed-input limitation of previous operator networks, offering a flexible and robust approach for learning PDE solutions from inconsistent or sparse sensor data in real-world applications.

SetONet: A Deep Set-based Operator Network for Solving PDEs with Permutation Invariant Variable Input Sampling

The paper "SetONet: A Deep Set-based Operator Network for Solving PDEs with permutation invariant variable input sampling" introduces an advance in neural operator architecture aimed at a key limitation of existing methods. Specifically, the authors propose the Set Operator Network (SetONet), which integrates principles from Deep Sets into the established Deep Operator Network (DeepONet) framework to support variable input sampling when learning solution operators of partial differential equations (PDEs).

Core Innovation

The primary innovation of the SetONet architecture centers on its branch network. This component processes input functions in the form of an unordered set of location-value pairs, $\{(\boldsymbol{x}_i, g(\boldsymbol{x}_i))\}_{i=1}^M$. A significant advantage of such a design is that it ensures permutation invariance with respect to the sensor locations, making SetONet inherently robust to variations in both the number ($M$) and spatial distribution ($\boldsymbol{x}_i$) of input sensors. This contrasts with the standard DeepONet architecture, which necessitates fixed sensor positions for data input, often limiting its applicability in real-world scenarios with irregular grid setups or missing data.
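The permutation-invariant set processing described above can be sketched with the standard Deep Sets recipe: an element-wise encoder applied to each location-value pair, a symmetric pooling operation, and a post-pooling transform. The layer sizes and weights below are illustrative placeholders, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer widths; the paper's actual network sizes are not given here.
D_IN, D_HID, D_LATENT = 2, 32, 16
W1 = rng.normal(scale=0.1, size=(D_IN, D_HID))
W2 = rng.normal(scale=0.1, size=(D_HID, D_LATENT))
W3 = rng.normal(scale=0.1, size=(D_LATENT, D_LATENT))

def phi(pairs):
    # Element-wise encoder applied independently to each (x_i, g(x_i)) pair.
    return np.tanh(pairs @ W1) @ W2

def branch(pairs):
    # Deep Sets aggregation: mean pooling makes the output invariant to
    # sensor ordering and well-defined for any number of sensors M.
    pooled = phi(pairs).mean(axis=0)
    return np.tanh(pooled @ W3)

# Sensor readings as an unordered set of (location, value) rows.
xs = np.linspace(0.0, 1.0, 10)
pairs = np.stack([xs, np.sin(2 * np.pi * xs)], axis=1)

out = branch(pairs)
out_perm = branch(pairs[rng.permutation(len(pairs))])
assert np.allclose(out, out_perm)  # shuffling sensors leaves the encoding unchanged
```

Because the pooling is a mean rather than a concatenation, the same weights apply whether 10 or 100 sensors are present, which is the property that lets the branch handle variable sampling.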

Experimental Results

Empirical evaluations illustrate SetONet's effectiveness across multiple benchmark problems: derivative/anti-derivative operators, 1D Darcy flow, and 2D elasticity. Results indicate that SetONet not only maintains high accuracy under fixed sensing conditions but also excels when handling variable sensor inputs, a regime where conventional DeepONet fails. Notably, SetONet demonstrates robustness to sensor drop-off during testing, an essential consideration for practical applications. The superior performance may be attributed to SetONet's ability to leverage richer spatial information, as evidenced by its lower error rates compared to DeepONet in nonlinear PDE problems such as Darcy flow.
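The sensor drop-off setting can be made concrete with a small sketch (illustrative setup, not the paper's exact evaluation protocol): a random subset of a dense sensor grid survives, a set-based model consumes the surviving pairs directly, while a fixed-grid model would first need the readings interpolated back onto the full grid:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dense reference grid and readings for an illustrative 1D input function.
x_full = np.linspace(0.0, 1.0, 100)
g_full = np.sin(2.0 * np.pi * x_full)

# Simulate roughly 30% sensor drop-off by keeping a random subset.
keep = rng.random(x_full.size) > 0.3
pairs = np.stack([x_full[keep], g_full[keep]], axis=1)  # set input, variable size

# A fixed-grid model such as standard DeepONet would instead require the
# surviving readings to be interpolated back onto the full 100-point grid:
g_interp = np.interp(x_full, x_full[keep], g_full[keep])
```

The `pairs` array can shrink or grow between samples without breaking a set-based branch, whereas the interpolation step introduces its own approximation error before the fixed-grid model ever sees the data.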

Architectural Components

SetONet extends the foundational operator learning architecture without altering the core aim: learning mappings between function spaces. Retaining the dual network structure typical of DeepONet, SetONet effectively captures essential characteristics of input functions via two component networks: a branch network encodes the input function from its sensor readings, while a trunk network computes basis vectors relative to the query location ($\boldsymbol{y}$). However, by implementing the branch network as a permutation-invariant aggregator inspired by Deep Sets—using element-wise transformations and attention-based mechanisms—the architecture ensures consistent operator learning, adaptive to varying input conditions.
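The branch/trunk combination described above follows the standard DeepONet recipe: the prediction at a query point is the inner product of branch coefficients with trunk basis vectors, with the branch swapped for a set-based encoder. The weights and sizes below are stand-ins for illustration, not the paper's trained networks:

```python
import numpy as np

rng = np.random.default_rng(2)
P = 16  # number of shared basis functions (illustrative choice)

W_b = rng.normal(scale=0.1, size=(2, P))  # stand-in branch weights
W_t = rng.normal(scale=0.1, size=(1, P))  # stand-in trunk weights

def branch(pairs):
    # Set-based branch: element-wise map over (x_i, g(x_i)) pairs,
    # then mean pooling for permutation invariance.
    return np.tanh(pairs @ W_b).mean(axis=0)  # shape (P,)

def trunk(y):
    # Trunk network: basis vectors evaluated at the query locations y.
    return np.tanh(y @ W_t)  # shape (n, P)

def operator(pairs, y):
    # DeepONet-style combination: G(g)(y) approx sum_k b_k * t_k(y).
    return trunk(y) @ branch(pairs)  # shape (n,)

xs = np.linspace(0.0, 1.0, 20)
pairs = np.stack([xs, np.cos(np.pi * xs)], axis=1)
y = np.linspace(0.0, 1.0, 5)[:, None]
pred = operator(pairs, y)
```

Note that only the branch changes relative to standard DeepONet; the trunk and the inner-product readout are untouched, which is why SetONet can inherit the rest of the DeepONet training pipeline.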

Theoretical Implications

The introduction of SetONet signifies progress in neural-network-based operator learning, expanding the flexibility of solution architectures. By enabling learning across varied sensor configurations without sacrificing efficiency or accuracy, SetONet removes the fixed-input requirement that constrained previous models such as DeepONet.

Practical Implications and Future Directions

SetONet's applicability spans domains where data acquisition is inherently inconsistent or sparse, such as sensor networks or environmental monitoring. The framework offers resilience against sensor loss and incomplete function discretizations. Future explorations could focus on other aspects of SetONet's architecture, such as alternative positional encoding strategies or learned spatial encodings, to refine predictive accuracy further.

In conclusion, SetONet represents a robust, adaptable extension for neural PDE solvers, achieved through its set-based, permutation-invariant input sampling mechanism. This marks a significant advance in the neural operator domain, as SetONet provides a promising approach to operator learning tasks characterized by non-static or incomplete input data.