Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations (2107.07871v1)

Published 16 Jul 2021 in physics.comp-ph and cs.LG

Abstract: Recently, physics-informed neural networks (PINNs) have offered a powerful new paradigm for solving problems relating to differential equations. Compared to classical numerical methods, PINNs have several advantages, for example their ability to provide mesh-free solutions of differential equations and their ability to carry out forward and inverse modelling within the same optimisation problem. Whilst promising, a key limitation to date is that PINNs have struggled to accurately and efficiently solve problems with large domains and/or multi-scale solutions, which is crucial for their real-world application. Multiple significant and related factors contribute to this issue, including the increasing complexity of the underlying PINN optimisation problem as the problem size grows and the spectral bias of neural networks. In this work we propose a new, scalable approach for solving large problems relating to differential equations called Finite Basis PINNs (FBPINNs). FBPINNs are inspired by classical finite element methods, where the solution of the differential equation is expressed as the sum of a finite set of basis functions with compact support. In FBPINNs neural networks are used to learn these basis functions, which are defined over small, overlapping subdomains. FBPINNs are designed to address the spectral bias of neural networks by using separate input normalisation over each subdomain, and reduce the complexity of the underlying optimisation problem by using many smaller neural networks in a parallel divide-and-conquer approach. Our numerical experiments show that FBPINNs are effective in solving both small and larger, multi-scale problems, outperforming standard PINNs in both accuracy and computational resources required, potentially paving the way to the application of PINNs on large, real-world problems.

Authors (3)
  1. Ben Moseley (11 papers)
  2. Andrew Markham (94 papers)
  3. Tarje Nissen-Meyer (5 papers)
Citations (162)

Summary

Finite Basis Physics-Informed Neural Networks: A Scalable Domain Decomposition Approach for Solving Differential Equations

The paper introduces a novel approach for solving differential equations using neural networks, specifically addressing the scaling limitations of physics-informed neural networks (PINNs) with large domains and multi-scale solutions. The proposed method, termed Finite Basis Physics-Informed Neural Networks (FBPINNs), leverages domain decomposition inspired by the classical finite element method to overcome the challenges associated with conventional PINNs.
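For context, a standard PINN represents the whole solution with a single network and minimises the residual of the differential equation at a set of collocation points. The following is a minimal sketch of this baseline, assuming the simple 1D problem du/dx = cos(ωx) with u(0) = 0 as an illustration; all names and architectural choices here are ours, not the authors' implementation:

```python
# Minimal standard-PINN sketch for du/dx = cos(omega * x), u(0) = 0
# (exact solution: sin(omega * x) / omega). Illustrative only.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(1, 32, 32, 1)):
    """Random weights and biases for a small fully-connected tanh network."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (n_out, n_in)) / jnp.sqrt(n_in),
             jnp.zeros(n_out))
            for k, n_in, n_out in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Map a scalar input x to the scalar network output u(x)."""
    h = jnp.atleast_1d(x)
    for w, b in params[:-1]:
        h = jnp.tanh(w @ h + b)
    w, b = params[-1]
    return (w @ h + b)[0]

def pinn_loss(params, xs, omega=15.0):
    """Mean squared PDE residual at collocation points xs, plus a soft
    penalty enforcing the boundary condition u(0) = 0."""
    dudx = jax.vmap(jax.grad(lambda x: mlp(params, x)))(xs)
    residual = dudx - jnp.cos(omega * xs)
    return jnp.mean(residual ** 2) + mlp(params, 0.0) ** 2
```

As the domain grows or ω increases, this single-network optimisation becomes increasingly difficult, which is the failure mode FBPINNs target.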

Key Contributions

  1. Domain Decomposition with Neural Networks: FBPINNs divide the problem domain into small, overlapping subdomains and train one small neural network per subdomain in parallel. Each network learns a basis function with compact support on its subdomain, and the global solution is expressed as the sum of these learned basis functions, analogous to the finite element method's expansion over compactly supported basis functions (see the sketch after this list).
  2. Spectral Bias Mitigation: Domain decomposition, coupled with a separate input normalisation over each subdomain, counteracts the spectral bias of neural networks, i.e. their tendency to learn low-frequency components of a solution much faster than high-frequency ones, a common failure mode when scaling PINNs. Because each subdomain is small, a solution that is high-frequency over the global domain appears low-frequency within each normalised subdomain, giving a more tractable local learning problem.
  3. Parallel Training Regimen: FBPINNs train the subdomain networks in parallel, replacing one large optimisation problem with many smaller, easier ones. This divide-and-conquer strategy reduces both training time and total computation, which is particularly relevant for large domains and higher-dimensional problems.
  4. Algorithmic Implementation: The paper details a parallel training algorithm in which neighbouring subdomain networks exchange their outputs in the overlap regions. Continuity of the solution across subdomain interfaces is enforced by construction, through smooth, overlapping window functions that blend the subdomain networks, rather than through extra coupling terms in the loss function, in contrast to domain decomposition methods that penalise interface mismatch.
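
To make the ansatz concrete, the sketch below (reusing mlp and the imports from the earlier snippet) composes the global solution as a sum of window-weighted subdomain networks, each receiving an input normalised to its own subdomain. The specific window shape and the partition-of-unity normalisation are illustrative assumptions, not necessarily the exact functions used in the paper:

```python
# Hedged sketch of an FBPINN-style ansatz in 1D:
#   u(x) = sum_j w_j(x) * NN_j(x_j),
# where x_j is x normalised to [-1, 1] on subdomain j, and w_j is a smooth
# window with compact support on that (overlapping) subdomain.
import jax.numpy as jnp

def window(x, lo, hi):
    """Smooth bump vanishing at both edges of [lo, hi]; a squared-sine taper
    is one simple choice (an assumption, not the paper's exact form)."""
    t = jnp.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return jnp.sin(jnp.pi * t) ** 2

def fbpinn_solution(all_params, x, subdomains):
    """Sum of window-weighted subdomain networks. Continuity across
    interfaces holds by construction because every term is smooth and the
    windows overlap. Assumes x lies inside at least one subdomain."""
    ws, outs = [], []
    for params, (lo, hi) in zip(all_params, subdomains):
        x_norm = 2.0 * (x - lo) / (hi - lo) - 1.0   # subdomain-local input in [-1, 1]
        ws.append(window(x, lo, hi))
        outs.append(mlp(params, x_norm))            # mlp from the earlier sketch
    ws = jnp.stack(ws)
    return jnp.sum(ws / jnp.sum(ws) * jnp.stack(outs))  # windows normalised to sum to 1
```

For example, the interval [0, 1] might be covered by the overlapping subdomains [(0.0, 0.4), (0.3, 0.7), (0.6, 1.0)]. During training, gradients for network j arise only from collocation points where w_j is non-zero, which is what enables the divide-and-conquer parallelism described above.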

Numerical Results

The experiments demonstrate superior performance of FBPINNs over standard PINNs across a range of problem scales. Notably:

  • High-frequency and Large-scale Domains: FBPINNs solve problems with higher accuracy and lower computational load than standard PINNs, as evidenced by benchmarks involving high-frequency sinusoidal solutions and the wave equation in complex media.
  • Computational Efficiency: Because each subdomain only requires a small network, FBPINNs achieve large reductions in the number of training FLOPs (floating-point operations) compared to PINNs.

Implications and Future Directions

FBPINNs constitute a robust alternative to standard PINNs, with promising scalability for real-world applications in scientific fields that require computation-heavy differential equation solving. By addressing the scaling barrier of PINNs, FBPINNs may eventually achieve efficiency that rivals or complements classical finite difference and finite element methods.

Future studies could further optimise subdomain definition and network architecture, for example by adapting the subdomain granularity to the local solution, and examine scalability in higher-dimensional settings. Integrating techniques such as transfer learning could also improve efficiency when solving families of similar problems.

In conclusion, the introduction of FBPINNs marks a significant step towards leveraging machine learning for solving large-scale differential equations, potentially transforming computational strategies in scientific and engineering applications.