Solving High-Dimensional PDEs Using Linearized Neural Networks
Abstract: Linearized shallow neural networks, constructed by fixing the hidden-layer parameters, have recently shown strong performance in solving partial differential equations (PDEs). Such models, widely used in the random feature method (RFM) and extreme learning machines (ELM), transform network training into a linear least-squares problem. In this paper, we conduct a numerical study of the variational (Galerkin) and collocation formulations for these linearized networks. Our numerical results reveal that, in the variational formulation, the associated linear systems are severely ill-conditioned and constitute the primary computational bottleneck when scaling up the network size, even when direct solvers are employed. In contrast, collocation methods combined with robust least-squares solvers exhibit better numerical stability and achieve higher accuracy as the number of neurons increases. This behavior is consistently observed for both ReLU$^k$ and $\tanh$ activations, with $\tanh$ networks exhibiting even worse conditioning. Furthermore, we demonstrate that random sampling of the hidden-layer parameters, commonly used in RFM and ELM, is not necessary for achieving high accuracy. For ReLU$^k$ activations, this follows from existing theory and is verified numerically in this paper, while for $\tanh$ activations, we introduce two deterministic schemes that achieve comparable accuracy.
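To make the setup concrete, below is a minimal illustrative sketch (not the paper's code) of the collocation formulation for a linearized shallow network with fixed hidden-layer parameters, applied to a 1D Poisson problem. All names and parameter choices (number of neurons, sampling ranges, collocation grid) are assumptions for illustration; the hidden-layer parameters are drawn at random here, as in RFM/ELM, and only the output weights are obtained from a linear least-squares solve.

```python
# Illustrative sketch: collocation-based random feature method for
# -u'' = f on (0, 1) with u(0) = u(1) = 0. The hidden-layer parameters
# (w_j, b_j) are fixed, so only the output weights c_j are trained by
# solving a linear least-squares problem. Parameter choices are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Manufactured solution u(x) = sin(pi x), so f(x) = pi^2 sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)

M = 200                       # number of neurons (random features)
w = rng.uniform(-10, 10, M)   # fixed hidden-layer weights
b = rng.uniform(-10, 10, M)   # fixed hidden-layer biases

# Feature phi_j(x) = tanh(w_j x + b_j); its second derivative in x is
# phi_j''(x) = -2 w_j^2 tanh(z) (1 - tanh(z)^2), with z = w_j x + b_j.
def phi(x):
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * w**2 * t * (1.0 - t**2)

# Collocation points in the interior plus the two boundary points.
x_int = np.linspace(0.0, 1.0, 400)[1:-1]
x_bc = np.array([0.0, 1.0])

# Stack PDE residual rows (-u'' = f) and boundary rows (u = 0), then
# solve the (typically ill-conditioned) least-squares problem.
A = np.vstack([-phi_xx(x_int), phi(x_bc)])
rhs = np.concatenate([f(x_int), np.zeros(2)])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Evaluate the trained linearized network and report the error.
x_test = np.linspace(0.0, 1.0, 1000)
u_nn = phi(x_test) @ c
print("max error:", np.max(np.abs(u_nn - u_exact(x_test))))
```

In this sketch the least-squares solve plays the role of the "robust least-squares solver" discussed in the abstract; a variational (Galerkin) formulation would instead assemble a Gram-type system from integrals of the features, which is where the severe ill-conditioning reported in the paper arises.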