A framework of discontinuous Galerkin neural networks for iteratively approximating residuals (2511.06349v1)
Abstract: We propose an abstract discontinuous Galerkin neural network (DGNN) framework for analyzing the convergence of least-squares methods based on residual minimization when the feasible solutions are neural networks. Within this framework, we define a quadratic loss functional, as in the least-squares method with $h$-refinement, and introduce new discretization sets spanned by element-wise neural network functions. The neural network approximate solution is recursively supplemented by solving a sequence of quasi-minimization problems associated with the underlying loss functionals and the adaptively augmented discontinuous neural network sets, without assuming boundedness of the neural network parameters. We further propose a discontinuous Galerkin Trefftz neural network (DGTNN) discretization with only a single hidden layer to reduce computational costs. Moreover, we design a template, based on the models considered, for initializing the nonlinear weights. Numerical experiments confirm that, compared to existing PINN algorithms, the proposed DGNN method with one or two hidden layers improves the relative $L^2$ error by at least one order of magnitude at low computational cost.
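The abstract's core idea, recursively supplementing an approximate solution by solving least-squares quasi-minimization problems over freshly augmented single-hidden-layer network sets, can be illustrated with a minimal sketch. This is not the paper's algorithm: it uses a 1D Poisson model problem, random (rather than template-initialized) tanh features with only the linear output weights solved for, and an assumed boundary penalty weight `beta`; all function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model problem: -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with f chosen so the exact solution is u(x) = sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)

x = np.linspace(0.0, 1.0, 101)    # interior collocation points
xb = np.array([0.0, 1.0])         # boundary points
beta = 10.0                       # boundary penalty weight (assumed)

def features(n):
    """Random single-hidden-layer tanh features phi_j(x) = tanh(w_j x + b_j)."""
    w = rng.normal(0.0, 4.0, n)
    b = rng.uniform(-4.0, 4.0, n)
    phi = lambda t: np.tanh(np.outer(t, w) + b)  # feature values
    def phi_xx(t):
        # d^2/dt^2 tanh(w t + b) = -2 tanh (1 - tanh^2) w^2
        s = np.tanh(np.outer(t, w) + b)
        return -2.0 * s * (1.0 - s**2) * w**2
    return phi, phi_xx

def lsq_stage(rhs_int, rhs_bc):
    """One quasi-minimization stage: fit the current residual by linear
    least squares over a fresh set of random tanh features."""
    phi, phi_xx = features(30)
    A = np.vstack([-phi_xx(x), beta * phi(xb)])  # PDE rows + penalized BC rows
    rhs = np.concatenate([rhs_int, beta * rhs_bc])
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return phi, phi_xx, c

# Stage 1: approximate u directly from the data (f, homogeneous BCs).
rhs_int, rhs_bc = f(x), np.zeros(2)
p1, p1xx, c1 = lsq_stage(rhs_int, rhs_bc)
u1 = lambda t: p1(t) @ c1
r1_int = rhs_int + p1xx(x) @ c1   # PDE residual f - (-u1'')
r1_bc = rhs_bc - u1(xb)           # boundary residual

# Stage 2: fit a correction network to the stage-1 residual and add it,
# mirroring the recursive supplementation of the approximate solution.
p2, p2xx, c2 = lsq_stage(r1_int, r1_bc)
u2 = lambda t: u1(t) + p2(t) @ c2
r2_int = r1_int + p2xx(x) @ c2
r2_bc = r1_bc - p2(xb) @ c2
```

Because each stage solves a least-squares problem in which the zero correction is feasible, the combined (penalized) residual norm is non-increasing across stages, which is the mechanism the convergence analysis formalizes.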