Backward Euler & Newton's Method for DAEs
- Backward Euler Discretization with Newton's Method is a numerical approach that robustly simulates stiff differential-algebraic systems by enforcing physical constraints.
- It employs an implicit time-stepping scheme combined with an iterative Newton solver to ensure numerical stability and preserve energy-consistent invariants.
- The method supports data-driven system identification in port-Hamiltonian neural networks, reliably recovering physical parameters even in noisy environments.
Backward Euler Discretization with Newton's Method is a numerical approach for simulating constrained dynamical systems formulated as differential-algebraic equations (DAEs), frequently appearing in physics-informed machine learning and port-Hamiltonian neural networks (pHNNs). This method is particularly suited for stiff and algebraically coupled models, such as those arising in electrical networks, mechanical systems, and multi-component structures. Recent developments incorporate structural guarantees from the port-Hamiltonian framework, enabling physically-consistent, data-driven identification and robust long-horizon simulation (Hagelaars et al., 23 Jan 2026).
1. Differential-Algebraic Port-Hamiltonian Systems
Physically-realistic network models—e.g., electrical circuits constrained by Kirchhoff’s laws—are naturally described by pH-DAEs of the form

$$E\,\dot{x} = (J - R)\,\nabla H(x) + B\,u, \qquad y = B^\top \nabla H(x),$$

where $E$ encodes the energy metric and possible singularity (i.e., algebraic constraints), $J = -J^\top$ is the interconnection matrix (energy-conserving), $R \succeq 0$ captures dissipation, and $B$ maps external inputs to energy ports. For the quadratic Hamiltonians used in identification, $H(x) = \tfrac{1}{2}x^\top Q x$ and $\nabla H(x) = Qx$. Algebraic constraints are induced by $E$ being rank-deficient, enforcing, for example, instantaneous relations among voltages and currents.
The port-Hamiltonian structure ensures that energy balance (dissipation inequality), compositionality, and passivity are preserved by construction for all parameter choices. Parameterization via neural networks—with structured symmetries and positive definiteness—enables robust data-driven identification of unknown or nonlinear subsystems (Neary et al., 2024, Hagelaars et al., 23 Jan 2026).
2. Backward Euler Discretization of DAEs
Simulating DAEs requires careful discretization to maintain numerical stability and consistency with algebraic constraints. Backward Euler is an implicit time-stepping scheme formulated as

$$E\,\frac{x_{k+1} - x_k}{\Delta t} = (J - R)\,\nabla H(x_{k+1}) + B\,u_{k+1},$$

where $\Delta t$ is the time-step. Unlike explicit methods, backward Euler robustly handles stiffness and the fast transients associated with singular $E$, naturally stabilizing the integration and honoring the instantaneous algebraic relations. The implicit nonlinear equation for $x_{k+1}$ cannot be solved directly in closed form when $J$, $R$, or $\nabla H$ are state-dependent and/or neural network outputs.
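For the quadratic-Hamiltonian case, $\nabla H(x) = Qx$ and the implicit equation is linear in $x_{k+1}$, so one backward Euler step reduces to a single linear solve. A minimal numpy sketch, assuming the form $E\dot{x} = (J-R)Qx + Bu$; the specific matrix values are illustrative, not from the paper:

```python
import numpy as np

def backward_euler_step(E, J, R, Q, B, x_k, u_next, dt):
    """One backward Euler step of E x' = (J - R) Q x + B u.

    For a quadratic Hamiltonian H(x) = 0.5 x^T Q x the implicit equation
    is linear in x_{k+1}, so a single linear solve suffices:
        (E - dt (J - R) Q) x_{k+1} = E x_k + dt B u_{k+1}.
    """
    A = E - dt * (J - R) @ Q
    rhs = E @ x_k + dt * (B @ u_next)
    return np.linalg.solve(A, rhs)

# Illustrative 2-state example: rank-deficient E encodes one algebraic constraint.
E = np.diag([1.0, 0.0])                       # singular energy metric
J = np.array([[0.0, 1.0], [-1.0, 0.0]])       # skew-symmetric interconnection
R = np.diag([0.1, 1.0])                       # PSD dissipation
Q = np.eye(2)                                 # quadratic Hamiltonian metric
B = np.array([[1.0], [0.0]])
x_k = np.array([1.0, 0.0])
x_next = backward_euler_step(E, J, R, Q, B, x_k, u_next=np.array([0.0]), dt=0.01)
```

Note that the zero row of $E$ turns the corresponding row of the linear system into the algebraic constraint itself, which the implicit step therefore satisfies exactly at every time-step.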
In modern implementations (see (Hagelaars et al., 23 Jan 2026)), each structured matrix (or matrix factor) is parameterized via unconstrained matrices/factors, enforcing constraints such as skew-symmetry, positive semi-definiteness, and block-sparsity reflecting the network topology by construction.
3. Newton's Method for Nonlinear Implicit Equations
Newton's method is applied at each time-step to solve the implicit nonlinear system defined by the backward Euler discretization, written as the residual

$$r(x_{k+1}) = E\,(x_{k+1} - x_k) - \Delta t\,\big[(J - R)\,\nabla H(x_{k+1}) + B\,u_{k+1}\big] = 0.$$

Iteratively, the update is

$$x_{k+1}^{(i+1)} = x_{k+1}^{(i)} - \left[\frac{\partial r}{\partial x}\big(x_{k+1}^{(i)}\big)\right]^{-1} r\big(x_{k+1}^{(i)}\big),$$

where $\partial r/\partial x$ is the Jacobian of the residual with respect to $x_{k+1}$. Convergence is typically declared when $\|r(x_{k+1}^{(i)})\| < \varepsilon$ for a chosen tolerance $\varepsilon$. All residual evaluations, Jacobian calculations, and matrix operations are performed in a differentiable (autograd-enabled) manner to support gradient-based optimization of neural parameters.
This iterative nonlinear solver accommodates highly parameterized, state-dependent models, making the approach scalable to large systems with mixed algebraic and differential states. The algorithm is robust to noise in input-output data and does not require explicit knowledge of internal states (Hagelaars et al., 23 Jan 2026).
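The Newton loop can be sketched in a few lines, here with a finite-difference Jacobian standing in for autograd; the `gradH` function, tolerances, and matrix values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def newton_step(E, J, R, B, gradH, x_k, u_next, dt, tol=1e-9, max_iter=50):
    """Solve r(x) = E (x - x_k) - dt [(J - R) gradH(x) + B u] = 0 for x_{k+1}."""
    def residual(x):
        return E @ (x - x_k) - dt * ((J - R) @ gradH(x) + B @ u_next)

    x = x_k.astype(float).copy()          # warm-start from the previous state
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        # Finite-difference Jacobian dr/dx (an autograd Jacobian in a pHNN).
        eps, n = 1e-7, x.size
        Jac = np.column_stack(
            [(residual(x + eps * np.eye(n)[:, i]) - r) / eps for i in range(n)]
        )
        x = x - np.linalg.solve(Jac, r)   # Newton update
    return x

# Illustrative 2-state DAE: singular E, mildly nonlinear Hamiltonian gradient.
E = np.diag([1.0, 0.0])
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = np.diag([0.1, 1.0])
B = np.array([[1.0], [0.0]])
gradH = lambda x: x + 0.1 * x**3          # assumed nonlinear gradient, for illustration
x_next = newton_step(E, J, R, B, gradH, np.array([1.0, 0.0]), np.array([0.0]), dt=0.01)
```

Warm-starting each Newton solve from the previous state keeps the iteration count low in practice, since consecutive states differ by $O(\Delta t)$.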
4. Data-Driven Identification and Loss Functions
In the context of port-Hamiltonian neural networks for system identification, only noisy output ($y$) and input ($u$) measurements are assumed available. An encoder network (as in the SUBNET framework) is used to reconstruct the initial state for each simulated trajectory segment, typically by embedding a short history of observed outputs and inputs.
Given a mini-batch of input/output sequences, the loss is an output-error simulation criterion,

$$\mathcal{L}(\theta, \phi) = \frac{1}{N} \sum_{k=1}^{N} \big\| y_k - \hat{y}_k(\theta, \phi) \big\|^2,$$

where $\hat{y}_k$ is the simulated output from the DAE system integrated via backward Euler and Newton's method. Optimization jointly adjusts both the pHNN parameters $\theta$ and encoder parameters $\phi$, relying on the differentiable structure of the DAE solver (including Newton steps).
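The rollout-and-compare structure of this criterion can be sketched for a single segment; the `step` function here is a stand-in for the backward Euler–Newton solver, and the linear output map `C` and toy data are illustrative assumptions:

```python
import numpy as np

def simulation_loss(step, C, x0, u_seq, y_seq):
    """Output-error loss for one trajectory segment: roll the implicit
    solver forward from an encoder-estimated initial state x0 and
    compare simulated outputs C @ x_k against measured outputs y_k."""
    x, total = x0, 0.0
    for u_k, y_k in zip(u_seq, y_seq):
        x = step(x, u_k)                  # one backward Euler-Newton step
        total += np.sum((C @ x - y_k) ** 2)
    return total / len(y_seq)

# Toy check with a stand-in linear step and noise-free data.
C = np.array([[1.0, 0.0]])
step = lambda x, u: 0.95 * x + np.array([0.1, 0.0]) * u
x0 = np.array([1.0, 0.0])
u_seq = np.zeros(5)
y_seq = [np.array([0.95 ** k]) for k in range(1, 6)]
loss = simulation_loss(step, C, x0, u_seq, y_seq)
```

Because every operation in the rollout (including the Newton iterations inside `step`) is differentiable, the gradient of this loss with respect to both model and encoder parameters is available by backpropagation.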
The approach exhibits noise-robustness, as shown in electrical network benchmarks: the normalized RMS error (NRMS) of the simulated output scales proportionally with the measurement-noise amplitude across SNR levels, and physical parameters recovered via the learned matrix factors match ground truth to within a small percentage error (Hagelaars et al., 23 Jan 2026).
5. Structural Guarantees and Practical Benefits
The method enforces structural invariants of port-Hamiltonian systems via specific matrix factorizations:
- $J$ via skew-symmetrization of an unconstrained matrix $A$: $J = A - A^\top$,
- $R \succeq 0$ and $Q \succeq 0$ via Gram–Cholesky factorization: $R = L_R L_R^\top$, $Q = L_Q L_Q^\top$,
- the input matrix $B$ typically set to identity for identification.
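These parameterizations can be realized directly from unconstrained tensors; a minimal numpy sketch, where the names of the unconstrained factors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Unconstrained factors: these are the quantities actually trained.
A = rng.standard_normal((n, n))
L_R = rng.standard_normal((n, n))
L_Q = rng.standard_normal((n, n))

J = A - A.T        # skew-symmetric by construction: J^T = -J
R = L_R @ L_R.T    # Gram factorization => symmetric positive semi-definite
Q = L_Q @ L_Q.T    # symmetric PSD energy metric
```

Since the constraints hold identically for any values of the unconstrained factors, gradient descent can update them freely without ever leaving the set of valid port-Hamiltonian models.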
These guarantee that the learned model remains passive, energy-consistent, and physically interpretable throughout training. The compositional property allows modular construction and assembly of larger systems from individually trained subsystems, facilitating scalable modeling in domains such as microgrid simulation, circuit design, and composite multiphysics systems (Neary et al., 2024).
The backward Euler–Newton scheme bypasses the numerical instability and constraint violation issues common in explicit or unconstrained ODE/NN approaches. All steps remain amenable to automatic differentiation for applied machine learning, including gradient-based trajectory loss minimization.
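The stability claim can be illustrated by tracking the stored energy $H(x_k) = \tfrac{1}{2} x_k^\top Q x_k$ along an unforced backward Euler trajectory; the system matrices below are illustrative, not from the paper:

```python
import numpy as np

# Linear pH system with u = 0: E x' = (J - R) Q x, H(x) = 0.5 x^T Q x.
E = np.eye(2)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = 0.5 * np.eye(2)
Q = np.eye(2)
dt = 0.1

# Backward Euler propagator: x_{k+1} = (E - dt (J - R) Q)^{-1} E x_k.
A = np.linalg.solve(E - dt * (J - R) @ Q, E)

x = np.array([1.0, 1.0])
energies = []
for _ in range(100):
    x = A @ x
    energies.append(0.5 * x @ Q @ x)

# The stored energy decreases monotonically: backward Euler's numerical
# dissipation reinforces passivity rather than violating it.
```

An explicit Euler step with the same $\Delta t$ adds energy for oscillatory modes and can diverge on stiff problems, which is precisely the failure mode the implicit scheme avoids.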
6. Limitations and Research Directions
While highly effective for index-one DAEs (i.e., systems where the constraint Jacobian is invertible), backward Euler with Newton's method is not directly applicable to higher-index systems without further regularization or index-reduction techniques. Computational cost scales with the size and complexity of the DAE, and the repeated evaluation of neural networks and their Jacobians may require custom implementations for very large-scale systems.
Current implementations focus on quadratic Hamiltonians for efficiency; generalization to nonlinear, state-dependent Hamiltonians $H(x)$ would increase expressivity but potentially exacerbate identifiability and numerical challenges. Efficient solvers, parallel Newton-step computation, and specialized differentiable matrix-algebra routines are active areas of development.
Extensions of backward Euler–Newton capabilities to stochastic pHNNs, uncertainty quantification, and fully modular component-wise learning—alongside integration into real-time control workflows and energy-preserving simulation architectures—are targeted future directions (Neary et al., 2024).
7. Representative Application: DC Power Network
An example system is a DC generator–line–load network, parametrized by an inductance $L$, capacitances $C_1$, $C_2$, and resistances $R_1$, $R_2$, $R_3$; the input is $u$ and the output is $y$. The pH-DAE form incorporates the algebraic current–voltage constraints induced by interconnection, and unknown physical parameters are identified from observed noisy measurements of $u$ and $y$ under multisine excitation. The backward Euler–Newton scheme consistently reconstructs the dynamic and algebraic variables, achieves errors consistent with measurement noise, and reliably recovers true parameter values across noise levels.
In summary, backward Euler discretization combined with Newton's method establishes a principled, structure-preserving, and differentiable simulation scheme for port-Hamiltonian differential-algebraic neural network models, underpinning robust identification, simulation, and compositional learning for constrained physical systems (Hagelaars et al., 23 Jan 2026).