
Backward Euler & Newton's Method for DAEs

Updated 30 January 2026
  • Backward Euler Discretization with Newton's Method is a numerical approach that robustly simulates stiff differential-algebraic systems by enforcing physical constraints.
  • It employs an implicit time-stepping scheme combined with an iterative Newton solver to ensure numerical stability and preserve energy-consistent invariants.
  • The method supports data-driven system identification in port-Hamiltonian neural networks, reliably recovering physical parameters even in noisy environments.

Backward Euler Discretization with Newton's Method is a numerical approach for simulating constrained dynamical systems formulated as differential-algebraic equations (DAEs), frequently appearing in physics-informed machine learning and port-Hamiltonian neural networks (pHNNs). This method is particularly suited for stiff and algebraically coupled models, such as those arising in electrical networks, mechanical systems, and multi-component structures. Recent developments incorporate structural guarantees from the port-Hamiltonian framework, enabling physically-consistent, data-driven identification and robust long-horizon simulation (Hagelaars et al., 23 Jan 2026).

1. Differential-Algebraic Port-Hamiltonian Systems

Physically-realistic network models—e.g., electrical circuits constrained by Kirchhoff’s laws—are naturally described by DAEs,

$$E\dot{x}(t) = (J - R)\,Q\,x(t) + G\,u(t), \qquad y(t) = G^\top Q\,x(t),$$

where $E, Q$ encode the energy metric and possible singularity (i.e., algebraic constraints), $J = -J^\top$ is the interconnection matrix (energy-conserving), $R = R^\top \succeq 0$ captures dissipation, and $G$ maps external inputs to energy ports. Algebraic constraints are induced by $E$ being rank-deficient, enforcing, for example, instantaneous relations among voltages and currents.
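
The structural properties above can be checked on a small example. The matrices below are illustrative placeholders (not taken from a specific circuit in the cited work); they only demonstrate the pH-DAE form $E\dot{x} = (J - R)Qx + Gu$ and its built-in guarantees:

```python
import numpy as np

# Minimal sketch of the linear pH-DAE structure E x' = (J - R) Q x + G u.
E = np.diag([1.0, 1.0, 0.0])            # rank-deficient: third equation is algebraic
J = np.array([[0.0, -1.0,  0.0],
              [1.0,  0.0, -1.0],
              [0.0,  1.0,  0.0]])       # skew-symmetric interconnection
R = np.diag([0.1, 0.0, 0.5])            # symmetric PSD dissipation
Q = np.eye(3)                           # energy metric (identity here)
G = np.array([[1.0], [0.0], [0.0]])     # input map to the energy port

# Properties the port-Hamiltonian structure guarantees by construction:
assert np.allclose(J, -J.T)                       # energy conservation
assert np.all(np.linalg.eigvalsh(R) >= -1e-12)    # dissipation: R >= 0
assert np.linalg.matrix_rank(E) < E.shape[0]      # algebraic constraint present
```

The rank deficiency of $E$ is exactly what turns the last row into an instantaneous algebraic relation rather than a differential equation.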

The port-Hamiltonian structure ensures that energy balance (dissipation inequality), compositionality, and passivity are preserved by construction for all parameter choices. Parameterization via neural networks—with structured symmetries and positive definiteness—enables robust data-driven identification of unknown or nonlinear subsystems (Neary et al., 2024, Hagelaars et al., 23 Jan 2026).

2. Backward Euler Discretization of DAEs

Simulating DAEs requires careful discretization to maintain numerical stability and consistency with algebraic constraints. Backward Euler is an implicit time-stepping scheme formulated as

$$E_\theta\, \frac{x_{n} - x_{n-1}}{h} = (J_\theta - R_\theta)\,Q_\theta\,x_{n} + G_\theta\,u_n,$$

where $h$ is the time-step. Unlike explicit methods, backward Euler robustly handles stiffness and the fast transients associated with singular $E$, naturally stabilizing the integration and honoring the instantaneous algebraic relations. The implicit nonlinear equation for $x_n$ cannot be solved directly in closed form when $E_\theta, J_\theta, R_\theta, G_\theta$ are state-dependent and/or neural network outputs.
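
As a concrete sketch (with illustrative matrices; in the learned setting these come from the pHNN), one backward Euler step can be written as a residual, and for a *linear* pH-DAE it reduces to a single linear solve:

```python
import numpy as np

# One backward Euler step of the pH-DAE posed as a residual r(x_n) = 0.
def be_residual(x_n, x_prev, u_n, h, E, J, R, Q, G):
    """r(x_n) = (E/h)(x_n - x_prev) - (J - R) Q x_n - G u_n."""
    return (E / h) @ (x_n - x_prev) - (J - R) @ Q @ x_n - G @ u_n

# Toy linear pH-DAE with a singular E (second equation is algebraic):
E = np.diag([1.0, 0.0])
J = np.array([[0.0, -1.0], [1.0, 0.0]])
R = np.diag([0.1, 0.2])
Q = np.eye(2)
G = np.array([[1.0], [0.0]])
h, x_prev, u_n = 0.01, np.array([1.0, 0.0]), np.array([0.5])

A = E / h - (J - R) @ Q                 # implicit step matrix
x_n = np.linalg.solve(A, (E / h) @ x_prev + G @ u_n)
assert np.allclose(be_residual(x_n, x_prev, u_n, h, E, J, R, Q, G), 0.0)
```

Note that the algebraic row (zero row of $E$) is enforced exactly at every step, which is the source of the method's constraint consistency.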

In modern implementations (see (Hagelaars et al., 23 Jan 2026)), each structured matrix is parameterized through unconstrained matrices or factors, so that constraints such as skew-symmetry, positive semi-definiteness, and the block-sparsity of the network topology are enforced by construction.

3. Newton's Method for Nonlinear Implicit Equations

Newton's method is applied at each time-step to solve the implicit nonlinear system defined by the backward Euler discretization,

$$r(x_n) := \frac{E_\theta}{h}\,x_n - \frac{E_\theta}{h}\,x_{n-1} - (J_\theta - R_\theta)\,Q_\theta\,x_n - G_\theta\,u_n = 0.$$

Iteratively, the update is

$$J_r(x_i)\,\Delta x_i = -r(x_i), \qquad x_{i+1} = x_i + \Delta x_i,$$

where $J_r(x)$ is the Jacobian of the residual with respect to $x_n$. Convergence is typically declared when $\|\Delta x_i\| < \epsilon$ for a chosen tolerance. All residual evaluations, Jacobian calculations, and matrix operations are performed in a differentiable (autograd-enabled) manner to support gradient-based optimization of the neural parameters.
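
The iteration above can be sketched as a generic Newton solver; in the pHNN setting the residual and its Jacobian would come from autograd, while here a hand-written toy residual with a known root stands in:

```python
import numpy as np

def newton_solve(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Newton iteration for r(x) = 0: solve J_r(x) dx = -r(x), update x += dx."""
    x = x0.copy()
    for _ in range(max_iter):
        dx = np.linalg.solve(jacobian(x), -residual(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:    # convergence: ||dx|| < eps
            break
    return x

# Toy nonlinear residual with known root x* = (1, 2):
r  = lambda x: np.array([x[0]**2 - 1.0, x[0] * x[1] - 2.0])
Jr = lambda x: np.array([[2 * x[0], 0.0], [x[1], x[0]]])
x_star = newton_solve(r, Jr, np.array([2.0, 3.0]))
assert np.allclose(x_star, [1.0, 2.0])
```

Starting sufficiently close to the solution (in practice, from the previous time-step's state), the iteration converges quadratically, so a handful of iterations per step usually suffices.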

This iterative nonlinear solver accommodates highly parameterized, state-dependent models, making the approach scalable to large systems with mixed algebraic and differential states. The algorithm is robust to noise in input-output data and does not require explicit knowledge of internal states (Hagelaars et al., 23 Jan 2026).

4. Data-Driven Identification and Loss Functions

In the context of port-Hamiltonian neural networks for system identification, only noisy measurements of the output $y$ and input $u$ are assumed available. An encoder network (e.g., SUBNET) reconstructs the initial state for each simulated trajectory segment, typically by embedding a short history of observed outputs and controls.
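
The encoder idea can be sketched as follows; this is a hypothetical stand-in (a single random linear layer with assumed window length and state dimension), whereas the SUBNET encoder is a trained neural network:

```python
import numpy as np

# Hypothetical encoder sketch: map a short window of past inputs/outputs
# to an initial-state estimate for a simulation segment.
def encode_initial_state(u_hist, y_hist, W):
    z = np.concatenate([np.ravel(u_hist), np.ravel(y_hist)])
    return W @ z                        # estimated x0 for the segment

rng = np.random.default_rng(0)
n_x, n_lag = 3, 5                       # state dim and window length (assumed)
W = rng.standard_normal((n_x, 2 * n_lag))   # would be learned jointly in practice
x0 = encode_initial_state(rng.standard_normal(n_lag),
                          rng.standard_normal(n_lag), W)
assert x0.shape == (n_x,)
```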

Given a mini-batch of input/output sequences, the loss is an output-error simulation criterion,

$$V(\theta, \eta) = \frac{1}{N} \sum_{k=1}^{N-1} \| y_k - \widehat{y}_k \|_2^2,$$

where $\widehat{y}_k$ is the simulated output of the DAE system integrated via backward Euler and Newton's method. Optimization jointly adjusts both the pHNN parameters $\theta$ and the encoder parameters $\eta$, relying on the differentiable structure of the DAE solver (including the Newton steps).
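
The criterion itself is a mean of squared output errors over a simulated segment; a minimal sketch (indexing conventions may differ from the paper's):

```python
import numpy as np

# Output-error simulation loss V = (1/N) * sum_k ||y_k - y_hat_k||^2,
# on arrays of shape (N, n_y).
def simulation_loss(y_meas, y_sim):
    return float(np.mean(np.sum((y_meas - y_sim) ** 2, axis=-1)))

y_meas = np.array([[1.0], [2.0], [3.0]])
y_sim  = np.array([[1.0], [2.5], [2.0]])
# per-step squared errors: 0.0, 0.25, 1.0  ->  mean over N = 3
assert np.isclose(simulation_loss(y_meas, y_sim), (0.0 + 0.25 + 1.0) / 3)
```

Because every step of the simulation (encoder, residual, Newton solve) is differentiable, gradients of this loss with respect to $\theta$ and $\eta$ can flow through the entire rollout.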

The approach exhibits noise robustness, as shown on electrical network benchmarks: the normalized RMS error (NRMS) scales with the noise amplitude across SNR levels, and the physical parameters recovered from the learned $E_\theta$, $R_\theta$ factors match ground truth to within a small percentage error (Hagelaars et al., 23 Jan 2026).

5. Structural Guarantees and Practical Benefits

The method enforces structural invariants of port-Hamiltonian systems via specific matrix factorizations:

  • $J_\theta$ via skew-symmetrization: $J_\theta = \frac{1}{2}(M_J - M_J^\top)$,
  • $R_\theta$, $E_\theta$ via Gram–Cholesky factorization: $R_\theta = L_R L_R^\top$, $E_\theta = L_E L_E^\top$,
  • $Q_\theta$ typically set to the identity for identification.
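
These unconstrained-to-structured reparameterizations are straightforward to sketch (factor shapes below are illustrative):

```python
import numpy as np

def skew(M):            # J_theta = (M - M^T)/2 is skew-symmetric for any square M
    return 0.5 * (M - M.T)

def gram(L):            # R_theta = L L^T is symmetric PSD for any L
    return L @ L.T

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))         # unconstrained parameters
L = rng.standard_normal((4, 4))
J_theta, R_theta = skew(M), gram(L)

# The structure holds for *any* value of the unconstrained parameters:
assert np.allclose(J_theta, -J_theta.T)                 # skew-symmetry
assert np.all(np.linalg.eigvalsh(R_theta) >= -1e-10)    # R_theta >= 0
```

Because the constraints hold identically in the unconstrained parameters, standard gradient descent can be applied without projections or penalty terms.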

These guarantee that the learned model remains passive, energy-consistent, and physically interpretable throughout training. The compositional property allows modular construction and assembly of larger systems from individually trained subsystems, facilitating scalable modeling in domains such as microgrid simulation, circuit design, and composite multiphysics systems (Neary et al., 2024).

The backward Euler–Newton scheme bypasses the numerical instability and constraint violation issues common in explicit or unconstrained ODE/NN approaches. All steps remain amenable to automatic differentiation for applied machine learning, including gradient-based trajectory loss minimization.

6. Limitations and Research Directions

While highly effective for index-one DAEs (i.e., systems whose constraint Jacobian $\nabla_{x_a}\Phi$ is invertible), backward Euler with Newton's method is not directly applicable to higher-index systems without further regularization or index-reduction techniques. Computational cost scales with the size and complexity of the DAE, and the repeated evaluation of neural networks and their Jacobians may require custom implementations for very large-scale systems.

Current implementations focus on quadratic Hamiltonians for efficiency; generalization to nonlinear, state-dependent $H_\theta(x)$ would increase expressivity but potentially exacerbate identifiability and numerical challenges. Efficient solvers, parallel Newton-step computation, and specialized differentiable matrix algebra routines are active areas of development.

Targeted future directions include extending backward Euler–Newton capabilities to stochastic pHNNs, uncertainty quantification, and fully modular component-wise learning, as well as integration into real-time control workflows and energy-preserving simulation architectures (Neary et al., 2024).

7. Representative Application: DC Power Network

An example system is a DC generator–line–load network, parametrized by inductance $L$, capacitances $C_1$, $C_2$, and resistances $R_L$, $R_G$, $R_R$; the input is $E_G$ and the output is $I_G$. The pH-DAE form incorporates the algebraic current–voltage constraints induced by the interconnection, and unknown physical parameters are identified from observed noisy measurements of $I_G$ under multisine excitation. The backward Euler–Newton scheme consistently reconstructs the dynamic and algebraic variables, achieves errors consistent with the measurement noise, and reliably recovers the true parameter values across noise levels.

In summary, backward Euler discretization combined with Newton's method establishes a principled, structure-preserving, and differentiable simulation scheme for port-Hamiltonian differential-algebraic neural network models, underpinning robust identification, simulation, and compositional learning for constrained physical systems (Hagelaars et al., 23 Jan 2026).
