Generalization of free-boundary deep learning to higher dimensions

Determine whether the deep learning approach that jointly parameterizes the value function and the free boundary and incorporates free-boundary conditions into the loss function (Wang and Perdikaris, 2021) extends to higher-dimensional variational inequalities (dimensions d ≥ 3). Specifically, ascertain conditions under which this free-boundary parameterization remains valid and scalable when the free boundary’s dimension increases and the network architecture must adapt to higher-dimensional domains.
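For concreteness, the composite loss in such a scheme can be sketched in the Stefan-problem setting as follows. This is an illustrative form with penalty weights \(\lambda_i\), not the exact formulation of Wang and Perdikaris (2021):

\[
\mathcal{L}(\theta,\phi)=\lambda_1\big\|\partial_t u_\theta-\Delta u_\theta\big\|^2_{L^2(\Omega_\phi)}+\lambda_2\big\|u_\theta\big\|^2_{L^2(\Gamma_\phi)}+\lambda_3\big\|V_\phi+\partial_n u_\theta\big\|^2_{L^2(\Gamma_\phi)}+\lambda_4\big\|u_\theta-g\big\|^2_{L^2(\partial_p\Omega)},
\]

where \(u_\theta\) is the value network, \(\Gamma_\phi\) is the free boundary parameterized by a second network (for instance as a graph \(x_d=s_\phi(t,x_1,\dots,x_{d-1})\)), \(\Omega_\phi\) is the region it bounds, \(V_\phi\) its normal velocity, and \(g\) the prescribed initial and boundary data. For \(d\ge 3\) the boundary network must represent an evolving \((d-1)\)-dimensional hypersurface, which is precisely where the validity and scalability questions arise.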

Background

The paper reviews three deep learning approaches to optimal stopping: approximating the solution of the variational inequality, parameterizing the optimal stopping time, and approximating the free boundary directly. Wang and Perdikaris (2021) exemplify the third approach: they jointly parameterize the value function and the free boundary, enforce the free-boundary conditions within the loss function, and demonstrate practical effectiveness on two-dimensional Stefan problems.
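To make the joint parameterization concrete, the following is a minimal sketch for a one-phase 1D Stefan problem (u_t = u_xx on 0 < x < s(t), with u(t, s(t)) = 0 and s'(t) = -u_x(t, s(t))). The architecture, sampling scheme, and loss weights are illustrative assumptions, not the choices made by Wang and Perdikaris (2021).

```python
import torch
import torch.nn as nn

# Two networks trained jointly: u_net for the value/temperature, s_net for the
# free-boundary position; the free-boundary conditions enter the loss as penalties.

def mlp(in_dim, out_dim, width=64, depth=3):
    layers, dim = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(dim, width), nn.Tanh()]
        dim = width
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

u_net = mlp(2, 1)   # value/temperature u_theta(t, x)
s_net = mlp(1, 1)   # free-boundary position s_phi(t)
opt = torch.optim.Adam(list(u_net.parameters()) + list(s_net.parameters()), lr=1e-3)

def d(y, x):
    # First derivative of y w.r.t. x via autograd, keeping the graph for higher derivatives.
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]

for step in range(2000):
    # Interior collocation points: sample x in (0, s(t)) with s detached,
    # so the PDE residual constrains only u_net.
    t = torch.rand(256, 1, requires_grad=True)
    z = torch.rand(256, 1)
    x = (z * s_net(t).detach()).requires_grad_(True)
    u = u_net(torch.cat([t, x], dim=1))
    pde_res = d(u, t) - d(d(u, x), x)            # u_t - u_xx

    # Free-boundary collocation points: enforce u = 0 and s'(t) + u_x = 0 at x = s(t),
    # coupling the two networks through the loss.
    tb = torch.rand(128, 1, requires_grad=True)
    sb = s_net(tb)
    ub = u_net(torch.cat([tb, sb], dim=1))
    fb_res = (ub ** 2).mean() + ((d(sb, tb) + d(ub, sb)) ** 2).mean()

    # Terms for initial and fixed-boundary data would be added in the same way (omitted).
    loss = (pde_res ** 2).mean() + fb_res
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In higher dimensions the scalar boundary network s_phi(t) would have to become a map of t and d-1 spatial coordinates (or some other surface representation), and sampling on the moving boundary becomes a d-dependent design choice; these are the architectural changes the open question refers to.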

The authors explicitly note gaps in this line of work: there is no convergence analysis, and it is uncertain whether the method scales to higher dimensions, where the free boundary itself becomes higher-dimensional and the network architecture may need to change accordingly. This raises a concrete unresolved question about extending the method to d ≥ 3 and establishing conditions under which it remains sound.

References

However, there is no convergence analysis, and it is not clear if it can be generalized for higher dimensions as the dimension of the free boundary would also increase and the framework of the network needs to change accordingly.

Neural Network Convergence for Variational Inequalities (arXiv:2509.26535, Zhao et al., 30 Sep 2025), Section 1, Introduction