Best free knot linear spline approximation and its application to neural networks (2404.00008v1)
Abstract: The problem of fixed-knot approximation is convex, and several efficient approaches exist for solving it; however, when the knots joining the affine pieces are also variable, finding conditions for a best Chebyshev approximation remains an open problem. It has been observed previously that piecewise linear approximation with free knots is equivalent to neural network approximation with piecewise linear activation functions (for example, ReLU). In this paper, we demonstrate that in the case of one internal free knot, the problem of linear spline approximation can be reformulated as a mixed-integer linear programming problem and solved efficiently using, for example, a branch-and-bound type method. We also present a new sufficient optimality condition for a one-free-knot piecewise linear approximation. The results of numerical experiments are provided.
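To illustrate two points from the abstract, the sketch below shows (1) that a continuous piecewise linear function with one internal knot t can be written as a small ReLU model, f(x) = c0 + c1·x + c2·ReLU(x − t), and (2) that once the knot is fixed, the best Chebyshev (uniform) approximation is a convex problem, solvable as a linear program. This is a minimal illustration under my own assumptions (scipy-based LP, a crude grid search over candidate knots standing in for the paper's MILP/branch-and-bound treatment); it is not the authors' formulation or code.

```python
# Hedged sketch: fixed-knot Chebyshev fit as an LP, plus a grid search over the
# free knot. Names and formulation are illustrative assumptions, not the paper's.
import numpy as np
from scipy.optimize import linprog


def relu(u):
    return np.maximum(0.0, u)


def fixed_knot_chebyshev_fit(x, y, t):
    """Best uniform fit by f(x) = c0 + c1*x + c2*ReLU(x - t) with the knot t fixed.

    Decision variables (c0, c1, c2, z), where z bounds the maximum absolute residual:
        minimize z  subject to  -z <= c0 + c1*x_i + c2*ReLU(x_i - t) - y_i <= z.
    With t fixed this is a linear program (a convex problem).
    """
    basis = np.column_stack([np.ones_like(x), x, relu(x - t)])  # design matrix
    n, m = basis.shape
    cost = np.zeros(m + 1)        # variables ordered [c0, c1, c2, z]
    cost[-1] = 1.0                # minimize z only
    # basis @ c - z <= y   and   -basis @ c - z <= -y
    A_ub = np.block([[basis, -np.ones((n, 1))],
                     [-basis, -np.ones((n, 1))]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * m + [(0.0, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:m], res.x[-1]   # coefficients and Chebyshev error


if __name__ == "__main__":
    # Data from a smooth target; the free-knot problem also optimizes over t.
    x = np.linspace(0.0, 1.0, 200)
    y = np.sin(np.pi * x)
    # Grid search over candidate knots: a stand-in for the paper's MILP approach,
    # included only to show the outer (nonconvex) part of the free-knot problem.
    best = min((fixed_knot_chebyshev_fit(x, y, t) + (t,)
                for t in np.linspace(0.05, 0.95, 37)),
               key=lambda r: r[1])
    c, err, t = best
    print(f"knot t = {t:.3f}, coefficients = {c}, uniform error = {err:.4f}")
```

The inner LP is what makes the fixed-knot case tractable; the difficulty the paper addresses is handling the knot location itself as a variable, which the grid search above only approximates.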