Identify weaker-than-continuity conditions ensuring non-universality for deep CVNNs

Determine natural assumptions on the activation function σ: C → C that are strictly weaker than continuity and under which the necessary (non-universality) direction of the deep-network result still holds. Specifically, prove that if σ satisfies those assumptions and either (i) coincides almost everywhere with a polynomial p(z, z̄), where p ∈ C[X, Y], or (ii) coincides almost everywhere with an entire holomorphic function g or with its complex conjugate ḡ, then for every input dimension d ∈ N and every depth L ∈ N the class NN_{σ,L}^d of complex-valued feedforward neural networks fails to have the universal approximation property.
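
In symbols (a sketch only, writing NN_{σ,L}^d for the paper's class of depth-L networks on C^d and taking the universal approximation property to mean uniform approximation of continuous functions on compact subsets of C^d), the implication to be proved is

\[
(\mathrm{H}) \;\wedge\; \Bigl( \exists\, p \in \mathbb{C}[X,Y]:\ \sigma(z) = p(z,\bar z)\ \text{for a.e. } z \in \mathbb{C}
\quad\text{or}\quad \exists\, g\ \text{entire}:\ \sigma = g\ \text{a.e. or } \sigma = \overline{g}\ \text{a.e.} \Bigr)
\;\Longrightarrow\;
\forall\, d, L \in \mathbb{N}:\ \mathrm{NN}_{\sigma,L}^{d}\ \text{is not universal},
\]

where (H) stands for the sought condition on σ that is strictly weaker than continuity.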

Background

The paper establishes a complete characterization of the activation functions σ: C → C for which complex-valued neural networks have the universal approximation property. For deep networks (L ≥ 2), universality holds provided σ does not coincide almost everywhere with a polynomial in (z, z̄) and does not coincide almost everywhere with a holomorphic or antiholomorphic function; the converse (necessary) direction is proved only under the additional assumption that σ is continuous.
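
Paraphrased in this notation (a sketch of the statement, not a verbatim quotation of the paper's theorem), the deep-network characterization for σ in the admissible class M, L ≥ 2, and d ∈ N reads:

\[
\begin{aligned}
&\text{sufficiency:} && \sigma\ \text{not a.e. a polynomial in } (z,\bar z)\ \text{and not a.e. holomorphic or antiholomorphic}
\;\Longrightarrow\; \mathrm{NN}_{\sigma,L}^{d}\ \text{is universal};\\
&\text{necessity (σ continuous):} && \sigma\ \text{a.e. such a polynomial, or a.e. holomorphic or antiholomorphic}
\;\Longrightarrow\; \mathrm{NN}_{\sigma,L}^{d}\ \text{is not universal}.
\end{aligned}
\]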

A remark following the deep-network theorem explains that continuity enters only in the necessary direction and that its role there is not a mere proof artifact: a discontinuous σ ∈ M can coincide almost everywhere with a polynomial p(z, z̄) and yet yield universal deep networks (Example 5.11). This motivates the open problem of identifying natural regularity conditions, strictly weaker than continuity, under which the non-universality conclusion of the necessary direction still holds.
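
Formally, the remark exhibits a counterexample showing that, without continuity, almost-everywhere coincidence with a polynomial does not force non-universality (the precise range of d and L covered by Example 5.11 should be checked against the paper):

\[
\exists\, \sigma \in M\ \text{discontinuous},\ \exists\, p \in \mathbb{C}[X,Y]:\quad
\sigma(z) = p(z,\bar z)\ \text{for a.e. } z \in \mathbb{C},
\quad\text{yet}\quad
\mathrm{NN}_{\sigma,L}^{d}\ \text{is universal for } L \ge 2.
\]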

References

We leave it as future work to determine natural conditions on σ that are weaker than continuity, but under which a necessary condition as in the above theorem still holds.

Voigtlaender, The universal approximation theorem for complex-valued neural networks, arXiv:2012.03351 (2020); remark following Theorem (DeepUniversalApproximationIntroduction), Subsection 1.1, "Our results in a nutshell".