
Holomorphic Phasor Activation

Updated 1 February 2026
  • Holomorphic phasor activation is a complex-analytic function that modulates its input by inverting magnitude and reversing or extracting phase.
  • It enhances neural network performance through stable gradients, efficient signal modulation, and improved data extrapolation capabilities.
  • Architectures like CauchyNet and Blaschke networks leverage these activations for rapid convergence, compact representation, and robust generalization.

Holomorphic phasor activation refers to analytic (holomorphic) activation functions in neural networks where the activation depends on the phase (argument) or inversion of complex inputs. This concept is grounded in complex analysis, Hardy space theory, and rational kernel methods. Architectures employing holomorphic phasor activations—including CauchyNet and deep phasor networks—exploit unique properties of analytic maps (such as phase inversion, amplitude modulation, and stability under composition) to outperform standard real-valued or non-holomorphic activations in signal representation, data efficiency, and learnability.

1. Holomorphic Phasor Activations: Definitions and Mathematical Foundations

A holomorphic phasor activation is an elementwise transformation $\phi:\mathbb{C} \rightarrow \mathbb{C}$ or $\phi:\mathbb{C} \rightarrow \mathbb{R}$ which is holomorphic (complex-analytic) on its domain. Two canonical constructions appear in recent literature:

  • CauchyNet inversion activation: $\phi(z) = z^{-1}$ (optionally shifted as $(z+\epsilon)^{-1}$ to avoid the singularity at $z=0$). In polar form, for $z = \rho\,e^{i\theta}$,

$$\phi(z) = \rho^{-1}\,e^{-i\theta}$$

This maps the input's magnitude to its reciprocal and reverses its phase.

  • Phase-of-Blaschke activation: For a Blaschke factor $B_a(z)$ with parameter $a$ in the unit disk $\mathbb{D}$,

$$\phi_a(z) = \mathrm{Im}\big(\log B_a(z)\big) = \arg B_a(z)$$

In closed form,

$$\phi_a(z) = \arctan\left( \frac{(1 - |a|^2)\,\mathrm{Im}\,z}{|1 - \overline{a}\,z|^2 - (1 - |a|^2)\,\mathrm{Re}\,z} \right)$$

This is the imaginary part of a holomorphic function on the unit disk $\mathbb{D}$, defined except at the zero $z=a$.

Both constructions satisfy the Cauchy–Riemann conditions or are locally the imaginary part of a holomorphic function on their domain, excluding isolated singularities (Zhang et al., 11 Oct 2025, Coifman et al., 2021).
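A minimal NumPy sketch of both constructions follows; the function names, the $\epsilon$ value, and the test points are illustrative, and the Blaschke factor is assumed to take the standard Möbius form $B_a(z) = (z-a)/(1-\overline{a}\,z)$.

```python
import numpy as np

def cauchy_inversion(z, eps=1e-3):
    """Inversion activation phi(z) = 1/(z + eps); the eps shift is the
    optional regularization mentioned above (value chosen arbitrarily)."""
    return 1.0 / (z + eps)

def blaschke_phase(z, a):
    """Phase-of-Blaschke activation phi_a(z) = arg B_a(z), assuming the
    standard Moebius form B_a(z) = (z - a) / (1 - conj(a) z)."""
    return np.angle((z - a) / (1.0 - np.conj(a) * z))

# Magnitude is inverted and phase is reversed: 1/z = rho^{-1} e^{-i theta}.
z = 0.5 * np.exp(1j * 0.8)                 # rho = 0.5, theta = 0.8
w = cauchy_inversion(z, eps=0.0)
print(abs(w), np.angle(w))                 # ~2.0 and ~-0.8

# The extracted phase is real and bounded in (-pi, pi].
print(blaschke_phase(0.3 + 0.2j, a=0.5))
```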

2. Analytic Properties and Holomorphicity

The defining property of holomorphic phasor activations is complex analyticity. For the inversion activation $\phi(z)=z^{-1}$:

  • It is analytic on $\mathbb{C} \setminus \{0\}$, with Wirtinger derivative $d\phi/dz = -1/z^{2}$.
  • It admits a convergent power series expansion about any point $z_0 \neq 0$ and satisfies $\partial\,z^{-1}/\partial\overline{z} = 0$.

For the phase-of-Blaschke activation,

  • $z \mapsto \phi_a(z) = \arg B_a(z)$ is the imaginary part of the holomorphic map $\log B_a(z)$ (defined except at the zero $z=a$, due to the branch of $\log$).
  • The output range is bounded: $(-\pi, \pi)$ in general, or $(-\pi/2, \pi/2)$ for typical arctan parametrizations.
  • Composition of such activations preserves inner-function character, yielding further analytic (holomorphic) transforms (Coifman et al., 2021); a numerical check follows this list.
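The inner-function claim is easy to probe numerically: composing Blaschke factors should preserve unit modulus on the unit circle. A minimal check, with arbitrarily chosen parameters $a$ inside the disk:

```python
import numpy as np

def blaschke(z, a):
    # Standard Blaschke factor B_a(z) = (z - a) / (1 - conj(a) z).
    return (z - a) / (1.0 - np.conj(a) * z)

# Inner functions map the unit circle to itself, so a composition of
# Blaschke factors should keep |B(z)| = 1 whenever |z| = 1.
theta = np.linspace(0.0, 2.0 * np.pi, 1000)
z = np.exp(1j * theta)                     # points on the unit circle
composed = blaschke(blaschke(z, a=0.4 + 0.2j), a=-0.3j)
print(np.max(np.abs(np.abs(composed) - 1.0)))  # near machine precision
```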

Deep phasor networks realize holomorphicity by restricting inputs to the unit circle and ensuring preactivations do not approach zero, so that derivatives and backpropagation remain well-defined (Olin-Ammentorp et al., 2021).

3. Signal Propagation, Modulation, and Representation

Holomorphic phasor activations induce characteristic effects on signal propagation:

  • Amplitude inversion (CauchyNet): Input magnitude $\rho$ is inverted. Small-magnitude inputs yield large-magnitude outputs, concentrating representation capacity near singular or critical inputs.
  • Phase reversal/injection: For $\phi(z)=z^{-1}$, the output phase is $-\theta$. Successive layers accumulate phase rotations, modeling oscillatory or directional dependencies.
  • Phase extraction (Blaschke/phasor networks): Extracts the phase of the weighted sum, encoding instantaneous frequency and phase information.

These mechanisms yield architectures that efficiently capture rational spikes, singularities, and dynamic structures—significantly improving fitting and interpolation on tasks characterized by sharp transitions or frequency jumps (Zhang et al., 11 Oct 2025, Coifman et al., 2021, Olin-Ammentorp et al., 2021).
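A few throwaway evaluations (inputs chosen arbitrarily) make the first two mechanisms concrete:

```python
import numpy as np

# Amplitude inversion: small |z| in, large |z| out (and vice versa).
for rho in (0.1, 1.0, 10.0):
    z = rho * np.exp(1j * np.pi / 4)
    print(rho, "->", abs(1.0 / z))         # prints 10.0, 1.0, 0.1

# Phase reversal: each inversion negates the phase, so a rotation
# weight w applied between two inversions acts in the opposite sense,
# letting depth control the accumulated rotation.
w = np.exp(1j * 0.3)                       # pure rotation weight
z = 2.0 * np.exp(1j * 0.6)
print(np.angle(1.0 / z))                   # -0.6
print(np.angle(1.0 / (w * (1.0 / z))))     # 0.3: phase re-reversed and shifted
```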

4. Neural Network Architectures and Composition

Table: Holomorphic Phasor Neural Modules

| Activation | Formula | Network Role |
| --- | --- | --- |
| Inversion (CauchyNet) | $\phi(z)=z^{-1}$ | Hidden/unitwise |
| Blaschke phase | $\phi_a(z)=\arg B_a(z)$ (arctan ratio) | Layer/interpreter |
| Phasor/Arg extraction | $\theta_{\text{out}} = \mathrm{Arg}(z)$ | Output/temporal code |

In multi-layer settings:

  • CauchyNet: Each hidden layer applies elementwise inversion, possibly with offset $\epsilon$. Output depends holomorphically on all intermediate variables.
  • Blaschke networks: Each layer applies a Möbius transform (Blaschke), followed by phase extraction (arctan activation). Output is the sum of layerwise phase extractions.
  • Deep phasor networks: Inputs are unit phasors; layers perform complex-weighted summation followed by argument extraction (phase), maintaining holomorphicity throughout the stack (Olin-Ammentorp et al., 2021). A forward-pass sketch follows this list.
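A minimal sketch of the deep-phasor forward pass just described; the layer sizes, Gaussian weight initialization, and random seed are illustrative assumptions, not choices taken from (Olin-Ammentorp et al., 2021).

```python
import numpy as np

rng = np.random.default_rng(0)

def phasor_layer(theta_in, W):
    """One deep-phasor layer: complex-weighted sum of unit phasors
    followed by argument extraction."""
    z = W @ np.exp(1j * theta_in)          # complex preactivation
    return np.angle(z)                     # phase-only activation

# Random complex weights for a three-layer stack (8 -> 16 -> 16 -> 4).
shapes = [(16, 8), (16, 16), (4, 16)]
weights = [rng.normal(size=s) + 1j * rng.normal(size=s) for s in shapes]

theta = rng.uniform(-np.pi, np.pi, size=8)  # input phases
for W in weights:
    theta = phasor_layer(theta, W)
print(theta)                                # output phases in (-pi, pi]
```

Because each layer is just $\mathrm{Arg}$ of a holomorphic preactivation, the closed-form derivative discussed in Section 5 applies at every layer.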

5. Learning Representation, Backpropagation, and Stability

Holomorphic phasor activations exhibit advantageous learning characteristics:

  • Stable gradients: No "dead" regions or saturations (in contrast to ReLU or tanh); inversion and phase extraction yield non-vanishing, holomorphic gradients.
  • Closed-form derivatives: For Arg-based activations, $\partial\,\mathrm{Arg}(z)/\partial z = 1/(2i\,z)$, enabling chain-rule analytic backpropagation (Olin-Ammentorp et al., 2021); this identity is checked numerically after this list.
  • Universal approximation: The rational activation $z^{-1}$ is universal-approximation capable, modeling spikes and singularities with fewer parameters than real-valued saturating or piecewise-linear activations (Zhang et al., 11 Oct 2025, Coifman et al., 2021).
  • Numerical stability: Hilbert transform boundedness and inner-function composition ensure robustness under repeated composition (deep architectures).
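The closed-form derivative above can be sanity-checked against a finite-difference Wirtinger derivative $\partial f/\partial z = \tfrac{1}{2}(\partial f/\partial x - i\,\partial f/\partial y)$; the test point and step size below are arbitrary choices.

```python
import numpy as np

def wirtinger_dz(f, z, h=1e-6):
    """Numerical Wirtinger derivative df/dz = (df/dx - i df/dy) / 2,
    via central finite differences with step h."""
    dfdx = (f(z + h) - f(z - h)) / (2 * h)
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return 0.5 * (dfdx - 1j * dfdy)

z0 = 1.2 - 0.7j
print(wirtinger_dz(np.angle, z0))          # numerical estimate
print(1.0 / (2j * z0))                     # closed form 1/(2iz)
```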

6. Empirical Performance and Applications

Experimental evaluations demonstrate:

  • Predictive accuracy: The CauchyNet architecture achieves up to 50% lower mean absolute error (MAE) on test sets than ReLU MLPs and SIREN models, with compact parameterization (e.g., one hidden layer of 128 units) (Zhang et al., 11 Oct 2025).
  • Data extrapolation: Holomorphic inversion and phase-extractive networks extrapolate accurately into missing/gapped regions, where conventional activations fail to track underlying structures.
  • Fast convergence: Rapid decrease in loss during training, with fewer parameters and greater data efficiency.
  • Domain generality: Successfully applied to time series forecasting, missing data imputation, and modeling in transportation, energy, and epidemiological datasets.

A plausible implication is that holomorphic phasor activations are particularly well-suited for resource-constrained and data-scarce environments, where compact models with robust generalization are desired.

7. Holomorphic Phasor Activations in Temporal and Neuromorphic Computation

Deep phasor networks encode activations as timed spikes in temporal domains:

  • Atemporal execution: Standard floating-point tensor evaluation (complex linear combinations and Arg extraction).
  • Temporal/spiking execution: Each activation encodes phase as spike timing within an oscillatory cycle; the network dynamics (resonate-and-fire neurons) implement the same holomorphic computations via event-driven updates (Olin-Ammentorp et al., 2021). A toy phase-to-spike-time mapping is sketched after this list.
  • Equivalence: The mathematical machinery is identical for atemporal and spiking modes; no conversion is needed between them. This enables the direct deployment of trained networks on neuromorphic hardware.
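As a toy illustration of the phase-to-spike-timing correspondence, the sketch below linearly maps a phase in $(-\pi, \pi]$ to a spike time within one oscillatory cycle and back. The linear mapping and unit period are assumptions made for illustration; they are not the resonate-and-fire dynamics of the cited paper.

```python
import numpy as np

def phase_to_spike_time(theta, period=1.0):
    """Encode a phase as a spike time within one cycle (toy model)."""
    return (np.mod(theta, 2.0 * np.pi) / (2.0 * np.pi)) * period

def spike_time_to_phase(t, period=1.0):
    # Inverse map: recover the wrapped phase from the spike time.
    return np.angle(np.exp(2j * np.pi * t / period))

theta = np.array([-3.0, -1.0, 0.0, 2.5])
t = phase_to_spike_time(theta)
print(t)                                   # spike times in [0, 1)
print(spike_time_to_phase(t))              # recovers the phases
```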

This suggests holomorphic phasor networks possess unique advantages for mixed-mode computation, bridging classical tensor models and spiking/event-driven architectures.


In summary, holomorphic phasor activations encompass analytic transformations capturing phase or inversion of complex signals, with proven data-efficiency, stable learning, and interpretability. Their connection to Hardy spaces, Blaschke products, and rational kernel theory equips deep models to represent intricate singularities and temporal/spatial dependencies, with empirically validated superiority in generalization, sparsity, and multi-domain adaptability (Zhang et al., 11 Oct 2025, Coifman et al., 2021, Olin-Ammentorp et al., 2021).
