Complex-Valued Processing Overview
- Complex-valued processing is a framework that treats signals as complex numbers to capture both amplitude and phase, improving modeling in communications, imaging, and radar.
- It extends traditional mathematical operations using techniques like Wirtinger calculus, enabling advanced neural network optimization and adaptive filtering.
- Practical applications include complex Gaussian processes and complex-valued neural networks (CVNNs) that deliver robust performance in noisy and interference-prone environments.
Complex-valued processing encompasses a set of theoretical, algorithmic, and hardware frameworks that treat signals, data, or parameters as elements of the complex field ℂ, rather than confining them to the real line. This paradigm is fundamental in applications such as communications, radar, MRI, seismic analysis, quantum engineering, and wave-based modeling, where the intrinsic structure of data—amplitude and phase—demands representations and operations in ℂ. Advancements in complex-valued signal processing, adaptive systems, deep learning, privacy, and optimization have produced robust methodologies that exploit this structure, yielding improved modeling fidelity, efficiency, and interpretability.
1. Mathematical Foundations of Complex-Valued Processing
Complex-valued processing treats every datum as a point in ℂ and extends foundational operations (arithmetic, transforms, and kernel methods) into this algebraic field.
Complex Calculus and Differentiation
Complex-valued functions can be analytic (holomorphic) or non-analytic, with differentiability governed by the Cauchy–Riemann equations. In neural and kernel-based processing, non-holomorphic (split-type) nonlinearities are required for boundedness due to Liouville’s theorem. Differentiation in ℂ uses Wirtinger calculus: writing f(z) with z = x + iy, the Wirtinger derivatives are ∂f/∂z = ½(∂f/∂x − i ∂f/∂y) and ∂f/∂z̄ = ½(∂f/∂x + i ∂f/∂y), which remain well defined even when f is not holomorphic.
Most complex-valued learning algorithms use Wirtinger derivatives for gradient-based optimization, including in backpropagation for neural networks and hyperparameter learning for Gaussian processes (Abdalla, 2023, Sarroff et al., 2015).
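As a sanity check on the Wirtinger definitions above, the two derivatives can be estimated numerically; the sketch below (plain NumPy, helper name hypothetical) recovers the textbook identities ∂|z|²/∂z = z̄ and ∂|z|²/∂z̄ = z.

```python
import numpy as np

def wirtinger_grads(f, z, h=1e-6):
    """Central-difference estimates of df/dz and df/dz̄ via
    df/dz = (df/dx - i df/dy)/2 and df/dz̄ = (df/dx + i df/dy)/2."""
    dfdx = (f(z + h) - f(z - h)) / (2 * h)
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return 0.5 * (dfdx - 1j * dfdy), 0.5 * (dfdx + 1j * dfdy)

# f(z) = |z|^2 is real-valued and non-holomorphic: df/dz = conj(z), df/dz̄ = z.
dz, dzbar = wirtinger_grads(lambda z: (z * np.conj(z)).real, 2.0 + 1.0j)
print(dz, dzbar)  # ≈ (2-1j) and (2+1j)
```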
Properness and Covariance Structure
In stochastic modeling, proper complex random processes are circularly symmetric with vanishing pseudo-covariance, simplifying kernels and likelihoods (Boloix-Tortosa et al., 2015). Convolutional approaches to cross-covariances extend reproducing kernel Hilbert spaces to ℂ, ensuring positive-definiteness and capturing both magnitude and phase structure.
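As a quick illustration of properness, a circularly symmetric complex Gaussian has covariance E[zz*] = 1 but pseudo-covariance E[zz] ≈ 0; a minimal NumPy check:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Proper (circularly symmetric) complex Gaussian: i.i.d. real and imaginary
# parts, scaled so that E[z conj(z)] = 1.
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
print(np.mean(z * np.conj(z)))  # covariance,        ≈ 1 + 0j
print(np.mean(z * z))           # pseudo-covariance, ≈ 0 + 0j
```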
2. Core Algorithms and Architectures
The extension of classical models to ℂ requires careful adaptation of every layer, operation, and loss.
Complex Gaussian Processes
Complex-valued Gaussian process regression employs complex-valued kernels and noise models to fit, predict, and denoise signals directly in ℂ. For proper processes, the kernel structure ensures circular symmetry and can be constructed by filtering independent real driving noises with suitably parameterized response functions. Parameter optimization is performed via Wirtinger-gradient ascent on the marginal likelihood (Boloix-Tortosa et al., 2015). Extensions model the real signal as the real part of a latent complex process, with specialized quadrature and quasi-quadrature kernels that generalize the Hilbert transform, enabling estimation of instantaneous amplitude and frequency with Bayesian uncertainty quantification (Ambrogioni et al., 2016).
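A minimal sketch of complex GP prediction, assuming a real squared-exponential kernel over complex inputs and proper (circular) noise; the parameterized response-function kernels and Wirtinger-gradient hyperparameter fitting of the cited work are omitted:

```python
import numpy as np

def se_kernel(z1, z2, ell=1.0):
    # Real SE kernel on complex inputs; |z - z'| is the distance in C.
    d = np.abs(z1[:, None] - z2[None, :])
    return np.exp(-(d ** 2) / ell ** 2)

def cgp_predict(z_train, y_train, z_test, noise=1e-2, ell=1.0):
    """Posterior mean/variance of a proper complex GP: the algebra is the
    familiar real GP regression, but the observations y are complex."""
    K = se_kernel(z_train, z_train, ell) + noise * np.eye(len(z_train))
    k_star = se_kernel(z_test, z_train, ell)
    mean = k_star @ np.linalg.solve(K, y_train)          # complex mean
    var = (se_kernel(z_test, z_test, ell).diagonal()
           - np.einsum('ij,ji->i', k_star, np.linalg.solve(K, k_star.T)))
    return mean, var
```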
Adaptive Filtering in ℂ
Adaptive system identification frameworks integrate complex tensors and least-mean-square (LMS) learning. The canonical polyadic decomposition (CPD) architecture is extended to ℂ, with three principal forms: trivial “split-real” adaptation (processing real and imaginary parts independently), “two-tensor” models with complex LMS post-processing, and fully complex tensor models updated via Wirtinger calculus. The fully complex form achieves optimal interference cancellation and nonlinear modeling in communications at increased but tractable computational cost (Ploder et al., 2023).
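A minimal fully complex LMS sketch (the third form above, without the tensor/CPD structure of the cited work). With y = wᴴu and e = d − y, the update w ← w + μ·u·e* is the Wirtinger steepest-descent step on |e|², since ∂|e|²/∂w* = −u e*:

```python
import numpy as np

def complex_lms(x, d, order=4, mu=0.05):
    """Fully complex LMS identification of a length-`order` FIR channel."""
    w = np.zeros(order, dtype=complex)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1 : n + 1][::-1]   # regression vector
        e = d[n] - np.vdot(w, u)             # np.vdot conjugates w: w^H u
        w += mu * u * np.conj(e)             # Wirtinger-gradient update
    return w
```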
Deep and Hybrid Neural Architectures
Complex-valued neural networks (CVNNs) replace each affine or convolutional operation with its complex analogue; all weights, biases, and activations lie in ℂ. Activation functions fall into three families: component-wise splits (e.g., CReLU), modulus–phase (e.g., modReLU, cardioid), and phase-quantized (e.g., multi-valued/phasor MVN neurons) (Abdalla, 2023, Agrawal, 10 Oct 2025). Backpropagation proceeds via Wirtinger calculus.
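Minimal PyTorch sketches of one representative from each of the first two families (the modReLU bias b would normally be a learnable per-channel parameter; here it is a plain argument):

```python
import torch

def crelu(z):
    # Split family: real-valued ReLU applied component-wise.
    return torch.complex(torch.relu(z.real), torch.relu(z.imag))

def modrelu(z, b):
    # Modulus-phase family: threshold |z| by a bias b, keep the phase.
    mag = torch.abs(z)
    return torch.relu(mag + b) * (z / (mag + 1e-9))

def cardioid(z):
    # Modulus-phase family: scale z by (1 + cos(angle(z))) / 2.
    return 0.5 * (1 + torch.cos(torch.angle(z))) * z
```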
Hybrid real–complex architectures feature dual processing paths and include domain-conversion operators (e.g., from Cartesian/polar real inputs to ℂ and vice versa). This allows the model to combine real-valued efficiency with complex-valued modeling, dynamically adapting to the signal domain and context (Young et al., 4 Apr 2025).
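A sketch of the kind of domain-conversion operators such hybrid paths rely on (function names are hypothetical; torch.polar and the .real/.imag views are standard PyTorch):

```python
import torch

def polar_to_complex(mag, phase):
    # Real (magnitude, phase) channels -> native complex tensor.
    return torch.polar(mag, phase)            # mag * exp(i * phase)

def complex_to_cartesian(z):
    # Native complex tensor -> stacked real/imaginary channels.
    return torch.stack((z.real, z.imag), dim=-1)
```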
Manifold-based approaches, such as SurReal and Steinmetz networks, treat ℂ∖{0} as a Riemannian product manifold (scaling × rotation), with convolution and nonlinearity defined intrinsically, ensuring equivariance and more compact, interpretable representations (Chakraborty et al., 2019, Venkatasubramanian et al., 2024).
Restricted Boltzmann Machines and Embedding Methods
A complex-valued RBM (CRBM) extends energy-based learning to model, reconstruct, and parameterize complex-valued spectra, directly encoding and regenerating phase information, and achieves better quality-assessment metrics on speech tasks than purely real-valued baselines (Nakashika et al., 2018).
Complex spectral embeddings allow indefinite and non-metric proximity matrices to be mapped to ℂᵈ, preserving the original information and enabling complex-valued prototype-based classification and learning, which is particularly advantageous for structure-rich distance data (Münch et al., 2020).
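One plausible construction of such an embedding (the cited method may differ in details): eigendecompose the symmetric proximity matrix and let negative eigenvalues, which a real embedding would have to discard, contribute imaginary coordinates through the complex square root.

```python
import numpy as np

def complex_spectral_embedding(S, d):
    """Map an indefinite symmetric proximity matrix S to C^d."""
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(-np.abs(vals))[:d]            # d dominant components
    # Complex sqrt sends negative eigenvalues to the imaginary axis.
    return vecs[:, idx] * np.sqrt(vals[idx].astype(complex))
```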
3. Computational Considerations and Implementation
Complex-valued processing incurs a higher arithmetic and resource cost per parameter than real-valued analogues, but the asymptotic scaling is preserved.
Arithmetic Cost
Each complex multiplication requires four real multiplications and two additions; each complex addition requires two real additions (Mayer et al., 2023). Feedforward and backpropagation in CVNNs scale as O(L N²) for L layers of width N, with a constant factor of roughly 5–8× that of RVNNs due to complex arithmetic. Low-level optimizations (e.g., Gauss’s three-multiplication trick, sketched below) can reduce this overhead (Smith, 2023).
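For reference, a minimal sketch of Gauss’s trick, which computes a complex product with three real multiplications and five additions instead of four and two:

```python
def gauss_cmul(a, b, c, d):
    """(a + ib)(c + id) using 3 real multiplications (Gauss's trick)."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2   # (real part, imaginary part)

assert gauss_cmul(1.0, 2.0, 3.0, 4.0) == (-5.0, 10.0)  # (1+2i)(3+4i) = -5+10i
```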
Hardware and Algorithmic Optimization
Fast Fourier transform (FFT) algorithms can be re-cast to keep real and imaginary channels independent throughout the pipeline, mapping well to hardware that lacks a native complex multiply and supporting downstream dual-channel processing, without increasing arithmetic complexity (Cariow, 9 Apr 2025).
Reduced-parameter models are feasible due to the highly structured nature of complex arithmetic; hybrid and geometric models often achieve higher accuracy–parameter ratios than purely real-valued DNNs (Chakraborty et al., 2019, Young et al., 4 Apr 2025, Venkatasubramanian et al., 2024).
4. Signal Processing, Learning, and Privacy Applications
Complex-valued processing underpins a broad spectrum of data-driven and physics-inspired models for real-world signals.
Signal Modeling and Analysis
In time-frequency analysis and time series modeling, complex-valued Gaussian processes outperform Hilbert and wavelet methods in estimating instantaneous amplitude and frequency, especially in low SNR or non-sinusoidal regimes (Ambrogioni et al., 2016).
In imaging, M-mode-like OCT angiography separates vessel and static tissue based on lateral amplitude/phase decorrelation in dense complex B-scans, achieving motion robustness and precise vessel quantification (Matveev et al., 2014).
Communication and Sensing
Complex-valued architectures are central to channel identification, MIMO channel estimation, interference cancellation, and robust PolSAR segmentation. In high-noise or multi-path environments, end-to-end CVNNs demonstrate stronger performance and noise resilience compared to purely real-valued networks (Ploder et al., 2023, Smith et al., 2023, Barrachina et al., 2022).
Privacy-Preserving Learning
Complex-valued federated learning integrates the circular complex Gaussian mechanism, which carries f-DP, (ε,δ)-DP, and Rényi-DP privacy guarantees, with complex-valued neural architectures, applying conjugate (Wirtinger) gradients and DP-compatible normalization and activation layers. MRI pulse-sequence classification experiments show under 2% utility loss at ε = 3, a strong-privacy regime feasible for privacy-critical medical AI (Riess et al., 2021).
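A minimal sketch of the shape of one such private aggregation step: per-sample clipping of complex gradients followed by circularly symmetric complex Gaussian noise. Calibration of sigma to a target (ε, δ) and all privacy accounting are omitted; this is an illustration, not a vetted DP implementation.

```python
import numpy as np

def private_complex_mean(per_sample_grads, clip=1.0, sigma=1.0,
                         rng=np.random.default_rng()):
    """Clip each complex gradient to L2 norm `clip`, average, and add
    circularly symmetric complex Gaussian noise of scale sigma*clip/n."""
    n = len(per_sample_grads)
    clipped = [g * min(1.0, clip / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    mean = np.mean(clipped, axis=0)
    scale = sigma * clip / n
    noise = scale * (rng.standard_normal(mean.shape)
                     + 1j * rng.standard_normal(mean.shape)) / np.sqrt(2)
    return mean + noise
```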
Optical Information Processing
Diffractive deep neural networks (D2NN) implement arbitrary complex-valued linear transformations using spatially incoherent light via “mosaicing” complex bases, enabling thin, all-optical, high-throughput processors for encryption/decryption and analog signal processing. The number of diffractive parameters required scales with the space–bandwidth product of the input and output fields, and phase-only wavefront coding allows physically robust designs (Yang et al., 2023).
5. Practical Implementation: Frameworks, Initialization, and Normalization
With the emergence of deep learning in ℂ, specialized frameworks and modules have been developed.
Framework Support and Modules
PyTorch and TensorFlow support direct complex operations and Wirtinger derivatives, with community packages providing fully complex linear, convolutional, batch and layer normalization, manifold layers, attention, dropout, and loss functions (Smith, 2023, Abdalla, 2023).
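A minimal end-to-end check that native complex autograd works in PyTorch (for real-valued losses the returned gradient follows PyTorch’s conjugate-Wirtinger convention, so plain gradient descent applies):

```python
import torch

w = torch.randn(4, 4, dtype=torch.cfloat, requires_grad=True)
x = torch.randn(8, 4, dtype=torch.cfloat)

y = x @ w.T                    # complex affine map
loss = (y.abs() ** 2).mean()   # real-valued loss
loss.backward()                # w.grad is a complex tensor

with torch.no_grad():
    w -= 0.1 * w.grad          # ordinary gradient-descent step
```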
Initialization and Normalization
Stable initialization in ℂ uses either polar forms (Rayleigh modulus, uniform phase) to maintain variance, or independent real–imaginary draws. Complex batch normalization whitens the real–imaginary pair via its 2 × 2 covariance matrix, ensuring circular symmetry while preserving amplitude–phase coupling (Abdalla, 2023, Agrawal, 10 Oct 2025).
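A sketch of the polar-form initializer with a Glorot-style variance target; the Rayleigh scale below is one common convention (giving E|w|² = 2/(fan_in + fan_out)), not the only choice:

```python
import numpy as np

def complex_glorot_init(fan_in, fan_out, rng=np.random.default_rng()):
    """Rayleigh-distributed modulus, uniform phase in (-pi, pi]."""
    sigma = 1.0 / np.sqrt(fan_in + fan_out)   # Rayleigh scale
    modulus = rng.rayleigh(scale=sigma, size=(fan_out, fan_in))
    phase = rng.uniform(-np.pi, np.pi, size=(fan_out, fan_in))
    return modulus * np.exp(1j * phase)       # E|w|^2 = 2 * sigma^2
```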
Activation Functions
Choice of activation is critical for phase preservation and training stability. Cardioid and CReLU exhibit superior generalization and convergence in both signal and audio domains; modReLU (modulus thresholding) and zReLU (first-quadrant gating) offer more expressive power but can be susceptible to collapse or instability if not carefully tuned (Agrawal, 10 Oct 2025).
6. Theoretical and Empirical Insights
Complex-valued processing provides not just representational fidelity, but also theoretical and empirical advantages:
- The phase–magnitude structure of ℂ enables built-in equivariance to phase shifts and amplitude scalings, unattainable in RVNNs (Chakraborty et al., 2019, Venkatasubramanian et al., 2024).
- Wavelet and windowed-spectrum transforms are naturally realized via complex-valued convolutions and modulus operations, yielding provable stability and universal expressiveness (Bruna et al., 2015).
- Hybrid and geometric models achieve state-of-the-art performance on RF, MRI, PolSAR, and audio tasks, with far fewer parameters and improved generalization in low-SNR or limited-data settings (Young et al., 4 Apr 2025, Chakraborty et al., 2019, Venkatasubramanian et al., 2024).
- Privacy guarantees are attainable for sensitive complex-domain data (e.g., MRI) using rigorously generalized mechanisms and per-sample gradient clipping in complex space (Riess et al., 2021).
Complex-valued processing, with ongoing advances in architectures, optimization, and applications, continues to be central to the next generation of signal and data understanding in science and engineering.