
Neuromorphic Wireless Split Computing

Updated 21 January 2026
  • Neuromorphic wireless split computing is an integrated system that partitions spiking neural network processing between energy-efficient edge sensors and high-capacity cloud or edge servers.
  • The approach leverages co-designed SNN encoding and wireless interfacing—using methods such as impulse-radio, OFDM, and CDMA—to transmit sparse, event-driven spike signals over noisy channels.
  • Advanced training techniques, including federated learning and joint source-channel coding, optimize energy, latency, and accuracy for real-world applications across biomedical, robotics, and IoT domains.

Neuromorphic wireless split computing is an integrated device-edge paradigm that partitions spiking neural network (SNN) inference or learning pipelines between an event-driven neuromorphic front end (sensor plus lightweight encoding SNN) at the edge and a higher-capacity SNN or artificial neural network (ANN) at an edge/cloud server, using energy-efficient wireless transmission of sparse spike-coded signals as the intermediate representation. Distinct from classical full-stack or frame-based machine learning deployments, neuromorphic split computing exploits event-driven sensing, temporal and spatial sparsity, and the robust statistical properties of spikes to deliver ultra-low-power, low-latency remote inference and federated learning under stringent bandwidth and energy constraints. SNN encoding and wireless transmission are jointly learned or co-designed to address the challenges of noisy, bandwidth-limited links and synchrony between edge and server, with system architectures ranging from impulse-radio over single-user links to dense code-division multiaccess and orthogonal frequency division multiplexing (OFDM).

1. Core System Architectures and Principles

The canonical neuromorphic wireless split computing system is structured as a five-block pipeline:

  1. Event-based Neuromorphic Sensor: Devices such as dynamic vision sensors (DVS), event-driven microphones, or neural implants generate high-dimensional sparse spike streams $\mathbf{o}_{\leq T}$ with $d_o$ channels over $T$ time steps (Skatchkovsky et al., 2020). Event sparsity (e.g., 100$\times$ data reduction over conventional frame sensors) directly lowers downstream power and communication load.
  2. Edge SNN Encoder: A lightweight, often single-layer SNN ($\theta^E$) encodes sensory spikes into $d_x$-dimensional sparse spike trains $\mathbf{x}_{\leq T}$:

$$p_{\theta^E}(\mathbf{x}_{\leq T}\mid\mathbf{o}_{\leq T})=\prod_{t=1}^T p(\mathbf{x}_t\mid\mathbf{x}_{<t},\mathbf{o}_{\leq t})$$

Designs may use GLM-style stochastic spiking (Skatchkovsky et al., 2020) or deterministic LIF/SRM neurons, with minimal or no hidden layers to minimize local compute (Skatchkovsky et al., 2020).

  3. Wireless Interface: Impulse-radio (IR) modulation, OFDM (analog or digital), or asynchronous code-division multiple access (CDMA) maps sparse spike events or graded spike payloads to ultra-short RF signals. Each spike may correspond to a Gaussian monopulse (IR) or a PAM/QPSK symbol (OFDM) (Wu et al., 2024, Wu et al., 24 Jun 2025).
  4. Wireless Channel: Links are modeled as memoryless AWGN or Rayleigh multipath channels, binary symmetric channels (BSC), or, in the multi-user case, as shared asynchronous CDMA (Lee et al., 2023). Reliability trade-offs depend on physical SNR, spike sparsity, and multiple-access interference.
  5. Edge/Cloud SNN/ANN Decoder: The received, possibly corrupted, spike stream $\mathbf{y}_{\leq T}$ is processed by a fully connected SNN decoder ($\theta^D$) or, in hybrid systems, by an ANN for final inference:

$$p_{\theta^D}(\mathbf{v}_{\leq T}\mid\mathbf{y}_{\leq T}) = \prod_{t=1}^T p(\mathbf{v}_t\mid\mathbf{v}_{<t},\mathbf{y}_{\leq t})$$

Adaptive remote decoders may be dynamically reconfigured or retargeted to channel conditions via hypernetworks or pilot signals (Chen et al., 2022, Chen et al., 2024).

The core design principle is to split a complex semantic inference objective so that the front-end SNN extracts early, often spatially redundant or locally predictable features, while task-specific context aggregation and complex discrimination remain server-side. The SNN encoder outputs sparse intermediate codes directly suitable for event-driven wireless transport, jointly learning source and channel representations (Skatchkovsky et al., 2020, Skatchkovsky et al., 2020).
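As a concrete illustration, the five-block pipeline can be sketched end to end in a few lines. This is a minimal toy model under assumed settings (all dimensions, thresholds, weights, and noise levels here are hypothetical), not any paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d_o sensor channels, d_x encoded channels, T time steps.
d_o, d_x, T = 32, 8, 50

def lif_encode(o_seq, W, v_th=1.0, leak=0.9):
    """Single-layer LIF encoder: input events -> sparse binary spike train x_{<=T}."""
    v = np.zeros(d_x)
    x_seq = np.zeros((T, d_x), dtype=int)
    for t in range(T):
        v = leak * v + W @ o_seq[t]          # leaky integration of input events
        x_seq[t] = (v >= v_th).astype(int)   # fire where the threshold is crossed
        v[x_seq[t] == 1] = 0.0               # reset neurons that fired
    return x_seq

def ook_channel(x_seq, e_pulse=1.0, sigma=0.2):
    """IR on-off keying over AWGN: threshold detection of each spike slot."""
    noisy = x_seq * e_pulse + rng.normal(0.0, sigma, x_seq.shape)
    return (noisy > 0.5).astype(int)

def rate_decode(y_seq):
    """Toy stand-in for the server-side decoder: per-channel spike counts as features."""
    return y_seq.sum(axis=0)

o_seq = (rng.random((T, d_o)) < 0.05).astype(int)   # sparse event stream
W = rng.normal(0.0, 0.3, (d_x, d_o))
x_seq = lif_encode(o_seq, W)
y_seq = ook_channel(x_seq)
features = rate_decode(y_seq)
print("spike density:", x_seq.mean(), "features:", features)
```

The server side is reduced here to a spike-count readout; in the systems surveyed it would be a trained SNN or ANN decoder.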

2. Wireless Encoding, Modulation, and Channel Models

Several transmission schemes are compatible with neuromorphic split systems, tailored to maximize spectral and energy efficiency given the spiking nature of signals:

  • Impulse-Radio (IR) OOK: Each binary spike $x_{k,t}\in\{0,1\}$ triggers a single IR monocycle pulse with fixed energy $E_{\rm pulse}$. The receiver demodulates via thresholding:

$$y_{k,t} = \mathbf{1}\{x_{k,t}\,E_{\rm pulse} + n_{k,t} > 0.5\}$$

with $n_{k,t}\sim\mathcal{N}(0,\sigma^2)$. Sub-nanosecond pulses minimize per-event latency (on the order of 1 ns) and energy (tens of pJ) (Skatchkovsky et al., 2020).

  • Graded (Multi-Level) Spiking Modulation (M-LIF): To carry more information per transmitted event, multi-level spike coding is implemented using graded SNNs; each spike carries an $m$-bit payload mapped from a quantized membrane voltage (Wu et al., 2024), e.g.,

$$Q(V) = \min\left(\lfloor \alpha V\, 2^{m}\rfloor,\ 2^{m}\right)$$

Payload transmission uses analog PAM or digital QPSK/LDPC OFDM; higher payload increases both inference accuracy and required bandwidth.

  • OFDM Mapping & Analog Transmission: For a large spike vector $S_t\in\{0,1\}^M$, each element is mapped to an OFDM subcarrier. Per-symbol transmission energy is $P\,T_{\rm sym}$, and the link supports Rayleigh multipath and pilot-aided channel estimation (Wu et al., 24 Jun 2025).
  • Asynchronous CDMA (ASBIT): Event-driven autonomous microsensors transmit unique, clock-agnostic BPSK-spread Gold-code bursts upon event detection. This enables scalable multiuser operation (demonstrated with up to 2,500 nodes at $<10^{-3}$ event error rate in 10 MHz bandwidth) by exploiting event sparsity and code quasi-orthogonality (Lee et al., 2023).
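The code-division idea can be illustrated with a toy correlation receiver. Random $\pm 1$ spreading sequences stand in for ASBIT's Gold codes, and all sizes, noise levels, and thresholds below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: random +/-1 spreading codes stand in for Gold codes.
n_nodes, code_len, sigma = 8, 127, 0.5
codes = rng.choice([-1.0, 1.0], size=(n_nodes, code_len))

# Sparse events: each node independently fires with low probability.
events = rng.random(n_nodes) < 0.25

# Superposed received signal: sum of the active nodes' codes plus AWGN.
received = (events[:, None] * codes).sum(axis=0) + rng.normal(0, sigma, code_len)

# Correlation receiver: normalized matched filter per node, thresholded.
scores = codes @ received / code_len
detected = scores > 0.5
print("events:  ", events.astype(int))
print("detected:", detected.astype(int))
```

Because the codes are quasi-orthogonal and events are sparse, the per-node correlation score concentrates near 1 for active nodes and near 0 otherwise, so thresholding typically recovers the event pattern.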

Per-symbol SNR, compression rate $r=d_x/d_o$, payload size $m$, and channel coding schemes (when used) are central design variables. Robustness is further enhanced by joint source-channel coding, i.e., end-to-end-trained SNN encoder/decoder pairs such as “NeuroJSCC” (Skatchkovsky et al., 2020, Skatchkovsky et al., 2020).
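The graded-spike quantizer $Q(V)$ above is straightforward to implement. The following sketch uses illustrative values of $\alpha$ and $m$ (the specific constants are assumptions, not values from the cited work):

```python
import numpy as np

def quantize_membrane(v, alpha=0.5, m=4):
    """Map a nonnegative membrane voltage to an m-bit spike payload via
    Q(V) = min(floor(alpha * V * 2^m), 2^m), i.e., a clipped uniform quantizer."""
    return np.minimum(np.floor(alpha * v * 2**m), 2**m).astype(int)

v = np.array([0.0, 0.3, 1.0, 2.0, 10.0])
print(quantize_membrane(v))  # → [ 0  2  8 16 16]
```

Voltages below the clipping point map linearly to payload levels, while large voltages saturate at $2^m$; increasing $m$ refines the payload resolution at the cost of bandwidth, matching the accuracy-bandwidth trade-off described above.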

3. Split Learning and Federated Training of SNNs

Training for split SNN architectures proceeds via local, federated, or end-to-end (JSCC) objectives:

  • Split Joint Source-Channel Autoencoding: Encoder and decoder SNNs are trained to minimize a loss that reflects both inference accuracy and channel robustness. Given a stochastic channel, the training objective is

$$\min_\theta\ -\log p_\theta(\mathbf{v}_{\leq T}\mid\mathbf{o}_{\leq T})$$

with gradient estimated using the REINFORCE (score-function) approach (for GLM SNNs) and Monte Carlo sampling (Skatchkovsky et al., 2020).

  • Directed Information Bottleneck: The wireless SNN encoder is optimized to minimize a trade-off

$$L_{\rm DIB}(\phi) = -I(\mathrm{received\ spikes}\to Y) + \beta\,I(\mathrm{input}\to \mathrm{received\ spikes})$$

where $\beta$ controls the balance between communication cost and semantic relevance (Ke et al., 2024).

  • Federated SNN Learning (FL-SNN): Multiple edge devices independently train local SNN models and periodically upload real-valued weights for global aggregation via

$$\theta \gets \frac{1}{\sum_d|\mathcal{D}^{(d)}|} \sum_{d}|\mathcal{D}^{(d)}|\,\theta^{(d)}$$

The communication cost of each upload/download is $\dim(\theta)$ real values; the upload interval $\Delta J$ trades convergence speed against radio usage (Skatchkovsky et al., 2020).
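The aggregation rule above is a dataset-size-weighted average; a minimal sketch (toy weights and sizes, flattened parameter vectors) is:

```python
import numpy as np

def fed_avg(local_weights, dataset_sizes):
    """Dataset-size-weighted average of locally trained weight vectors:
    theta <- sum_d |D^(d)| * theta^(d) / sum_d |D^(d)|."""
    sizes = np.asarray(dataset_sizes, dtype=float)
    stacked = np.stack(local_weights)            # shape (num_devices, dim)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Toy example: three devices with two-parameter models.
thetas = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
sizes = [100, 100, 200]
print(fed_avg(thetas, sizes))  # → [0.75 0.75]
```

Devices with more local data pull the global model proportionally harder, which is exactly the $|\mathcal{D}^{(d)}|$ weighting in the update rule.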

Designing training to accommodate adversarial, noisy, or burst-error-prone channels, and supporting privacy via local data retention or secure aggregation, are open challenges (Skatchkovsky et al., 2020, Ke et al., 2024). Preliminary testbeds validate the practicality of true neuromorphic co-inference on hardware platforms with sub-mW power and sub-30 ms E2E latency (Ke et al., 2024).
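The REINFORCE (score-function) estimator mentioned above can be sketched for a single stochastic Bernoulli spiking neuron; the reward function and all parameter values here are hypothetical, and a full GLM SNN would apply the same score term per neuron and time step:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def reinforce_grad(w, inputs, reward_fn, n_samples=500):
    """Score-function (REINFORCE) estimate of d E[R] / dw for a Bernoulli
    spiking neuron with firing probability sigmoid(w . inputs):
    grad = E[ R(x) * d log p(x) / dw ], estimated by Monte Carlo."""
    p = sigmoid(w @ inputs)
    grads = np.zeros_like(w)
    for _ in range(n_samples):
        x = float(rng.random() < p)     # sample a spike / no-spike outcome
        score = (x - p) * inputs        # d log p(x) / dw for a Bernoulli neuron
        grads += reward_fn(x) * score
    return grads / n_samples

w = np.array([0.1, -0.2])
inputs = np.array([1.0, 1.0])
# Hypothetical reward: +1 when the neuron spikes, 0 otherwise.
g = reinforce_grad(w, inputs, reward_fn=lambda x: x)
print(g)  # positive components: raising w increases the expected reward
```

The estimator needs only samples of the spiking output and the log-likelihood gradient, which is why it suits stochastic GLM SNNs where backpropagation through the spike sampling is unavailable.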

4. Quantitative Performance and Comparative Metrics

Performance of neuromorphic wireless split computing architectures is evaluated on several axes:

| Metric | All-spike/NeuroJSCC IR (Skatchkovsky et al., 2020, Skatchkovsky et al., 2020) | Multi-level SNN-OFDM (Wu et al., 2024) | RF SNN-OFDM (Wu et al., 24 Jun 2025) | ASBIT-CDMA (Lee et al., 2023) |
| --- | --- | --- | --- | --- |
| Data reduction | $>100\times$ (DVS vs. frame) | Monotonic in $m$ | 2–10$\times$ (sparsity) | $r=d_x/d_o$ at 2,000+ sensors |
| Inference accuracy vs. SNR | 90% at SNR $=-8$ dB, $r=1$ (MNIST-DVS, $T=80$) | Optimum $m^*(\mathrm{SNR})$ per channel | $>93\%$ at 5–10$\times$ lower energy vs. LIF | $>98.5\%$ relative to wired for $<10^{-3}$ SER |
| Energy per inference | Sub-$\mu$J (at $<1$ ns per spike) | Not explicitly quantified; analog OFDM lower at small $m$/low SNR | Scales with $d_x$ (BRF, SHD) and $T$ (ITS) | Single-chip node at mW scale (battery-free) |
| Latency | $<$ a few $\mu$s | Sensing slot (e.g., 130 ms) | Direct mapping per OFDM symbol | $<12$ ms per second of RF per node |
| Robustness (train-test SNR) | $<5\%$ degradation over $[-10, 10]$ dB | Analog more robust at low SNR | End-to-end SNN tuning for the channel | $<10^{-3}$ event error at SNR $-16$ dB |
| Scalability/multiuser | Not addressed (single-user) | N/A, open for extension | N/A, open | Thousands of nodes via CDMA and asynchronous spike codes |

Key findings include:

  • End-to-end spike-based "JSCC" (e.g., NeuroJSCC) reliably outperforms frame-based separate source-channel coding (SSCC) approaches in both energy and time-to-accuracy, especially at low SNR (Skatchkovsky et al., 2020, Skatchkovsky et al., 2020).
  • Multi-level SNN spike architectures allow payload selection $m^*$ to saturate end-to-end inference accuracy for a given link quality and bandwidth; analog transmission outperforms digital at small $m$ and low SNR (Wu et al., 2024).
  • Resonate-and-fire (RF, BRF) neurons encode spectral features natively, providing substantial further sparsity and energy reductions on complex signals (e.g., audio) (Wu et al., 24 Jun 2025).
  • Large-scale multiuser split computing is feasible with asynchronous event-driven CDMA, robust to clock drift and event collisions (Lee et al., 2023).

5. Extensions: Wake-Up Radios, Digital Twin Calibration, and Power Gating

Recent architectures address the non-negligible energy cost of keeping main radios powered even during event inactivity. Integration of a wake-up radio (WUR) enables always-on ultra-low-power sensing and event-driven firing of the main transmission path only upon detection of semantically relevant events (Chen et al., 2024):

  • Sensing stage: A Q-CUSUM monitor triggers the WUR transmitter once the change score exceeds threshold $\lambda^{\rm s}$.
  • WUR: Correlation-based OOK detection activates main receiver only when needed, reducing idle power.
  • Main IR transmission: Encodes and transmits buffered spikes after fixed delay.
  • Hypernetwork adaptation: Remote SNN decoder weights are reconfigured based on pilots after WUR triggering.
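A plain (unquantized) CUSUM monitor can stand in for the Q-CUSUM trigger in the sensing stage; the drift term, threshold, and signal statistics below are illustrative assumptions:

```python
import numpy as np

def cusum_trigger(samples, drift=0.1, threshold=2.0):
    """Minimal one-sided CUSUM monitor: accumulate positive deviations above
    a drift term and fire the wake-up path once the score crosses threshold.
    Returns the index of the triggering sample, or None if never triggered."""
    score = 0.0
    for t, s in enumerate(samples):
        score = max(0.0, score + s - drift)  # reset-to-zero CUSUM recursion
        if score > threshold:
            return t
    return None

rng = np.random.default_rng(3)
quiet = rng.normal(0.0, 0.1, 50)     # pre-change: near-zero sensor activity
active = rng.normal(0.5, 0.1, 50)    # post-change: elevated event statistics
t_trig = cusum_trigger(np.concatenate([quiet, active]))
print("wake-up at sample", t_trig)
```

During the quiet phase the score hovers near zero, so the main radio stays off; once the activity statistics shift, the score ramps up and crosses the threshold within a few samples, bounding the wake-up delay.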

A digital twin–learn-then-test (DT-LTT) methodology provides a systematic approach to threshold selection for sensing, wake-up, and decision, ensuring controlled trade-offs between reliability (loss threshold $\alpha$), latency, energy, and informativeness with theoretical guarantees (Chen et al., 2024).

Empirical results show $>50\%$ energy savings relative to always-on IR architectures at equivalent reliability, with latency bounded by deterministic wake-up and decision times.

6. Design Trade-Offs, Open Challenges, and Research Directions

Several core trade-offs shape neuromorphic wireless split computing system design (Skatchkovsky et al., 2020, Wu et al., 2024, Chen et al., 2024):

  • Compression Rate ($r$), Split Point, and Payload: Reducing $d_x/d_o$ and/or the spike rate directly lowers bandwidth and energy demand, but excessive compression or overly sparse activation degrades inference accuracy. A multi-level payload increases information per spike but heightens link robustness requirements.
  • Energy-Latency-Accuracy Trichotomy: Biasing the split toward the edge server lowers on-device computation but increases communication energy, and vice versa; joint optimization across the three axes is needed.
  • Synchronization and Robustness: Sparse spike trains require shared timing or explicit preamble for correct edge/server alignment. Irregular/burst errors (fading, multipath, interference) remain an open robustness challenge, partially mitigated by end-to-end source-channel learning and hypernetwork adaptation (Chen et al., 2022).
  • Multiuser and Privacy: Code division (for multi-node systems), federated SNN updates, and privacy-preserving aggregation remain under active exploration.
  • Hardware Co-Design: Real-world deployments must match wireless PHY (impulse radio, OFDM, wake-up radio) and mixed-signal SNN hardware with joint algorithm–circuit optimization (Chen et al., 2024, Wu et al., 24 Jun 2025).

Active research directions include federated and collaborative inference over shared wireless links, adaptive modulation and coding tied to channel state, and integration with mature system-on-chip neuromorphic computing platforms.

7. Application Domains and Experimental Realizations

Published systems and prototypes demonstrate broad application:

  • Batteryless, event-driven wireless neural implants: Large-scale, ultra-low-power systems exploiting address-event representation, delta-modulation, and arbitration logic for high-channel-count neural data compression (Mohan et al., 2023).
  • Low-latency remote inference in robotics and IoT: Event-to-action pipelines with neuromorphic camera + SNN edge encoder + IR wireless + server-side ANN/SNN controller, achieving real-time robot actuation with sub-mW edge power and $<30$ ms latency (Ke et al., 2024).
  • Brain-Machine Interface (BMI) decoding: Asynchronous split inference over 8,000+ event-driven CDMA channels, supporting state-of-the-art BMI decoding accuracy at negligible event error rates and ultra-low node power (Lee et al., 2023).
  • Joint audio and RF signal analysis: Resonate-and-fire SNN-OFDM splits supporting state-of-the-art classification at 5–10$\times$ lower energy than LIF or ANN baselines (Wu et al., 24 Jun 2025).

The split computing paradigm for neuromorphic wireless systems enables the deployment of always-on, privacy-preserving, and highly energy- and spectrum-efficient inference and learning across a spectrum of embedded, biomedical, robotic, and distributed sensor applications. The technology stack is characterized by flexible allocation of processing (edge/server split), event-driven sparse coding, and robust wireless communication matched to the statistical structure and temporal sparsity of spiking activity.
