CADC: Crossbar-Aware Dendritic Convolution
- CADC is an in-memory computing method that integrates a biologically inspired ReLU nonlinearity into crossbar operations to enhance partial sum sparsity.
- It reduces buffer and computational overhead and mitigates ADC noise accumulation, leading to significant energy savings and speed improvements.
- Empirical evaluations show that CADC delivers 11x–18x speedups and up to 22.9x energy-efficiency gains with minimal impact on classification accuracy.
Crossbar-Aware Dendritic Convolution (CADC) is an in-memory computing (IMC) technique for convolutional neural networks (CNNs) and spiking neural networks (SNNs) that introduces a biologically inspired nonlinearity at the level of crossbar-based partial sum (psum) generation. CADC addresses the system-level bottlenecks arising from partitioning convolutional layers across multiple crossbars by embedding a rectification function directly within each crossbar, thereby enhancing psum sparsity, reducing buffer and computational overhead, and minimizing signal degradation from analog-to-digital conversion (ADC) noise. Empirical evaluations demonstrate substantial system-level speedups and energy-efficiency improvements, with negligible (and sometimes positive) impact on classification accuracy (Dong et al., 27 Nov 2025).
1. Convolution Partitioning and the CADC Algorithm
Crossbar-based IMC architectures decompose convolutional layers into multiple segments due to array-size constraints. Given a convolutional weight tensor of shape $C_{out} \times C_{in} \times K \times K$, spatial and input-channel unrolling produces a $(C_{in} K^2) \times C_{out}$ matrix. A crossbar of size $R \times C$ can only accommodate $R$ rows, requiring the input dimension to be partitioned into $S = \lceil C_{in} K^2 / R \rceil$ segments.
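As a quick check of the partitioning arithmetic, the sketch below computes the segment count; the function name and the example parameters are illustrative assumptions, not taken from the paper.

```python
import math

# Minimal sketch of crossbar partitioning (names and example values assumed).
# A C_out x C_in x K x K weight tensor unrolls to a (C_in * K^2) x C_out matrix;
# a crossbar with R rows holds at most R of those unrolled input rows.
def num_segments(c_in: int, k: int, rows: int) -> int:
    """Segments required: S = ceil(C_in * K^2 / R)."""
    return math.ceil(c_in * k * k / rows)

# Example: a 3x3 layer with 256 input channels mapped onto 256-row crossbars.
print(num_segments(c_in=256, k=3, rows=256))  # -> 9 segments
```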
In standard (vanilla) convolution (vConv), each segment $s$ produces a psum for each output channel $c$:

$$p_{s,c} = \sum_{i \in \mathcal{I}_s} w_{c,i}\, x_i,$$

where $\mathcal{I}_s$ is the set of unrolled input indices mapped to segment $s$, and the final output is $y_c = \sum_{s=1}^{S} p_{s,c}$. CADC introduces a dendritic nonlinearity, specifically rectification via ReLU, on each crossbar's output before accumulation:

$$\hat{p}_{s,c} = \phi(p_{s,c}), \qquad \phi(z) = \max(0, z).$$

The accumulated output is then $y_c = \sum_{s=1}^{S} \hat{p}_{s,c}$. Typically $\phi = \mathrm{ReLU}$, incurring no additional weight storage or computational cost. This zero-clamping function inside each crossbar outputs only non-negative psums to the next stage.
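A minimal numerical sketch of the two accumulation schemes, assuming random data and the segment layout above (array shapes and variable names are illustrative):

```python
import numpy as np

# Sketch of vConv vs. CADC accumulation for one output channel (names assumed).
rng = np.random.default_rng(0)

S, R = 9, 256                     # segments, crossbar rows per segment
x = rng.standard_normal(S * R)    # unrolled input activations
w = rng.standard_normal(S * R)    # unrolled weights for one output channel

segments = np.arange(S * R).reshape(S, R)                # row indices per segment
psums = np.array([w[idx] @ x[idx] for idx in segments])  # per-crossbar psums

y_vconv = psums.sum()                  # vanilla: accumulate raw psums
y_cadc = np.maximum(psums, 0.0).sum()  # CADC: ReLU each psum before accumulation
```

Note that `y_cadc` computes a different function than `y_vconv`; networks are trained with the dendritic nonlinearity in place rather than having it applied post hoc.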
2. Psum Sparsity: Analysis and Implications
Let the pre-rectification psum for segment $s$ and channel $c$ be $p_{s,c}$. For layer $\ell$, psum sparsity is defined as the fraction of psums clamped to zero:

$$\sigma_\ell = \frac{\left|\{(s,c) : p_{s,c} \le 0\}\right|}{S \cdot C_{out}}.$$

Averaging over output channels, only $(1-\sigma_\ell)\,S$ nonzero psums per channel remain. This sparsity directly reduces buffer and transfer overhead (fewer nonzero psums must be stored and moved) and also reduces accumulation overhead, since zero entries can be skipped, yielding proportional reductions in cycles and energy.
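A short sketch of measuring layer psum sparsity from a collected psum matrix; the $(S, C_{out})$ array layout and names are assumptions for illustration:

```python
import numpy as np

# Sketch: measure psum sparsity given a (S, C_out) matrix of pre-rectification
# psums collected during a forward pass (layout and names assumed).
def psum_sparsity(psums: np.ndarray) -> float:
    """Fraction of psums clamped to zero by the in-crossbar ReLU."""
    return float((psums <= 0).mean())

rng = np.random.default_rng(1)
psums = rng.standard_normal((9, 512))   # S=9 segments, 512 output channels
s = psum_sparsity(psums)                # ~0.5 for zero-mean psums
print(f"sparsity={s:.2f}, surviving psums per channel={(1 - s) * 9:.2f}")
```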
Empirically measured mean sparsities are:
- LeNet-5 (MNIST): 80%
- ResNet-18 (CIFAR-10): 54%
- VGG-16 (CIFAR-100): 66%
- SNN (DVS Gesture): 88%
Consequently, buffer and transfer energy reductions of up to 29.3% and accumulation energy reductions of 47.9% are achieved for ResNet-18 on CIFAR-10.
3. ADC Quantization Noise and Signal Integrity
In IMC, each ADC invocation introduces quantization error with variance $\sigma_q^2$. Conventional vConv accumulates this error across all $S$ segments:

$$\sigma_{\mathrm{vConv}}^2 = S\,\sigma_q^2.$$

Since CADC suppresses negative psums, only $(1-\sigma_\ell)\,S$ terms contribute:

$$\sigma_{\mathrm{CADC}}^2 = (1-\sigma_\ell)\,S\,\sigma_q^2.$$

Thus, root-mean-square noise is reduced by a factor of $\sqrt{1-\sigma_\ell}$. For instance, $\sigma_\ell = 54\%$ (ResNet-18) yields about one-third less accumulated noise, correlating with minimal accuracy degradation: only a 0.1% top-1 drop under 4-bit ADC quantization.
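The reduction factor can be sanity-checked directly; this is a worked check under the stated noise model, with the helper name assumed:

```python
import math

# CADC accumulates (1 - sparsity) * S independent quantization errors instead
# of S, so the RMS noise ratio relative to vConv is sqrt(1 - sparsity).
def rms_noise_ratio(sparsity: float) -> float:
    """CADC/vConv ratio of accumulated RMS ADC noise."""
    return math.sqrt(1.0 - sparsity)

# ResNet-18 on CIFAR-10: 54% psum sparsity -> ~0.68x the noise, i.e. ~1/3 less.
print(f"{rms_noise_ratio(0.54):.2f}")  # -> 0.68
```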
4. Experimental Results: Sparsity, Accuracy, and System Throughput
Sparsity and Classification Accuracy
CADC’s induced psum sparsity is highly correlated with downstream efficiency gains; it also has minimal impact on, or sometimes improves, classification accuracy. Measured statistics across various models and datasets are summarized below.
| Model–Dataset | Psum Sparsity | Accuracy Change (relative to vConv, best–worst) |
|---|---|---|
| LeNet-5 (MNIST) | 80% | +0.11% ~ +0.19% |
| ResNet-18 (CIFAR-10) | 54% | –0.04% ~ –0.27% |
| VGG-16 (CIFAR-100) | 66% | +0.99% ~ +1.60% |
| SNN (DVS Gesture) | 88% | –0.57% ~ +1.32% |
These results are consistent across the evaluated range of crossbar sizes. Even with aggressive negative-psum pruning, CADC typically matches or exceeds vConv's accuracy.
System-Level Performance
For ResNet-18 on CIFAR-10 using a 65nm SRAM-based IMC macro:
- Crossbar/MAC/ADC: 725 TOPS/W (4b I/O, 2b weight)
- End-to-end throughput: 2.15 TOPS at 200 MHz
- Energy efficiency: 40.8 TOPS/W (normalized to 65 nm, 1.1 V)
- Speedup vs prior SRAM-IMC designs: 11x–18x
- Energy-efficiency improvement: up to 22.9x
5. Architectural Implementation
CADC is realized on a twin-9T SRAM crossbar supporting ternary weights with decoupled read paths. The in-memory ADC (IMA) shares the bitcells, employing a ramp-based reference embedded in the conversion loop. The ReLU nonlinearity is implemented through word-line control and timing: if the crossbar's output voltage does not cross the ADC ramp threshold, the output is zeroed.
Further, only nonzero psums (tracked with a bitmask) are buffered post-ADC; downstream accumulator logic skips zeros by consulting the mask. The entire macro is compact (0.5 mm² at 65 nm), with the IMA occupying only 14.9% of the die, a smaller footprint than SAR-based or other conventional ADC alternatives.
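A behavioral sketch of this mask-based zero-skip path follows; the packed-buffer layout and function names are assumptions, and in hardware the skipping is performed by accumulator control logic rather than software:

```python
import numpy as np

# Sketch of the post-ADC zero-skip path (buffer layout and names assumed):
# only nonzero psums are buffered, alongside a one-bit-per-psum validity mask,
# and the accumulator consumes values only where the mask bit is set.
def pack_psums(psums: np.ndarray):
    mask = psums > 0                    # CADC guarantees psums >= 0 after ReLU
    return psums[mask], mask            # compacted buffer + bitmask

def masked_accumulate(values: np.ndarray, mask: np.ndarray) -> float:
    acc = 0.0
    vals = iter(values)
    for valid in mask:                  # accumulator consults the mask,
        if valid:                       # skipping zero entries entirely
            acc += next(vals)
    return acc

psums = np.maximum(np.random.default_rng(2).standard_normal(9), 0.0)
values, mask = pack_psums(psums)
assert np.isclose(masked_accumulate(values, mask), psums.sum())
```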
6. Design Trade-offs and Prospective Enhancements
Hardware overhead for CADC is minimal: the crossbar and ADC bitcells are reused, and the zero-mask and skip-control logic imposes a negligible area and power penalty. The choice of the dendritic nonlinearity $\phi$ is model-dependent: ReLU is optimal for typical CNNs, while a different rectification variant provided improvement for SNNs. CADC's psum sparsity is synergistic with any zero-compression or sparse-accumulation scheme and orthogonal to weight quantization or global pruning.
Potential future directions include learned or adaptive dendritic nonlinearities, finer-grained skipping mechanisms (e.g., per-bit), and expansion to alternative memory technologies such as RRAM or to other layer types (e.g., depthwise convolutions, transformer attention mechanisms). A plausible implication is that the core CADC principle could generalize to broader classes of in-memory compute and neuromorphic architectures.
7. Summary and Significance
Crossbar-Aware Dendritic Convolution leverages a biologically inspired ReLU-like nonlinearity applied at the granularity of crossbar-generated partial sums. By eliminating negative psums in-situ, CADC maximizes psum sparsity, reduces interconnect and accumulation workload, and mitigates ADC-related noise accumulation. These improvements translate to substantial empirical gains in throughput and energy efficiency, with robust model accuracy on representative benchmarks (Dong et al., 27 Nov 2025). The approach is compatible with standard digital and mixed-signal crossbars, incurs negligible hardware overhead, and is extensible to multiple network architectures and datasets.