
Equivariant-Aware Structured Pruning

Updated 28 November 2025
  • The paper demonstrates how preserving C4 rotation equivariance in CNNs enables robust model compression while aggressively reducing parameter count.
  • It decouples equivariant R2Conv layers from fully connected layers, applying saliency-based structured pruning and adaptive fine-tuning to recover accuracy.
  • Adaptive fine-tuning combined with dynamic INT8 quantization yields compact, transformation-invariant models ideal for edge deployment in tasks like satellite imagery analysis.

Equivariant-aware structured pruning is a principled methodology for model compression that preserves group equivariance—specifically rotation equivariance under the cyclic group $C_4$—within convolutional neural networks designed for geometric robustness, while aggressively reducing parameter count in a manner suited for edge deployment. It operates by explicitly decoupling group-theoretic structure from unstructured components of the network architecture, combining group-equivariant convolutional neural networks (G-CNNs), targeted structured pruning of fully connected layers, adaptive fine-tuning, and quantization. This yields compact, transformation-invariant models that maintain high accuracy and robustness under geometric transformations, with practical deployment relevance for domains such as satellite imagery analysis and geometric vision tasks (Alnemari, 21 Nov 2025).

1. Mathematical and Architectural Foundations

Group equivariant neural networks enforce symmetry constraints on network operations by leveraging representations of transformation groups, so that feature maps transform consistently under group actions. In this context, the cyclic group $C_4 = \{e, r, r^2, r^3\}$ (representing rotations by multiples of 90°) acts on input images as discrete rotations. For an input feature map $f: \mathbb{Z}^2 \to \mathbb{R}^{C_{\rm in}}$, an equivariant filter $\kappa: G \to \mathbb{R}^{C_{\rm in} \times C_{\rm out}}$ realizes a group convolution

$$[\kappa * f](g) = \sum_{h \in G} \kappa(h^{-1} g)\, f(h), \qquad g \in G$$

with the equivariance property

$$T_g[\kappa * f] = \kappa * T_g[f], \quad \forall g \in G$$

where $T_g f(h) = f(g^{-1} h)$.
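
For concreteness, the following minimal sketch implements this group convolution for scalar-valued functions on $C_4$ (a deliberate simplification; real layers act on multi-channel feature maps over $\mathbb{Z}^2$) and numerically verifies the equivariance identity:

```python
# Minimal sketch: group convolution and equivariance check for scalar functions on C4.
import numpy as np

N = 4  # |C4|: the rotation r^k is encoded as the integer k, composition is addition mod 4

def group_conv(kappa, f):
    # [kappa * f](g) = sum_h kappa(h^{-1} g) f(h); for C4, h^{-1} g corresponds to (g - h) mod 4
    return np.array([sum(kappa[(g - h) % N] * f[h] for h in range(N)) for g in range(N)])

def translate(g, f):
    # (T_g f)(h) = f(g^{-1} h), i.e. f((h - g) mod 4)
    return np.array([f[(h - g) % N] for h in range(N)])

rng = np.random.default_rng(0)
f, kappa = rng.normal(size=N), rng.normal(size=N)
for g in range(N):
    assert np.allclose(translate(g, group_conv(kappa, f)), group_conv(kappa, translate(g, f)))
print("T_g[kappa * f] == kappa * T_g[f] holds for all g in C4")
```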

The e2cnn library is used to implement group convolutions in practice, representing channels with regular representations of $C_4$ and encoding the algebraic action via permutation matrices $\rho_{\rm reg}(g)$. The key intertwining constraint on the filters is

$$\kappa(g \cdot x) = \rho_{\rm out}(g)\, \kappa(x)\, \rho_{\rm in}(g)^{-1}$$

where $\rho_{\rm in}$ and $\rho_{\rm out}$ are the representations of $C_4$ associated with the layer's input and output feature spaces. The R2Conv layers in e2cnn thereby guarantee that feature maps remain equivariant throughout the network (Alnemari, 21 Nov 2025).
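
As an illustration of how this constraint is used in practice, the sketch below builds a single R2Conv layer with e2cnn and checks equivariance under a 90° rotation. The field multiplicities, input size, and kernel size are arbitrary choices for illustration, not the architecture used in the paper:

```python
# Minimal sketch, assuming e2cnn is installed.
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

r2_act = gspaces.Rot2dOnR2(N=4)                                 # the C4 rotation group acting on the plane
in_type = enn.FieldType(r2_act, 3 * [r2_act.trivial_repr])      # e.g. RGB input: 3 trivial (scalar) fields
out_type = enn.FieldType(r2_act, 8 * [r2_act.regular_repr])     # 8 regular fields -> 8 * 4 = 32 channels
conv = enn.R2Conv(in_type, out_type, kernel_size=3, padding=1)  # filters satisfy the intertwining constraint

x = enn.GeometricTensor(torch.randn(1, 3, 32, 32), in_type)
y = conv(x)
# rotating the input by 90 degrees and convolving equals convolving and then rotating the output
y_from_rotated_input = conv(x.transform(1))
assert torch.allclose(y.transform(1).tensor, y_from_rotated_input.tensor, atol=1e-4)
```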

2. Layer Structure and Pruning Strategy

The architectural decomposition of equivariant CNNs, as instantiated in e2cnn, separates the R2Conv layers—which enforce $C_4$ equivariance—from conventional fully connected (FC) nn.Linear layers, which remain agnostic to geometric constraints. The equivariant-aware structured pruning protocol leaves all R2Conv layers unaltered, preventing symmetry breaking. Structured pruning is applied selectively to the FC layers, which typically dominate the parameter count but do not affect geometric consistency.

The pruning criterion for linear layers is predicated on neuron saliency, computed as the row-wise vector norm $s_i = \|W[i,:]\|_2$ for each output neuron $i$, where $W$ denotes the FC layer weight matrix. Neurons with $s_i$ below a data-driven threshold $\tau$ (set by the desired pruning ratio $p$) are removed. A binary keep-mask $M_i$ determines which neurons survive:

$$M_i = \mathbf{1}(s_i \geq \tau), \qquad \sum_i M_i = \lfloor (1 - p)\, C_{\rm out} \rfloor$$

The resulting pruned weight matrix $W'$ and bias vector $b'$ are dimensioned accordingly, and all downstream layer dimensions are adjusted to remain consistent. By construction, equivariance is preserved because the R2Conv filters, which enforce the $C_4$ group structure, remain untouched (Alnemari, 21 Nov 2025).
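
A minimal sketch of this saliency criterion for a single nn.Linear layer follows; the helper name and the way the surviving indices are propagated to the next layer are illustrative assumptions rather than the paper's implementation:

```python
# Minimal sketch of saliency-based structured pruning of one fully connected layer.
import torch
import torch.nn as nn

def prune_linear(layer: nn.Linear, p: float):
    """Keep the floor((1 - p) * C_out) output neurons with the largest L2 row norm."""
    saliency = layer.weight.detach().norm(p=2, dim=1)               # s_i = ||W[i, :]||_2
    n_keep = int((1.0 - p) * layer.out_features)                    # floor((1 - p) * C_out)
    keep_idx = torch.topk(saliency, n_keep).indices.sort().values   # surviving neurons (M_i = 1)
    pruned = nn.Linear(layer.in_features, n_keep, bias=layer.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[keep_idx])                 # W' = W[M, :]
        if layer.bias is not None:
            pruned.bias.copy_(layer.bias[keep_idx])                 # b' = b[M]
    return pruned, keep_idx

# keep_idx must also be used to drop the corresponding input columns of the next FC layer,
# e.g. next_fc.weight.data = next_fc.weight.data[:, keep_idx], so dimensions stay consistent.
fc, _ = prune_linear(nn.Linear(256, 128), p=0.5)
print(fc)  # Linear(in_features=256, out_features=64, bias=True)
```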

3. Adaptive Fine-Tuning and Quantization

Because structured pruning induces accuracy degradation—sometimes severe, especially at high pruning ratios—an adaptive fine-tuning schedule is triggered whenever the observed drop in post-pruning validation accuracy $\Delta = \mathcal{A}_{\rm pre} - \mathcal{A}_{\rm post}$ exceeds 2%. Fine-tuning proceeds with the Adam optimizer at a reduced learning rate ($\eta = 10^{-3}$), a ReduceLROnPlateau scheduler (factor 0.5, patience 10 epochs), and early stopping once recovery is satisfactory (drop below 1% or 2%), with a strict cap at 50 epochs.
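
A compact sketch of this schedule is given below, assuming hypothetical train_epoch and evaluate helpers and a single 1% recovery threshold:

```python
# Minimal sketch of the adaptive fine-tuning schedule (helpers are assumed, not defined here).
import torch

def adaptive_fine_tune(model, train_loader, val_loader, acc_pre, max_epochs=50):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.5, patience=10)
    for _ in range(max_epochs):
        train_epoch(model, train_loader, optimizer)   # one pass over the training data (assumed helper)
        acc_post = evaluate(model, val_loader)        # validation accuracy in percent (assumed helper)
        scheduler.step(acc_post)                      # plateau detection on validation accuracy
        if acc_pre - acc_post < 1.0:                  # stop once the drop has been recovered
            break
    return model
```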

After recovery, dynamic INT8 quantization is performed on all pruned FC layers. The quantization mapping is

$$q(x) = \mathrm{round}\!\left(\frac{x}{S}\right) + Z$$

with per-tensor scale $S$ and zero-point $Z$ chosen to minimize reconstruction error. PyTorch's dynamic quantization routines convert the float weights to INT8, often yielding a fourfold reduction in memory for these layers and, in some cases, slight gains in accuracy at inference time (Alnemari, 21 Nov 2025).
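
For intuition, the following sketch applies the affine mapping above to a single weight tensor using a simple min/max calibration; PyTorch's dynamic quantization selects $S$ and $Z$ internally, so this is purely illustrative:

```python
# Minimal numeric sketch of the affine INT8 mapping q(x) = round(x / S) + Z.
import torch

w = torch.randn(64, 128)                                  # a pruned FC weight tensor (illustrative shape)
qmin, qmax = -128, 127                                    # signed INT8 range
S = (w.max() - w.min()) / (qmax - qmin)                   # per-tensor scale
Z = int(qmin - torch.round(w.min() / S))                  # zero-point: w.min() maps to qmin
q = torch.clamp(torch.round(w / S) + Z, qmin, qmax).to(torch.int8)   # q(x) = round(x / S) + Z
w_hat = (q.float() - Z) * S                               # dequantized reconstruction
print("max reconstruction error:", (w - w_hat).abs().max().item())   # bounded by roughly S / 2
```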

4. Implementation Pipeline and Experimental Protocol

The complete end-to-end pipeline consists of training a $C_4$-equivariant R2CNN, knowledge distillation into a more efficient “student” architecture, equivariant-aware structured pruning of FC layers, adaptive fine-tuning as indicated by validation drops, and INT8 quantization. Final models are validated and deployed with minimal loss of geometric robustness or predictive performance.

A standard pseudocode flow is:

```python
# 1. Train the C4-equivariant teacher and distill it into an efficient student
model_eq = R2EquivariantCNN(group=C4)
train(model_eq)
student = EfficientTarget()
distill(teacher=model_eq, student=student)

# 2. Equivariant-aware structured pruning: FC layers only; R2Conv layers remain intact
acc_pre = evaluate(student)
for layer in student.layers:
    if is_linear(layer):
        prune_linear(layer, ratio=p, saliency_norm=2)

# 3. Adaptive fine-tuning, triggered when the post-pruning accuracy drop exceeds 2%
delta = acc_pre - evaluate(student)
if delta > 2.0:
    fine_tune(student, lr=1e-3,
              scheduler=ReduceLROnPlateau(factor=0.5, patience=10),
              early_stop_drop=(1.0, 2.0), max_epochs=50)

# 4. Dynamic INT8 quantization of the pruned FC layers, then evaluation and deployment
quantized = torch.quantization.quantize_dynamic(
    student.cpu(), {nn.Linear}, dtype=torch.qint8)
evaluate(quantized)
deploy(quantized)
```

Benchmarks include the EuroSAT, Rotated MNIST, and CIFAR-10 datasets, with architectures spanning base CNNs (6×Conv2d + 2×FC) and G-CNNs ($C_4$; 2×R2Conv + 2×FC). Hyperparameters are selected for efficiency and convergence: Adam with an initial learning rate of 0.01, batch size 128, 50 training epochs with early stopping, distillation temperature $T = 4$, and pruning ratios of 30% and 50% applied to the FC layers (Alnemari, 21 Nov 2025).
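
The distillation step itself can be sketched with a standard Hinton-style soft-target loss at temperature $T = 4$; the $\alpha$ weighting between soft and hard targets below is an illustrative assumption, not a value reported in the paper:

```python
# Minimal sketch of a soft-target distillation loss at temperature T = 4.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # KL between temperature-softened teacher and student distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)        # standard supervised term
    return alpha * soft + (1.0 - alpha) * hard

# usage: loss = distillation_loss(student(x), teacher(x).detach(), y)
```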

5. Quantitative Results and Analysis

Empirical evaluations reveal substantial parameter reduction with minimal loss of accuracy and maintained geometric robustness. The following summarizes the EuroSAT pipeline:

| Stage | Accuracy (%) | Size (MB) | Reduction |
|---|---|---|---|
| Eq. CNN ($C_4$), original | 97.37 | 2.03 | – |
| + Distillation (29.3%↓ params) | 93.33 | 1.43 | 29.3% |
| + 30% pruning (before FT) | 23.22 | 1.43 | 29.3% |
| + Adaptive FT (30%) | 93.85 | 1.43 | 29.3% |
| + 50% pruning (before FT) | 10.41 | 1.01 | 50.4% |
| + Adaptive FT (50%) | 93.89 | 1.01 | 50.4% |
| + INT8 quant (50% pruned) | 94.52 | 0.06* | 87.6% |

*Assuming 1 byte per INT8 parameter.

For cross-dataset performance (Top-1 accuracy):

| Model | RotMNIST | CIFAR-10 | EuroSAT |
|---|---|---|---|
| CNN (baseline) | 98.95 | 80.71 | 93.81 |
| G-CNN ($C_4$) | 99.40 | 82.26 | 97.37 |
| G-CNN + prune (50%) | 98.80 | 81.71 | 93.89 |
| G-CNN + prune + quant | 98.71 | 81.21 | 94.52 |

Key findings include:

  • Distillation on its own delivers a 29.3% parameter reduction at only a 4.04% absolute accuracy drop.
  • Pruning 50% of FC neurons initially causes a pronounced collapse in accuracy (to 10.41% on EuroSAT), but adaptive fine-tuning fully recovers it to 93.89%.
  • Quantized models after pruning and fine-tuning exhibit 97.0% retention of original accuracy with an 87.6% overall parameter reduction (Alnemari, 21 Nov 2025).

6. Significance and Theoretical Guarantees

Equivariant-aware structured pruning achieves mathematically certified symmetry preservation by restricting pruning to layers that do not participate in group-theoretic transformations, thereby maintaining $C_4$ equivariance in the final model. Compression is effected without altering the R2Conv layers, ensuring that transformation invariance is retained throughout—even for extreme pruning ratios and quantization. All parameter reduction is extracted from the linear "head" layers, which typically comprise the majority of parameters but do not mediate geometric equivariance. The framework thus bridges the gap between group-theoretic neural design and deployment constraints in real-world edge scenarios, providing reproducibility, significant compression, and geometric robustness in a unified pipeline (Alnemari, 21 Nov 2025).
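
One straightforward way to audit this invariance after compression is to compare predictions on rotated copies of each input; the check below is a generic sketch assuming a standard PyTorch model and data loader:

```python
# Minimal sketch of a post-compression rotation-consistency check.
import torch

@torch.no_grad()
def rotation_consistency(model, loader):
    agree, total = 0, 0
    for x, _ in loader:
        base = model(x).argmax(dim=1)
        for k in (1, 2, 3):                               # rotations by 90, 180, 270 degrees
            pred = model(torch.rot90(x, k, dims=(2, 3))).argmax(dim=1)
            agree += (pred == base).sum().item()
            total += base.numel()
    return agree / total                                  # 1.0 means predictions are fully rotation-invariant
```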

A plausible implication is that, for tasks demanding invariance to discrete planar rotations, equivariant-aware structured pruning can yield models whose deployment on resource-constrained hardware is both principled and robust, surpassing non-equivariant efficient architectures in transformation-robust settings.
