CompreSSM: Efficient State Space Model Compression
- CompreSSM is a suite of techniques for compressing state space models using balanced truncation and selective gating, reducing memory and computation while maintaining accuracy.
- It employs in-training model order reduction by discarding low-energy state components, achieving robust performance with significantly reduced model dimensions.
- The method leverages information-theoretic rate-distortion trade-offs to dynamically balance compression and expressivity in diverse sequence modeling applications.
CompreSSM refers to methods and theoretical constructs for compressing state space models (SSMs) to minimize memory, computation, or storage without incurring significant loss in expressivity or performance. In contemporary literature, CompreSSM specifically denotes two overlapping but distinct research avenues: (1) in-training model order reduction via balanced truncation for discrete linear SSMs, most notably in "The Curious Case of In-Training Compression of State Space Models" (Chahine et al., 3 Oct 2025), and (2) selective memory compression using gating and information-theoretic rate-distortion trade-offs, framed as "Compressive Selective State Space Models" (Bhat, 2024). Both approaches target efficient long-context sequence modeling, providing algorithmic and theoretical tools to overcome the computational and representational bottlenecks typical of large SSMs.
1. State Space Model Compression: Problem Formulation
State space models process sequential data using the linear recurrence $x_{t+1} = A x_t + B u_t$, $y_t = C x_t$, where $x_t \in \mathbb{R}^n$ is the hidden state, $u_t$ the input, and $y_t$ the output. A high-dimensional state (large $n$) is necessary for modeling long-range dependencies, but creates per-step update and storage costs quadratic in $n$. CompreSSM addresses the need to reduce $n$, the state-space dimension, while maintaining or, in some cases, improving performance relative to a scratch-trained low-dimensional counterpart.
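The recurrence and its per-step cost can be illustrated with a minimal NumPy sketch; all dimensions, matrices, and the `ssm_step` helper below are toy values for illustration, not from either paper:

```python
import numpy as np

def ssm_step(A, B, C, x, u):
    """One step of the linear recurrence x' = A x + B u, y = C x'.

    The A @ x product dominates: O(n^2) work and O(n^2) storage for A
    per step, which is what motivates reducing the state dimension n.
    """
    x_next = A @ x + B @ u
    y = C @ x_next
    return x_next, y

# Toy dimensions: state n = 4, scalar input and output.
rng = np.random.default_rng(0)
n = 4
A = 0.9 * np.eye(n)                 # stable dynamics
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

x = np.zeros((n, 1))
for t in range(3):
    u = np.ones((1, 1))
    x, y = ssm_step(A, B, C, x, u)
```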
Model order reduction in SSMs is classically approached via balanced truncation, which exploits the controllability and observability structure encoded in the system matrices $(A, B, C)$ to identify and discard low-energy state components. Modern techniques further introduce adaptive or selective compression mechanisms, such as input-conditioned gating, offering additional avenues for dynamic, data-driven memory savings (Bhat, 2024).
2. Balanced Truncation and In-Training Model Reduction
Balanced truncation is grounded in control theory and leverages the Hankel singular values (HSVs) of an LTI system to quantify the joint controllability and observability of state dimensions. The controllability Gramian $P$ and observability Gramian $Q$ are given by the discrete Lyapunov equations

$$A P A^\top - P + B B^\top = 0, \qquad A^\top Q A - Q + C^\top C = 0,$$

and the HSVs $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$ are the sorted square roots of the eigenvalues of $PQ$. Balanced realization finds a similarity transform $T$ rendering the transformed Gramians diagonal and equal, allowing truncation of the state to the $r$ directions with largest HSVs, with $r$ set by an energy-threshold hyperparameter $\varepsilon$ (Chahine et al., 3 Oct 2025). The truncated system inherits the classical error guarantee $\|G - G_r\|_{\mathcal{H}_\infty} \le 2 \sum_{i > r} \sigma_i$.
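The Gramian and HSV computation can be sketched in a few lines of NumPy; the toy system below and the vectorized Lyapunov solver are illustrative, not the paper's implementation:

```python
import numpy as np

def solve_dlyap(A, M):
    """Solve the discrete Lyapunov equation A X A^T - X + M = 0
    via vectorization: (I - A kron A) vec(X) = vec(M)."""
    n = A.shape[0]
    vec_x = np.linalg.solve(np.eye(n * n) - np.kron(A, A), M.ravel())
    return vec_x.reshape(n, n)

# Toy Schur-stable system (all values assumed for illustration).
rng = np.random.default_rng(1)
n = 6
A = np.diag(rng.uniform(0.3, 0.95, n))   # |eigenvalues| < 1
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))

P = solve_dlyap(A, B @ B.T)      # controllability Gramian
Q = solve_dlyap(A.T, C.T @ C)    # observability Gramian

# Hankel singular values: sorted square roots of eig(P Q).
hsv = np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q))))[::-1]
```

Since $PQ$ is similar to the symmetric positive semidefinite $P^{1/2} Q P^{1/2}$, its eigenvalues are real and nonnegative; the `np.abs` only suppresses floating-point imaginary residue.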
CompreSSM (Chahine et al., 3 Oct 2025) uniquely integrates this reduction into stochastic gradient descent training. At scheduled checkpoints during early optimization (e.g., within the first 10% of steps), blockwise reduction is triggered, collapsing $n$ by computing balancing transforms and discarding low-impact directions. This dynamic in-training reduction leads to more robust and performant small models than standard slim-from-scratch training.
3. Selective Gating and Information-Theoretic Compression
A complementary paradigm, introduced in (Bhat, 2024), views compression as dynamic, selective retention of subspaces through gating: $\tilde{x}_t = g_t \odot x_t$, where $g_t \in [0,1]^n$ is an input- and state-dependent vector of gates and $\odot$ denotes elementwise multiplication. This mechanism implements a form of adaptive memory compression, reducing the effective dimensionality of the retained state at each time step. The formalism is grounded in information theory, analyzing the trade-off between the mutual information $I(\tilde{x}_t; u_{1:t})$ retained in the compressed state $\tilde{x}_t$ and the input history $u_{1:t}$. Tunable regularization on $g_t$ promotes sparsity, directly managing this trade-off.
Theoretical results establish mean-square convergence under mild conditions and derive explicit rate-distortion bounds linking achievable memory savings to information retention.
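A small numeric sketch of the gated update follows; the gate values are illustrative, and the threshold-count proxy for effective dimensionality is an assumed definition (the paper's exact formula is not reproduced here):

```python
import numpy as np

# Gated state update x~_t = g_t (elementwise) x_t.
x = np.array([1.2, -0.7, 0.4, 2.1, -1.5, 0.3, 0.9, -0.2])
g = np.array([0.95, 0.90, 0.02, 0.80, 0.01, 0.00, 0.85, 0.03])

x_tilde = g * x                  # compressed state

# Assumed proxy for effective dimensionality: count of gates
# above a small threshold tau.
tau = 0.1
n_eff = int((g > tau).sum())     # 4 of 8 state components retained
```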
4. Algorithmic Workflow and Practical Implementation
The in-training CompreSSM method (Chahine et al., 3 Oct 2025) is realized as follows:
- At each designated training checkpoint, extract SSM weights per block.
- Compute the Gramians ($P$, $Q$) and HSVs.
- Determine the reduced order $r$ satisfying the HSV energy threshold.
- Compute the balancing similarity transform $T$.
- Transform to balanced coordinates, truncate to order $r$, and write back the reduced matrices.
- Resume training with reduced SSM.
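The checkpoint steps above can be sketched end-to-end with the standard square-root balanced-truncation algorithm; the toy system, the `balanced_truncate` helper, and the threshold semantics (keep the smallest $r$ whose discarded HSV mass is at most $\varepsilon$) are assumptions for illustration:

```python
import numpy as np

def solve_dlyap(A, M):
    # Discrete Lyapunov solve A X A^T - X + M = 0 via vectorization.
    n = A.shape[0]
    return np.linalg.solve(np.eye(n * n) - np.kron(A, A), M.ravel()).reshape(n, n)

def balanced_truncate(A, B, C, eps=0.05):
    """One checkpoint's blockwise reduction (square-root method)."""
    P = solve_dlyap(A, B @ B.T)                  # controllability Gramian
    Q = solve_dlyap(A.T, C.T @ C)                # observability Gramian
    Lp = np.linalg.cholesky(P)                   # square-root factors
    Lq = np.linalg.cholesky(Q)
    U, s, Vt = np.linalg.svd(Lq.T @ Lp)          # s = Hankel singular values
    # Smallest r with discarded HSV mass <= eps (assumed semantics).
    total = s.sum()
    r = len(s)
    for k in range(len(s)):
        if s[k:].sum() / total <= eps:
            r = k
            break
    S = np.diag(s[:r] ** -0.5)
    T = S @ U[:, :r].T @ Lq.T                    # balancing projection (r x n)
    Ti = Lp @ Vt[:r, :].T @ S                    # right inverse embedding (n x r)
    return T @ A @ Ti, T @ B, C @ Ti, r

# Toy stable block to reduce.
rng = np.random.default_rng(3)
n = 8
A = np.diag(rng.uniform(0.2, 0.9, n))
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))
Ar, Br, Cr, r = balanced_truncate(A, B, C, eps=0.05)
```

Training would then resume with the reduced $(A_r, B_r, C_r)$ in place of the original block.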
This process is integrated into standard machine learning frameworks, e.g., PyTorch, with utility modules handling schedule definition, matrix operations, and Lyapunov solvers. Hyperparameters include the HSV energy threshold $\varepsilon$ (e.g., 0.01–0.2), the number of checkpointed compressions (3–5), and the minimum allowed reduction fraction (default 0.95). A single epoch's learning-rate warmup typically hosts all compression checkpoints.
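A schedule configuration consistent with these ranges might look as follows; the class and field names are hypothetical, not taken from the reference implementation:

```python
from dataclasses import dataclass

@dataclass
class CompressionSchedule:
    # Hypothetical names; default values follow the ranges quoted above.
    energy_threshold: float = 0.05      # HSV energy threshold (0.01-0.2)
    num_checkpoints: int = 4            # checkpointed compressions (3-5)
    min_reduction_fraction: float = 0.95
    warmup_fraction: float = 0.10       # trigger within first 10% of steps

    def checkpoint_steps(self, total_steps):
        """Evenly space compression checkpoints inside the warmup window."""
        end = int(self.warmup_fraction * total_steps)
        stride = max(end // self.num_checkpoints, 1)
        return [stride * (k + 1) for k in range(self.num_checkpoints)]
```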
The selective gating CompreSSM approach (Bhat, 2024) implements gating networks parameterized by differentiable functions (e.g., a sigmoid of affine maps of the current state and input), paired with regularization for controlled sparsity. Hyperparameters target the desired rate-distortion envelope, and the observed effective state dimension is routinely an order of magnitude beneath the full $n$.
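A sigmoid-of-affine gating network of this kind can be sketched as below; the class, its parameterization, and the penalty weight are assumptions consistent with the description, not the paper's exact architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GateNetwork:
    """Gates g_t = sigmoid(W [x_t; u_t] + b), values in (0, 1)."""

    def __init__(self, n, m, seed=0):
        # n: state dimension, m: input dimension.
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n, n + m)) * 0.1
        self.b = np.zeros(n)

    def __call__(self, x, u):
        # Input- and state-dependent gate vector.
        return sigmoid(self.W @ np.concatenate([x, u]) + self.b)

    def l1_penalty(self, g, lam=1e-3):
        # Sparsity-promoting regularizer on the gate activations.
        return lam * np.sum(np.abs(g))
```

During training, the `l1_penalty` term would be added to the task loss so that gradient descent drives most gates toward zero, shrinking the effective state dimension.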
5. Empirical Validation and Performance Characteristics
Empirically, in-training CompreSSM (Chahine et al., 3 Oct 2025) produces SSMs that are both smaller and more expressive than those directly trained at small $n$. On CIFAR-10, for instance, an SSM trained at full state dimension achieves 86.5% accuracy; CompreSSM compresses it to $n = 57$ (84.4%, a 6.2 pp gain over the scratch-trained $n = 57$ baseline) with reduced training time. On ListOps, a final $n \approx 57$ yields 48.3% (CompreSSM) versus 43.4% (vanilla). MNIST sees 95.9% ($n \approx 13$), compared to 92.6% for small-from-scratch.
Selective gating SSMs (Bhat, 2024) achieve accuracy and memory trade-offs surpassing standard RNNs/GRUs/LSTMs and classical SSMs, with reported inference speedups and substantial memory reductions at matching or higher accuracy across time-series, NLP, and signal tasks. The "gate-off" ablation confirms that the gating mechanism is the primary driver of compression without accuracy degradation.
A summary table of performance comparisons appears below.
| Dataset | CompreSSM Accuracy | Baseline/Small SSM | Memory (MB, CompreSSM) | Memory (MB, Baseline) |
|---|---|---|---|---|
| CIFAR-10 | 84.4% (n = 57) | 78.2% (n = 57) | — | — |
| ListOps | 48.3% (n ≈ 57) | 43.4% (n ≈ 57) | — | — |
| MNIST | 95.9% (n ≈ 13) | 92.6% (n ≈ 13) | — | — |
| Time-Series [Selective] | 92.1% | 90.3% (LSTM) | 250 | 400 |
| NLP [Selective] | 85.6% | 82.5% (LSTM) | 210 | 360 |
6. Theoretical Guarantees, Limitations, and Extensions
Balanced truncation guarantees an input-output error bounded by twice the sum of the discarded HSVs (see Antoulas, §7), and empirical studies support the stability and monotonic ordering of HSVs under gradient-based parameter updates. Selective gating models possess mean-square convergence under mild norm and Lipschitz assumptions and provide rate-distortion-theoretic performance bounds.
Practical limitations include sensitivity to hyperparameters (especially for gating and compression thresholds), theoretical non-constructiveness of rate-distortion inverses, and restricted applicability of linear analysis to nonlinear SSMs. Extensions under consideration include nonlinear and hybrid SSMs, adaptive gating for online learning, multi-task selective gating, and integration of constant-time SSM blocks for ultra-long-context applications (Chahine et al., 3 Oct 2025, Bhat, 2024).
7. Code Availability and Usage Guidelines
A reference implementation for in-training CompreSSM is available at github.com/camail-official/compressm (Chahine et al., 3 Oct 2025). Key components manage compression scheduling, blockwise model reduction, and fast Lyapunov-solving routines. Best practices include setting early-warmup reduction intervals, tuning the energy threshold and minimum reduction fraction based on validation, and employing blockwise processing for models with many SSM layers.
For selective compression, the regularization magnitudes governing gate sparsity and the gating function architecture should be tuned via validation in accord with target rate-distortion profiles. Sparsity-promoting penalties and restriction to small Lipschitz constants are advised for convergence stability (Bhat, 2024).
CompreSSM, viewed both as a suite of practical methods and a set of theoretical bounds, forms a rigorous, high-performance compression strategy for modern SSM-based sequence models.