SEANet-Style Vector-Quantizer
- The paper introduces BrainTokMix, a SEANet-style vector-quantizer that achieves end-to-end causal compression of high-dimensional MEG data with a 17× compression ratio.
- It employs four-stage residual vector quantization with 16,384-codeword codebooks and causal convolutions to tokenize spatiotemporal neural signals efficiently.
- Empirical results show high fidelity with MAE of 0.203 and PCC of 0.944, demonstrating stable, long-context generation and effective Transformer integration.
A SEANet-style vector-quantizer, exemplified by the BrainTokMix architecture, is a multi-stage residual vector quantization (RVQ) system designed to encode high-dimensional, multichannel time series such as magnetoencephalography (MEG) recordings into discrete token streams suitable for autoregressive sequence modeling. Originating as a simplified variant of the SEANet codec used in prior work (notably BrainOmni), BrainTokMix achieves causality, computational efficiency, and direct end-to-end compression of spatiotemporal neural data. Its innovations address encoder/decoder structure, codebook interaction, and seamless integration with large-scale decoder-only Transformers for next-token prediction and generative modeling of neurophysiological signals (Csaky, 28 Jan 2026).
1. Architecture and Structural Modifications
The BrainTokMix quantizer operates on MEG segments $X \in \mathbb{R}^{C \times T}$, where $C$ is the number of source-space channels and $T = 1024$ (10.24 s at 100 Hz). It employs a strictly causal SEANet encoder composed exclusively of convolutional layers and temporal downsampling (by a factor of 4), yielding a latent output with $T' = 256$ time steps. The latent channels are reshaped into $S = 4$ parallel "neuro-streams" (each carrying an equal share of the latent dimension).
Key structural differences from the original SEANet include:
- Removal of per-sensor attention and sensor embedding modules.
- Elimination of LSTM temporal or sensor-wise bottlenecks.
- Exclusively multichannel, causal convolutions for channel mixing in both encoder and decoder.
This design yields a model that is approximately 3× faster to train, simplifies the architecture substantially, and maintains end-to-end causality for real-time applications (Csaky, 28 Jan 2026).
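The causal-convolution design above can be sketched compactly. The snippet below is a minimal illustration of a strictly causal conv stack with 4× temporal downsampling, not the actual BrainTokMix architecture: layer widths, kernel sizes, and the `tanh` nonlinearity are illustrative assumptions.

```python
import numpy as np

def causal_conv1d(x, w, stride=1):
    """Causal 1D convolution: output at step t depends only on inputs <= t.
    x: (C_in, T), w: (C_out, C_in, k). Left-pads by (k - 1), so no future leakage."""
    c_out, c_in, k = w.shape
    xp = np.pad(x, ((0, 0), (k - 1, 0)))         # pad the past side only
    t_out = (x.shape[1] - 1) // stride + 1
    y = np.zeros((c_out, t_out))
    for t in range(t_out):
        seg = xp[:, t * stride : t * stride + k]  # window ending at input t*stride
        y[:, t] = np.tensordot(w, seg, axes=([1, 2], [0, 1]))
    return y

# Toy "encoder": a channel-mixing conv followed by a stride-4 downsampling conv,
# mirroring the paper's 4x temporal downsampling (sizes are illustrative).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))                  # 8 channels, 64 samples
w1 = rng.standard_normal((16, 8, 5)) * 0.1
w2 = rng.standard_normal((16, 16, 4)) * 0.1
h = np.tanh(causal_conv1d(x, w1))
z = causal_conv1d(h, w2, stride=4)                # latent at 1/4 the input rate
```

Because every window ends at the current sample, perturbing future inputs leaves earlier latent frames unchanged, which is the property that makes real-time streaming use possible.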
2. Mathematical Formulation and Quantization Process
Quantization is performed via a four-stage ($Q = 4$) residual vector quantizer. For a latent vector $z$ (one stream at one time step), the quantization proceeds as follows:
- For each stage $q = 1, \dots, Q$, a codebook $\mathcal{C}_q = \{\mathbf{c}_{q,1}, \dots, \mathbf{c}_{q,K}\}$ with $K = 16{,}384$ codewords is used.
- At each stage, the closest codeword is assigned by minimizing the squared Euclidean distance to the current residual: $k_q = \arg\min_k \lVert r_{q-1} - \mathbf{c}_{q,k} \rVert_2^2$, with $r_0 = z$ and $r_q = r_{q-1} - \mathbf{c}_{q,k_q}$.
- The quantized representation is $\hat{z} = \sum_{q=1}^{Q} \mathbf{c}_{q,k_q}$,
and the quantized tensor is passed through the causal SEANet decoder.
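The staged refinement above can be sketched in a few lines. Codebook size and latent dimension below are toy values, not the paper's $K = 16{,}384$:

```python
import numpy as np

def rvq_encode(z, codebooks):
    """Residual VQ: each stage quantizes the residual left by the previous one.
    z: (D,) latent vector; codebooks: list of (K, D) arrays.
    Returns the per-stage indices and the summed quantized vector z_hat."""
    residual = z.copy()
    z_hat = np.zeros_like(z)
    indices = []
    for C in codebooks:
        d2 = ((C - residual) ** 2).sum(axis=1)  # squared distance to every codeword
        k = int(d2.argmin())
        indices.append(k)
        z_hat += C[k]
        residual -= C[k]                         # leftover goes to the next stage
    return indices, z_hat

rng = np.random.default_rng(1)
D, K, Q = 8, 32, 4
codebooks = [rng.standard_normal((K, D)) for _ in range(Q)]
z = rng.standard_normal(D)
idx, z_hat = rvq_encode(z, codebooks)
```

Each time step of each stream thus emits $Q$ integer indices, and decoding only needs the sum of the selected codewords.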
Training minimizes a composite loss combining time-domain reconstruction, correlation, spectral, and commitment terms:
- $\mathcal{L} = \mathcal{L}_{\text{rec}} + \lambda_{\text{pcc}}\,(1 - \text{PCC}) + \lambda_{\text{mag}}\,\mathcal{L}_{\text{mag}} + \lambda_{\text{ph}}\,\mathcal{L}_{\text{ph}} + \beta\,\mathcal{L}_{\text{commit}}$
- where PCC is the channel-averaged Pearson correlation between input and reconstruction, and $\mathcal{L}_{\text{mag}}$ and $\mathcal{L}_{\text{ph}}$ are norms on the FFT magnitude and phase, respectively.
- $\mathcal{L}_{\text{commit}} = \lVert z - \text{sg}(\hat{z}) \rVert_2^2$ is a commitment loss using the stop-gradient operator $\text{sg}(\cdot)$.
Codebook updates use the straight-through estimator: during the backward pass, gradients flow through the quantization bottleneck as if it were the identity, i.e. $\partial \mathcal{L} / \partial z \approx \partial \mathcal{L} / \partial \hat{z}$. The codebooks themselves are updated using the gradients of the codebook (quantization) term together with the reconstruction and frequency losses (Csaky, 28 Jan 2026).
3. Data Preprocessing and Discrete Tokenization
The end-to-end pipeline for converting MEG to tokens comprises:
- Per-session preprocessing: interference rejection (Maxwell or gradient-based), causal line-noise notch, causal 1–50 Hz bandpass, resampling to 100 Hz, bad-channel interpolation, and projection to a standard anatomical space (fsaverage source space, parcellated into regions of interest).
- Standardization (channel-wise median/IQR scaling) and robust clipping of extreme values.
- Segmentation into "good" segments (≥ 60 s), with sessions retained only if ≥ 80% of windows are "good".
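The causal filtering and robust-scaling steps can be sketched with SciPy; the MEG-specific stages (Maxwell filtering, source projection, bad-channel interpolation) need dedicated tooling and are omitted here. Filter orders and the clipping threshold of ±5 are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, iirnotch, lfilter, resample_poly

def preprocess(x, fs, line_hz=50.0, clip=5.0):
    """Causal per-channel cleanup sketch: line-noise notch, 1-50 Hz bandpass,
    resample to 100 Hz, median/IQR standardization, robust clipping.
    x: (channels, samples) at sampling rate fs (Hz)."""
    b_n, a_n = iirnotch(line_hz, Q=30.0, fs=fs)
    x = lfilter(b_n, a_n, x, axis=1)          # lfilter runs forward-only: causal
    b, a = butter(4, [1.0, 50.0], btype="band", fs=fs)
    x = lfilter(b, a, x, axis=1)
    x = resample_poly(x, up=100, down=int(fs), axis=1)   # down to 100 Hz
    med = np.median(x, axis=1, keepdims=True)
    iqr = np.subtract(*np.percentile(x, [75, 25], axis=1)).reshape(-1, 1)
    x = (x - med) / (iqr + 1e-12)             # robust channel-wise scaling
    return np.clip(x, -clip, clip)            # clipping threshold is assumed

fs = 1000
t = np.arange(0, 2.0, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)  # 10 Hz + line noise
clean = preprocess(raw[None, :], fs)
```

Using `lfilter` rather than `filtfilt` keeps every filtering stage causal, matching the paper's requirement that no future samples leak into the tokens.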
For each segment, non-overlapping windows of 10.24 s are tokenized: each window is encoded, quantized, and the RVQ indices are collected. For $S = 4$ streams, $T' = 256$ latent time steps, and $Q = 4$ stages, this yields $S \cdot T' \cdot Q = 4096$ tokens per window, corresponding to a tokenization rate of 400 tokens/sec, a compression ratio of approximately 17× relative to a raw flattened sample stream (Csaky, 28 Jan 2026).
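The token bookkeeping can be checked directly. Note that the channel count used for the compression ratio below is inferred from the reported ~17× figure (68 channels × 1024 samples / 4096 tokens = 17), not stated explicitly above:

```python
# Token-rate arithmetic for one 10.24 s window at 100 Hz.
S, T_latent, Q = 4, 256, 4            # streams, latent time steps, RVQ stages
window_s = 10.24
tokens_per_window = S * T_latent * Q  # 4096
rate = tokens_per_window / window_s   # 400 tokens/sec

# ~17x compression over the raw flattened stream implies ~68 source channels
# (an inference from the reported ratio, not a value given in the text).
raw_samples = 68 * 1024               # 1024 samples per window at 100 Hz
compression = raw_samples / tokens_per_window
```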
4. Integration with Transformer Architectures
Tokenized outputs are flattened from their initial (stream, time, stage) grid structure to a 1D sequence
$u = (u_1, u_2, \dots, u_{S \cdot T' \cdot Q})$, with each token $u_i \in \{1, \dots, K\}$.
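Flattening the grid is a plain reshape. The axis order shown here (time-major with the RVQ stage innermost, so consecutive tokens cycle through the stages) is an assumption about the serialization, chosen to be consistent with the cyclic head tying described below:

```python
import numpy as np

S, Tq, Q = 4, 256, 4
idx_grid = np.arange(S * Tq * Q).reshape(S, Tq, Q)  # stand-in RVQ indices
# Serialize as (time, stream, stage): at each latent step, emit every stream's
# Q stage indices in order, so stages cycle fastest along the 1D sequence.
seq = idx_grid.transpose(1, 0, 2).reshape(-1)
```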
Transformer integration details:
- Separate embedding tables ($Q = 4$, one per RVQ stage).
- Multimodal rotary position embeddings (MRoPE) across the stream, time, and stage axes to enable tri-axial attention.
- Decoder head tying: the output softmax head for stage $q$ is weight-tied to the embedding table for stage $q + 1$ (cyclically, with stage $Q$ wrapping to stage 1).
- Vocabulary size: $K = 16{,}384$ per stage.
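One way to realize the per-stage tables and cyclic tying, sketched with plain arrays (the model dimension is illustrative, and tying-by-reference stands in for shared parameters in a real framework):

```python
import numpy as np

rng = np.random.default_rng(2)
K, d_model, Q = 16_384, 64, 4
# One embedding table per RVQ stage.
embed = [rng.standard_normal((K, d_model)) * 0.02 for _ in range(Q)]
# Cyclic tying: the head applied after a stage-q token predicts a
# stage-((q+1) mod Q) token, so it shares that next stage's embedding table.
head = [embed[(q + 1) % Q] for q in range(Q)]

def logits_for_next(h, q):
    """h: (d_model,) hidden state at a stage-q position -> logits over next stage."""
    return head[q] @ h
```

Tying each head to the embedding of the stage it predicts keeps the total parameter count at $Q$ tables rather than $2Q$.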
The model is trained on the CamCAN and OMEGA datasets, using a next-token cross-entropy objective and the AdamW optimizer. For generation, 1-minute context windows (61.44 s, 24,576 tokens) are used, and sampling rolls out up to 4 minutes, with overlap-add reconstruction from the decoded MEG windows (Csaky, 28 Jan 2026).
5. Empirical Performance and Ablation Results
On held-out MOUS data, the quantizer achieves:
- Mean absolute error (MAE): 0.203
- Pearson correlation coefficient (PCC): 0.944
- FFT amplitude error: 0.0835
- Codebook usage perplexity:
Power spectral density (PSD) and spatial covariance reconstructions closely approximate the true MEG signals, with mild attenuation above 40 Hz. Long-horizon rollouts (4 min beyond a 1 min prompt) exhibit persistent on-manifold stability, with the out-of-envelope rate (OER) remaining under 10–20%. Conditional specificity is demonstrated by significant prefix-divergence gaps under prompt-swap controls; e.g., the covariance-distance gap vs. prompt-swap after 4 min is [0.063, 0.173], and vs. real–real [0.046, 0.135]. Shortening the context window (from 61.44 s to 30.72 s) increases OER and reduces the prefix-swap gap on all metrics, indicating reduced generative fidelity with decreased context (Csaky, 28 Jan 2026).
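The headline fidelity metrics are straightforward to compute; a channel-averaged version matching the definitions used here:

```python
import numpy as np

def mae(x, y):
    """Mean absolute error between signal and reconstruction."""
    return float(np.mean(np.abs(x - y)))

def channel_pcc(x, y):
    """Pearson correlation computed per channel, then averaged.
    x, y: (channels, samples)."""
    xc = x - x.mean(axis=1, keepdims=True)
    yc = y - y.mean(axis=1, keepdims=True)
    num = (xc * yc).sum(axis=1)
    den = np.sqrt((xc ** 2).sum(axis=1) * (yc ** 2).sum(axis=1)) + 1e-12
    return float(np.mean(num / den))

rng = np.random.default_rng(3)
x = rng.standard_normal((4, 200))
y = x + 0.1 * rng.standard_normal((4, 200))  # a near-faithful "reconstruction"
```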
6. Algorithmic Workflow and Pseudocode
The core RVQ algorithm runs as follows:
- Forward pass, for each stage $q = 1, \dots, Q$:
  1. Find the nearest codeword $k_q = \arg\min_k \lVert r_{q-1} - \mathbf{c}_{q,k} \rVert_2^2$.
  2. Add $\mathbf{c}_{q,k_q}$ to the quantized latent and update the residual $r_q = r_{q-1} - \mathbf{c}_{q,k_q}$.
  3. Accumulate the quantized vectors into $\hat{z} = \sum_q \mathbf{c}_{q,k_q}$.
- Losses: the composite loss of Section 2 (reconstruction, correlation, FFT magnitude/phase, and commitment terms) is computed on the decoded output.
- Backward pass: gradients propagate through the quantizer with the straight-through estimator. Codebooks receive updates from both the codebook term and the reconstruction/frequency components.
A compact pseudocode representation is presented in the original work and directly transcribes the forward/backward workflow (Csaky, 28 Jan 2026).
7. Significance and Application
SEANet-style vector-quantization, as realized in BrainTokMix, provides an efficient, scalable, and causal method for transforming continuous, high-dimensional neurophysiological data into discretized token streams. These representations facilitate language-model-scale autoregressive modeling and long-context generation for neuroscientific signals. The system achieves substantial compression (17×), preserves signal fidelity, and enables stable, prompt-specific long-horizon generation with strong generalization across datasets. Empirical ablations underscore the importance of extended context length for maintaining spatiotemporal specificity.
A plausible implication is that SEANet-style quantization, particularly with simplified channel mixing and multi-stage RVQ bottlenecks, defines a new standard for integrating modern autoregressive architectures with biomedical time series. This approach bridges advances in neural signal processing and sequence modeling, supporting both fundamental research and practical applications in computational neuroscience (Csaky, 28 Jan 2026).