
Entropy-Based Dynamic Aggregation Framework

Updated 6 September 2025
  • Entropy-Based Dynamic Aggregation Framework is a method that leverages predictive entropy to adaptively group semantic speech tokens while preserving key information.
  • It utilizes cross-attentive local encoding to refine token group embeddings, balancing efficiency and accuracy in capturing speech nuances.
  • Adjusting the entropy threshold enables flexible trade-offs between compression and detail retention, achieving competitive performance in ASR, ST, and VC tasks.

A systematic framework for entropy-based dynamic aggregation enables the adaptive compression and representation of semantic speech features by leveraging predictive uncertainty. This methodology targets efficient mapping of continuous speech waveforms into compressed, information-preserving token sequences, aligning the temporal granularity of discrete representations with the underlying informational content of spoken language. Central to the design is the use of predictive entropy, computed from next-token LLMs trained on large-scale unlabeled corpora, to adaptively determine token grouping boundaries and thus balance redundancy and information loss. The resulting representations exhibit high compression ratios and reduced computational cost, with competitive or superior performance on automatic speech recognition (ASR), speech-to-text translation (ST), and voice conversion (VC) tasks relative to fixed-rate tokenizers.

1. Predictive Entropy-Based Token Aggregation

The proposed framework operates on sequences of speech-derived discrete tokens (e.g., HuBERT-k-means assignments) $u_1, u_2, \ldots, u_N$. An autoregressive LLM estimates the conditional probability distribution $p(u_i \mid u_{1:i-1})$ for each token position $i$. The predictive entropy,

$$H(u_i) = -\sum_{v=1}^{K} p(u_i = v \mid u_{1:i-1}) \log p(u_i = v \mid u_{1:i-1}),$$

serves as a local uncertainty measure, where $K$ is the size of the discrete token vocabulary. Boundary selection for dynamic aggregation is governed by a global threshold $\theta_g$: runs of adjacent tokens with $H(u_i) < \theta_g$ are merged, or equivalently, a segment boundary is placed whenever $H(u_i) \geq \theta_g$. A relative criterion (e.g., $H(u_i) - H(u_{i-1}) \geq \theta_r$ for some $\theta_r$) may be used in tandem to capture local entropy surges.

Consequently, token regions with low uncertainty, where the LM is confident in its predictions, are aggregated into single units, aligning the representation granularity with predictable regions of semantic or phonetic continuity. This dynamic segmentation enables flexible control over compression ratios: increasing $\theta_g$ produces coarser groupings and higher compression, while reducing $\theta_g$ yields finer-grained, higher-fidelity segmentations.
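As a concrete illustration, the following minimal Python sketch (not the authors' code) computes per-token predictive entropy from an autoregressive LM over the $K$-way token vocabulary and marks segment boundaries with the global and relative thresholds. The `lm(tokens).logits` call assumes a HuggingFace-style causal LM over the k-means token ids; the function and argument names are illustrative.

```python
import torch
import torch.nn.functional as F
from typing import Optional

@torch.no_grad()
def predictive_entropy(lm, tokens: torch.Tensor) -> torch.Tensor:
    """tokens: (1, N) LongTensor of discrete speech-token ids u_1..u_N.
    Returns H(u_i) for i = 2..N (the first token has no left context)."""
    logits = lm(tokens).logits                     # (1, N, K) next-token logits (assumed HF-style API)
    probs = F.softmax(logits[:, :-1], dim=-1)      # p(u_i | u_{1:i-1}) for i = 2..N
    return -(probs * probs.clamp_min(1e-12).log()).sum(-1).squeeze(0)   # (N-1,)

def boundary_mask(H: torch.Tensor, theta_g: float, theta_r: Optional[float] = None) -> torch.Tensor:
    """True wherever a segment boundary is placed: H >= theta_g, or a local entropy surge."""
    mask = H >= theta_g
    if theta_r is not None:
        surge = torch.zeros_like(mask)
        surge[1:] = (H[1:] - H[:-1]) >= theta_r    # relative criterion H(u_i) - H(u_{i-1}) >= theta_r
        mask = mask | surge
    return mask
```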

2. Cross-Attentive Local Encoding

After entropy-based grouping, the tokens within each segment are further processed by a cross-attentive local encoder to produce refined group-level embeddings. Initialization involves max pooling over the tokens in each group to form the starting embedding $p_j^{(0)}$. Each cross-attention layer then updates the group embedding $p_j^{(\ell)}$ via:

  • Query: $q_j^{(\ell)} = \mathrm{LayerNorm}(W_Q\, p_j^{(\ell-1)})$
  • Keys/values: $k_i^{(\ell)} = \mathrm{LayerNorm}(W_K\, h_i^{(\ell-1)})$, $v_i^{(\ell)} = \mathrm{LayerNorm}(W_V\, h_i^{(\ell-1)})$
  • Attention weights:

$$\alpha_{j,i}^{(\ell)} = \frac{\exp\!\left((q_j^{(\ell)})^\top k_i^{(\ell)} / \sqrt{d}\right)}{\sum_{k} \exp\!\left((q_j^{(\ell)})^\top k_k^{(\ell)} / \sqrt{d}\right)}$$

  • Group update:

$$p_j^{(\ell)} = p_j^{(\ell-1)} + W_O \left[ \sum_{i} \alpha_{j,i}^{(\ell)} v_i^{(\ell)} \right]$$

Here, $h_i^{(\ell-1)}$ are the token embeddings for group $g_j$ at the previous layer, $W_Q$, $W_K$, $W_V$, $W_O$ are learned projections, and $d$ denotes the attention dimension. Multi-layer cross-attention refines each group's summarization of its constituent tokens, ensuring that the resulting representation retains both local detail and contextualized semantics.
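A compact PyTorch sketch of a single cross-attentive layer, written directly from the update equations above, is shown next; this is an illustrative assumption rather than the released implementation. It processes one group at a time, with $p$ as the group embedding and $h$ as the group's token embeddings.

```python
import torch
import torch.nn as nn

class CrossAttentiveLocalLayer(nn.Module):
    """One cross-attention update of a group embedding against its member tokens."""
    def __init__(self, d: int):
        super().__init__()
        self.W_Q = nn.Linear(d, d, bias=False)
        self.W_K = nn.Linear(d, d, bias=False)
        self.W_V = nn.Linear(d, d, bias=False)
        self.W_O = nn.Linear(d, d, bias=False)
        self.ln_q, self.ln_k, self.ln_v = nn.LayerNorm(d), nn.LayerNorm(d), nn.LayerNorm(d)
        self.d = d

    def forward(self, p_prev: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        """p_prev: (d,) group embedding p_j^{(l-1)}; h_prev: (T_j, d) token embeddings of g_j."""
        q = self.ln_q(self.W_Q(p_prev))                       # query from the group embedding
        k = self.ln_k(self.W_K(h_prev))                       # keys from the group's tokens
        v = self.ln_v(self.W_V(h_prev))                       # values from the group's tokens
        alpha = torch.softmax(k @ q / self.d ** 0.5, dim=0)   # attention weights over tokens in g_j
        return p_prev + self.W_O(alpha @ v)                   # residual group update p_j^{(l)}

# Stacking several such layers, starting from the max-pooled initialization
# p_j^{(0)} = h_prev.max(dim=0).values, yields the refined group embedding.
```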

3. Semantic Token Pretraining and Aggregation Workflow

The system is initialized by training a lightweight next-token LLM on sequences of discretized speech tokens (from a pre-trained HuBERT model with k-means quantization). Trained with a next-token prediction objective, this model captures frequent token patterns and the speech domain's inherent temporal dependencies.

Semantic speech representations are then obtained by passing new audio through the HuBERT encoder, quantizing the output, and subjecting the resulting token stream to dynamic aggregation using the trained LLM’s entropy predictions.

After grouping, the cross-attentive local encoder produces a compressed sequence of group-level embeddings, whose rate and granularity are determined by the entropy threshold(s). This flexibility allows practitioners to tune the framework to target specific downstream requirements.
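For context, a hypothetical front-end sketch follows: torchaudio's pre-trained HuBERT bundle stands in for the HuBERT encoder, and `kmeans_codebook` (a $(K, d)$ tensor of separately fitted cluster centroids) is an assumed artifact; the choice of feature layer is likewise an assumption, as none of these details are specified above.

```python
import torch
import torchaudio

bundle = torchaudio.pipelines.HUBERT_BASE
hubert = bundle.get_model().eval()

@torch.no_grad()
def speech_to_tokens(waveform: torch.Tensor, kmeans_codebook: torch.Tensor) -> torch.Tensor:
    """waveform: (1, T) audio at bundle.sample_rate -> (1, N) discrete token ids."""
    feats, _ = hubert.extract_features(waveform)      # list of per-layer frame features
    h = feats[-1].squeeze(0)                          # (N, d); the layer choice is an assumption
    dists = torch.cdist(h, kmeans_codebook)           # (N, K) distances to the fitted centroids
    return dists.argmin(dim=-1).unsqueeze(0)          # nearest-centroid (k-means) assignment
```

The resulting token stream is then scored by the pretrained LM and segmented as described in Section 1.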

4. Quantitative Impact of Entropy Threshold Adjustment

By varying the global entropy threshold $\theta_g$, the framework smoothly trades off sequence length against information retention. Lower $\theta_g$ leads to finer tokenization (around 24 Hz), preserving phonetic information crucial for tasks like voice conversion, but at greater computational cost and with potential redundancy. Higher thresholds yield more compressed, semantic-level tokens (around 7 Hz), at the risk of losing details critical to certain downstream applications.

Empirical results indicate that moderate compression (around 15 Hz) achieves optimal performance across ASR (WER: 5.6%, CER: 2.9%), ST (BLEU: 31.5), and VC (Q-MOS/S-MOS comparable to dense baselines), outperforming fixed-pooling or naive deduplication baselines, which cannot flexibly navigate the trade-off between redundancy and semantic coverage.

5. Comparison with Traditional and Fixed-Interval Pooling

The entropy-based aggregation strategy distinctly improves upon approaches that use either fixed-length pooling or simple deduplication (removing consecutive duplicates), which lack adaptability to the time-varying informational structure of speech. Fixed-pooling may under-segment unpredictable (high-entropy) regions and over-segment stable regions, whereas entropy-guided aggregation ensures finer resolution in challenging speech segments and maximized compression where permissible.

Unlike fixed-rate representation, dynamic entropy-based aggregation aligns with the semantic flow of spoken content, more naturally reflecting word boundaries and semantic transitions, leading to both computational efficiency and superior accuracy in downstream tasks.

6. Mathematical Formulation and Segmentation Algorithm

The dynamic aggregation can be formally described by segmenting the token sequence $u_{1:N}$ at boundary indices $b_0 = 0 < b_1 < \cdots < b_M = N$, where each boundary satisfies either $H(u_{b_j}) \geq \theta_g$ or $H(u_{b_j}) - H(u_{b_j - 1}) \geq \theta_r$, consistent with the criteria in Section 1. Each segment is then $g_j = \{ u_{b_{j-1}+1}, \ldots, u_{b_j} \}$.

The overall dynamic aggregation algorithm proceeds as follows (a code sketch follows the list):

  1. For each $i \in \{1, \ldots, N\}$, compute $H(u_i)$ using the LM.
  2. Identify segmentation points where $H(u_i)$ exceeds the global or relative threshold.
  3. For each resulting group $g_j$, run the cross-attentive local encoder to derive $p_j$.
  4. Output the sequence $\{p_1, p_2, \ldots, p_M\}$ as the compressed semantic representation.
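A hedged end-to-end sketch of these four steps is given below. It reuses the `predictive_entropy` and `boundary_mask` helpers sketched in Section 1 and accepts any callable group encoder (falling back to max pooling when none is supplied); the names and signatures are illustrative assumptions, not the reference implementation.

```python
from typing import Callable, Optional
import torch

def dynamic_aggregate(
    lm,
    tokens: torch.Tensor,                  # (1, N) discrete token ids u_1..u_N
    embeddings: torch.Tensor,              # (N, d) per-token embeddings h_i
    theta_g: float,
    theta_r: Optional[float] = None,
    encode_group: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,
) -> torch.Tensor:
    """Returns the compressed sequence {p_1, ..., p_M} as an (M, d) tensor."""
    H = predictive_entropy(lm, tokens)                 # step 1: H(u_i) for i = 2..N
    is_boundary = boundary_mask(H, theta_g, theta_r)   # step 2: segmentation points
    n = embeddings.size(0)
    groups, start = [], 0
    for j, b in enumerate(is_boundary.tolist()):
        tok = j + 1                                    # 0-based index of the token scored by H[j]
        if b:                                          # this token closes its segment g_j
            groups.append((start, tok + 1))
            start = tok + 1
    if start < n:
        groups.append((start, n))                      # trailing segment up to u_N
    outs = []
    for s, e in groups:                                # step 3: encode each group
        h = embeddings[s:e]
        outs.append(encode_group(h) if encode_group is not None else h.max(dim=0).values)
    return torch.stack(outs)                           # step 4: compressed representation {p_j}
```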

7. Operational and Research Implications

This entropy-based dynamic aggregation methodology applies to any scenario in which the representational granularity of sequential symbolic data must be adaptively tuned to the underlying informational content, especially where high redundancy and variable semantic density occur, as in speech, and plausibly also in other modalities such as text or music. Future work may extend the entropy-guided segmentation paradigm to hierarchical compression or bidirectional uncertainty measures, and may investigate integration with downstream sequence-modeling architectures to further align compression rates with task intent and performance.

This approach enables practitioners to flexibly adjust model compression rates post hoc, optimize computational efficiency for large-scale deployments, and preserve end-to-end accuracy for both recognition and generation tasks, demonstrating a rigorous route to semantically coherent, entropy-controlled speech representation learning (Zuo et al., 30 Aug 2025).

