
DF-Conformer: Dilated FAVOR for Speech Enhancement

Updated 10 November 2025
  • DF-Conformer is a mask-based sequential model for speech enhancement that replaces traditional MHSA with linear FAVOR+ attention and employs exponentially dilated depthwise convolutions.
  • The architecture integrates a Conv-TasNet-inspired encoder with recursive DF-Conformer blocks to achieve improved SI-SNRi and competitive real-time factors over established baselines.
  • Recent extensions using Hydra state-space models address FAVOR+ limitations by enhancing content accuracy while maintaining efficient, linear complexity for long utterances.

The Dilated FAVOR Conformer (DF-Conformer) is a mask-based sequential model for single-channel speech enhancement (SE), integrating the Conformer block’s architectural motifs with both linear-complexity global attention and exponentially dilated depthwise convolution. Originating as an augmentation of Conv-TasNet’s time-dilated convolutional (TDCN) architectures, the DF-Conformer is characterized by its replacement of traditional multi-head self-attention (MHSA) with FAVOR+—a positive orthogonal random feature method—yielding linear time and memory complexity. Simultaneously, the block replaces standard convolution with exponentially dilated depthwise convolution to scale the local receptive field. This approach allows the architecture to efficiently model long-range dependencies, expanding the effective receptive field while maintaining tractable resource requirements. Empirical benchmarks show that the DF-Conformer achieves improved scale-invariant signal-to-noise ratio (SI-SNRi) and competitive real-time factors relative to established baselines. More recent investigations have critically analyzed the limitations of FAVOR+ and demonstrated that structured state-space sequence models (Hydra, a bidirectional extension of Mamba) can further improve performance while keeping linear complexity (Koizumi et al., 2021, Seki et al., 4 Nov 2025).

1. Architectural Design and Pipeline

The DF-Conformer’s processing pipeline accepts a raw audio sequence $x \in \mathbb{R}^T$ and proceeds as follows:

  • Encoding: The audio is analyzed by a trainable encoder filterbank as in Conv-TasNet; typical parameters are a window size of 2.5 ms and a hop size of 1.25 ms, yielding an encoded matrix $\text{Enc}(x) \in \mathbb{R}^{N \times D_e}$ with $D_e = 256$.
  • Mask Prediction: The encoded representation is input to a mask prediction network $M(\cdot)$, comprised of stacked DF-Conformer blocks, which produces a time-frequency mask $M \in [0,1]^{N \times D_e}$.
  • Masking and Decoding: The mask is applied element-wise in the embedding space, and the masked representation is decoded using a learnable decoder with an overlap-add mechanism:

$y = \text{Dec}(\text{Enc}(x) \odot M(\text{Enc}(x)))$

  • The sole change introduced by DF-Conformer, relative to TDCN++ or Conv-TasNet, is the replacement of TDCN blocks in the mask-prediction network with DF-Conformer blocks (Koizumi et al., 2021, arXiv:2106.15813).

The mask-prediction network is specified recursively:

  • $Z^{0} = \text{Dense}_1(\text{Enc}(x))$
  • For $i = 1, \dots, L$: set the dilation $d = 2^{(i-1) \bmod L_s}$ and compute $Z^{i} = Z^{i-1} + \text{DF-ConformerBlock}(Z^{i-1}; d)$.
  • Final mask: $M = \sigma(\text{Dense}_2(Z^{L}))$.

The stack uses $L = R \cdot L_s$ blocks ($R$ repeats over $L_s$ distinct dilation values).
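
A minimal PyTorch sketch of this pipeline is given below. It is illustrative rather than a reference implementation: it assumes 16 kHz audio (so the 2.5 ms window and 1.25 ms hop become 40 and 20 samples), uses a small $R$, substitutes a simple dilated depthwise-convolution stand-in for the full DF-ConformerBlock, and all class and argument names are invented for the example.

```python
import torch
import torch.nn as nn

class StandInBlock(nn.Module):
    """Placeholder for DF-ConformerBlock(.; d): dilated depthwise + pointwise conv.
    The real block (Section 2) adds macaron feed-forward modules and FAVOR+ attention."""
    def __init__(self, d_model, kernel=5, dilation=1):
        super().__init__()
        pad = (kernel - 1) // 2 * dilation          # keep sequence length (odd kernel)
        self.dw = nn.Conv1d(d_model, d_model, kernel, padding=pad,
                            dilation=dilation, groups=d_model)
        self.pw = nn.Conv1d(d_model, d_model, 1)
        self.act = nn.PReLU()

    def forward(self, z):                           # z: (B, De, N)
        return self.pw(self.act(self.dw(z)))

class MaskingPipeline(nn.Module):
    """y = Dec(Enc(x) * M(Enc(x))) with the recursive mask network defined above.
    Assumes 16 kHz audio: 2.5 ms window / 1.25 ms hop = 40 / 20 samples."""
    def __init__(self, d_e=256, repeats_R=2, dilations_Ls=4, win=40, hop=20):
        super().__init__()
        self.enc = nn.Conv1d(1, d_e, kernel_size=win, stride=hop, bias=False)
        self.dec = nn.ConvTranspose1d(d_e, 1, kernel_size=win, stride=hop, bias=False)
        self.dense1 = nn.Conv1d(d_e, d_e, 1)        # Dense_1, applied frame-wise
        self.blocks = nn.ModuleList(
            [StandInBlock(d_e, dilation=2 ** (i % dilations_Ls))   # d = 2^((i-1) mod Ls)
             for i in range(repeats_R * dilations_Ls)])            # L = R * Ls blocks
        self.dense2 = nn.Conv1d(d_e, d_e, 1)        # Dense_2

    def forward(self, x):                           # x: (B, 1, T) raw waveform
        e = self.enc(x)                             # Enc(x): (B, De, N)
        z = self.dense1(e)                          # Z^0
        for block in self.blocks:
            z = z + block(z)                        # Z^i = Z^{i-1} + Block(Z^{i-1}; d)
        mask = torch.sigmoid(self.dense2(z))        # M = sigma(Dense_2(Z^L)) in [0, 1]
        return self.dec(e * mask)                   # masking + overlap-add decoding

x = torch.randn(1, 1, 16_000)                       # one second of dummy audio
print(MaskingPipeline()(x).shape)                   # torch.Size([1, 1, 16000])
```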

2. DF-ConformerBlock: Internal Mechanisms

Each DF-ConformerBlock fuses local and global context through:

  • Macaron-style Feed-Forward Integration: Two feed-forward modules with $1/2$-scaled residual connections, one placed before and one after the attention and convolution modules.
  • Linear-Attention Module: Replaces full softmax MHSA with FAVOR+, providing global context at linear complexity.
  • Dilated Depthwise Convolution: Depthwise 1-D convolution with exponentially increasing dilation $d$, expanding the temporal receptive field as the block stack deepens.

The explicit structure is:

  1. $z_1 = z + \frac{1}{2}\,\text{FeedForwardModule}(z)$
  2. $z_2 = z_1 + \text{MhsaFavorModule}(z_1)$
  3. $r = \text{GLU}(\text{Dense}(\text{LayerNorm}(z_2)))$
  4. $\tilde{r} = \text{DepthwiseConv1D}(r;\ \text{kernel} = K,\ \text{dilation} = d)$
  5. $z_3 = z_2 + \text{Dropout}(\text{Dense}(\text{Swish}(\text{BatchNorm}(\tilde{r}))))$
  6. $z_4 = z_3 + \frac{1}{2}\,\text{FeedForwardModule}(z_3)$
  7. $\text{output} = \text{LayerNorm}(z_4)$
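
A hedged PyTorch sketch of steps 1–7 follows. Exact softmax multi-head attention is used as a stand-in for the FAVOR+ module (so this version is quadratic in sequence length), and the layer widths, dropout rates, and module names are illustrative assumptions rather than values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedForwardModule(nn.Module):
    """Conformer-style FFN: LayerNorm -> expand -> Swish -> project back."""
    def __init__(self, d, expansion=4, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(d),
            nn.Linear(d, expansion * d),
            nn.SiLU(),                              # Swish
            nn.Dropout(p_drop),
            nn.Linear(expansion * d, d),
            nn.Dropout(p_drop),
        )

    def forward(self, x):
        return self.net(x)

class DFConformerBlockSketch(nn.Module):
    """Steps 1-7 above; softmax MHSA stands in for the FAVOR+ module."""
    def __init__(self, d, kernel=5, dilation=1, heads=4, p_drop=0.1):
        super().__init__()
        self.ff1 = FeedForwardModule(d, p_drop=p_drop)
        self.attn_norm = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(d)
        self.pw_in = nn.Linear(d, 2 * d)            # doubled width for GLU
        pad = (kernel - 1) // 2 * dilation          # keep length for odd kernels
        self.dw = nn.Conv1d(d, d, kernel, dilation=dilation, padding=pad, groups=d)
        self.bn = nn.BatchNorm1d(d)
        self.pw_out = nn.Linear(d, d)
        self.drop = nn.Dropout(p_drop)
        self.ff2 = FeedForwardModule(d, p_drop=p_drop)
        self.final_norm = nn.LayerNorm(d)

    def forward(self, z):                           # z: (B, N, d)
        z = z + 0.5 * self.ff1(z)                               # 1. half-step FFN
        a = self.attn_norm(z)
        z = z + self.attn(a, a, a, need_weights=False)[0]       # 2. global attention
        r = F.glu(self.pw_in(self.conv_norm(z)), dim=-1)        # 3. GLU(Dense(LN(z2)))
        r = self.dw(r.transpose(1, 2))                          # 4. dilated depthwise conv
        r = F.silu(self.bn(r)).transpose(1, 2)                  #    BatchNorm + Swish
        z = z + self.drop(self.pw_out(r))                       # 5. Dropout(Dense(.))
        z = z + 0.5 * self.ff2(z)                               # 6. half-step FFN
        return self.final_norm(z)                               # 7. final LayerNorm

z = torch.randn(2, 100, 64)
print(DFConformerBlockSketch(64, dilation=4)(z).shape)          # torch.Size([2, 100, 64])
```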

Dilated convolution follows:

$y[t] = \sum_{k=0}^{K-1} w_k \, x[t - d\,k]$

with kernel length $K = 3$–$5$ (as in TDCN++) and exponentially increasing $d$.
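
For concreteness, a direct single-channel NumPy implementation of this sum is shown below, with zero left-padding so that indices $t - dk < 0$ contribute nothing; the padding convention and function name are assumptions for the example.

```python
import numpy as np

def dilated_depthwise_conv(x, w, d):
    """y[t] = sum_k w[k] * x[t - d*k]  (single channel, zero left padding)."""
    K = len(w)
    x_pad = np.concatenate([np.zeros(d * (K - 1)), x])
    return np.array([sum(w[k] * x_pad[t + d * (K - 1) - d * k] for k in range(K))
                     for t in range(len(x))])

x = np.arange(10, dtype=float)
# With K = 3 and d = 4 the receptive field is 1 + (K - 1) * d = 9 samples.
print(dilated_depthwise_conv(x, w=np.ones(3) / 3, d=4))
```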

3. FAVOR+ Attention: Formulation and Trade-offs

FAVOR+ is a linear-complexity approximation to softmax attention in MHSA. The canonical softmax attention for queries, keys, and values $(Q, K, V) \in \mathbb{R}^{N \times D}$ is:

$\text{SA}(Q,K,V) = \text{softmax}(QK^\top)\, V$

FAVOR+ replaces the softmax with a random-feature map $\phi$:

$\operatorname{sa}(Q,K,V) \approx \hat{D}^{-1}\, \phi(Q)\, (\phi(K)^\top V)$

with $\hat{D} = \operatorname{diag}\!\left[\phi(Q)\,(\phi(K)^{\top}\mathbf{1}_{N})\right]$, and $\phi : \mathbb{R}^{D} \rightarrow \mathbb{R}^{D_r}_{+}$ constructed from positive orthogonal random features.
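
A toy NumPy sketch of this approximation is shown below. For brevity it omits the orthogonalization of the random directions and the query/key rescaling used in the full FAVOR+ recipe, so it should be read as an illustration of the linear-time factorization rather than a faithful Performer implementation.

```python
import numpy as np

def positive_random_features(X, W):
    """phi(x) = exp(W x - ||x||^2 / 2) / sqrt(Dr): positive random features
    (orthogonalization of W omitted for simplicity)."""
    Dr = W.shape[0]
    proj = X @ W.T                                    # (N, Dr)
    sq = 0.5 * np.sum(X ** 2, axis=-1, keepdims=True)
    return np.exp(proj - sq) / np.sqrt(Dr)

def favor_attention(Q, K, V, Dr=128, seed=0):
    """Linear-time  sa(Q,K,V) ~= D_hat^{-1} phi(Q) (phi(K)^T V); never forms the N x N matrix."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((Dr, Q.shape[-1]))        # random feature directions
    phi_q, phi_k = positive_random_features(Q, W), positive_random_features(K, W)
    kv = phi_k.T @ V                                  # (Dr, Dv): O(N Dr Dv)
    norm = phi_q @ phi_k.sum(axis=0)                  # D_hat = diag[phi(Q) (phi(K)^T 1_N)]
    return (phi_q @ kv) / norm[:, None]               # O(N Dr Dv)

def softmax_attention(Q, K, V):
    logits = Q @ K.T
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Sanity check on a toy sequence: the approximation error shrinks as Dr grows.
rng = np.random.default_rng(1)
Q = rng.standard_normal((50, 16)) * 0.3
K = rng.standard_normal((50, 16)) * 0.3
V = rng.standard_normal((50, 16))
err = np.abs(favor_attention(Q, K, V, Dr=1024) - softmax_attention(Q, K, V)).mean()
print(f"mean abs error vs softmax: {err:.3f}")
```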

  • Complexity: FAVOR+ attention in DF-Conformer scales as $O(N D D_r)$ time and $O(N D_r)$ memory.
  • Limitations: Empirical results indicate two primary limitations:

    1. Blurring and Low-Rank Patterns: The random feature approximation can lead to flattened, less selective attention distributions, hindering fine-grained alignment.
    2. Semantic Confusion: Non-injectivity of $\phi$ allows semantically distinct queries to exhibit nearly identical attention rows, reducing feature-space expressivity.

A plausible implication is that these approximations, while beneficial for efficiency, trade off some representational focus compared to full softmax attention (Seki et al., 4 Nov 2025).

4. Computational Complexity and Empirical Performance

The DF-Conformer’s resource profile is characterized as follows:

| Model | Params (M) | SI-SNRi (dB) | RTF (CPU) |
|---|---|---|---|
| TDCN++ | 8.75 | 14.10 | 0.10 |
| Conv-Tasformer | 8.71 | 14.36 | 0.25 |
| DF-Conformer-8 | 8.83 | 14.43 | 0.13 |
| iTDCN++ | 17.6 | 14.84 | 0.22 |
| iDF-Conformer-8 | 17.8 | 15.28 | 0.26 |
| iDF-Conformer-12 | 37.0 | 15.93 | 0.46 |

  • Baseline Comparisons: DF-Conformer-8 outperforms TDCN++ (+0.33 dB SI-SNRi) at similar real-time factor (RTF) and parameter count.

  • Scalability: Linear time scaling with sequence length $N$ is preserved for both the dilated convolutional and FAVOR+ attention modules, so the architecture supports processing of long utterances that are not tractable with quadratic-complexity attention (a back-of-envelope comparison follows this list).
  • Ablations:
    • Removing dilated convolution (“F-Conformer-8”) drops SI-SNRi by $0.62$ dB, highlighting the importance of both local and global modules.
    • Substituting FAVOR+ with standard MHSA substantially increases computational cost, with only marginal SI-SNRi improvement (depending on model capacity).
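
As a rough illustration of the scalability point, the snippet below compares per-layer attention cost for a 30 s utterance at the 1.25 ms encoder hop ($N \approx 24{,}000$ frames), using representative dimensions $D = 256$ and $D_r = 384$ (assumed for illustration only).

```python
# Back-of-envelope per-layer attention cost (multiply-accumulates, order of magnitude).
N, D, Dr = 24_000, 256, 384
softmax_cost = N * N * D       # O(N^2 D): build the N x N score matrix, then weight V
favor_cost = N * D * Dr        # O(N D Dr): phi(K)^T V first, then phi(Q) @ (.)
print(f"softmax ~{softmax_cost:.1e} MACs, FAVOR+ ~{favor_cost:.1e} MACs, "
      f"ratio ~{softmax_cost / favor_cost:.1f}x")
```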

5. Training Regimen and Hyperparameters

DF-Conformer models have been trained on large-scale noisy speech corpora (over 4 million examples; 3,396.8 hours) in a single-channel setup. Notable settings:

  • Encoder/Decoder: Conv-TasNet filterbanks (2.5 ms window / 1.25 ms hop).
  • Loss: Negative log-thresholded SNR,

$\mathcal{L} = -10 \log_{10} \frac{\|s\|^2}{\|s-y\|^2 + \tau\|s\|^2}, \quad \tau = 10^{-30/10}$

with a joint speech/noise masking loss (0.8/0.2 weighting) and mixture-consistency projection (a minimal code sketch of the loss and learning-rate schedule follows this list).

  • Optimizer: Adam with weight decay $10^{-6}$ and gradient clipping (global $\ell_2$ norm) at $5.0$.
  • LR Schedule: $\eta(n) = D_b^{-0.5} \min\!\left(n / 25000^{1.5},\ n^{-0.5}\right)$.
  • Batching: 500k steps on 128 TPUv3 cores, batch size $512$; EMA decay $0.9999$.
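
A compact sketch of the loss and learning-rate schedule above, written as Python functions; the tensor shapes, the small numerical-stability constant, and the combination of speech and noise terms in the usage example are assumptions, and the mixture-consistency projection is omitted.

```python
import torch

def neg_thresholded_snr(s, y, tau=10 ** (-30 / 10)):
    """L = -10 log10( ||s||^2 / (||s - y||^2 + tau ||s||^2) ),  tau = 10^(-30/10).
    s, y: (batch, samples) reference and estimate."""
    target = torch.sum(s ** 2, dim=-1)
    residual = torch.sum((s - y) ** 2, dim=-1)
    return -10.0 * torch.log10(target / (residual + tau * target + 1e-12))

def lr_schedule(step, d_b=256, warmup=25_000):
    """eta(n) = D_b^{-0.5} * min(n / 25000^{1.5}, n^{-0.5})."""
    n = max(step, 1)
    return d_b ** -0.5 * min(n / warmup ** 1.5, n ** -0.5)

# Assumed usage of the 0.8 / 0.2 joint speech/noise weighting on dummy signals.
s, n = torch.randn(4, 16_000), torch.randn(4, 16_000)
s_hat, n_hat = s + 0.1 * torch.randn_like(s), n + 0.1 * torch.randn_like(n)
loss = 0.8 * neg_thresholded_snr(s, s_hat).mean() + 0.2 * neg_thresholded_snr(n, n_hat).mean()
print(float(loss), lr_schedule(10_000), lr_schedule(100_000))
```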

Model instantiations (non-iterative):

  • DF-Conformer-8: $L = 8$ blocks, $L_s = 4$, $D_b = 216$, $D_r = 384$, $6$ heads.
  • DF-Conformer-12: $L = 12$, $D_b = 256$, $8$ heads.

6. Extensions: Hydra and State-Space Mixing

Recent research identifies and addresses the random feature-induced limitations of FAVOR+, proposing a replacement in the form of bidirectional state-space sequence models (Seki et al., 4 Nov 2025):

  • Hydra Module: Extends the (unidirectional) Mamba mixer to a bidirectional, selective, structured state-space model within the Conformer block. For input $X \in \mathbb{R}^{T \times d}$,

    • State-space recurrence (channel-wise):

    $h_t = A_t h_{t-1} + b_t x_t, \quad y_t = c_t^\top h_t$

    with $A_t$, $b_t$, $c_t$ dynamically generated per input via a light gating network (a toy sketch of this recurrence appears at the end of this section).
    • Matrix mixer view: Produces a semiseparable or quasiseparable linear transformation with explicit forward, backward, and diagonal (self-interaction) structure.

  • Integration: Drop-in substitution for FAVOR+ in the DF-Conformer block: the macaron architecture is unchanged, with Hydra replacing the MhsaFavorModule call.
  • Complexity: Retains $O(T N^2)$ per channel, where the state dimension $N$ is typically much smaller than the sequence length $T$.
  • Empirical Advantages: Hydra eliminates the blurring and semantic confusion of FAVOR+, preserving exactness and bidirectional mixing capacity.
  • Speech Enhancement Results: On DAPS (Seki et al., 4 Nov 2025):

| Model | DNSMOS | UTMOS | Speaker Similarity | Content Acc. (%) |
|---|---|---|---|---|
| Softmax | 3.46 | 3.53 | 0.83 | 87.88 |
| FAVOR+ | 3.44 | 3.33 | 0.79 | 88.24 |
| Bi-Mamba | 3.44 | 3.27 | 0.81 | 88.04 |
| Hydra | 3.44 | 3.48 | 0.83 | 88.95 |

Hydra matches or exceeds softmax on most metrics and substantially outperforms FAVOR+ on content accuracy. This suggests the state-space approach delivers stronger sequential modeling while sustaining linear scaling. Hydra also remains robust as sequence length grows, whereas softmax attention degrades significantly and FAVOR+ stays constant but mildly suboptimal.
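
To make the Hydra-style mixing concrete, the toy NumPy sketch below runs the selective recurrence forward and backward over one channel and adds a diagonal (self-interaction) term. The gating rule that generates $(A_t, b_t, c_t)$ and all shapes are hypothetical illustrations, not the parameterization used by Hydra.

```python
import numpy as np

def selective_ssm(x, A, b, c):
    """Channel-wise selective scan:  h_t = A_t h_{t-1} + b_t x_t,   y_t = c_t^T h_t.
    x: (T,) one channel;  A, b, c: (T, N) input-dependent, diagonal-A parameters."""
    T, N = A.shape
    h = np.zeros(N)
    y = np.empty(T)
    for t in range(T):
        h = A[t] * h + b[t] * x[t]        # selective (input-dependent) recurrence
        y[t] = c[t] @ h
    return y

def hydra_like_mixing(x, fwd, bwd, d_skip=0.5):
    """Bidirectional (quasiseparable) mixing: forward scan + backward scan + diagonal term."""
    y_fwd = selective_ssm(x, *fwd)
    y_bwd = selective_ssm(x[::-1], *bwd)[::-1]
    return y_fwd + y_bwd + d_skip * x

# Toy demo with a hypothetical gating rule deriving (A_t, b_t, c_t) from the input itself.
rng = np.random.default_rng(0)
T, N = 64, 8
x = rng.standard_normal(T)

def make_params(x, seed):
    g = np.random.default_rng(seed)
    gate = 1.0 / (1.0 + np.exp(-(g.standard_normal((T, N)) + x[:, None])))  # A_t in (0, 1)
    return gate, 0.1 * g.standard_normal((T, N)), 0.1 * g.standard_normal((T, N))

print(hydra_like_mixing(x, make_params(x, 1), make_params(x, 2))[:5])
```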

7. Significance, Impact, and Outlook

The DF-Conformer demonstrates that integrating linear-complexity self-attention (FAVOR+) and exponentially dilated convolutional modules yields efficient, accurate speech enhancement on long and challenging utterances. The architecture enables practical, scalable, and trainable masking in both non-causal and iterative configurations while maintaining competitive or superior SI-SNRi relative to state-of-the-art convolutional baselines.

Limitations of random feature approximation in FAVOR+ (blurring, non-injectivity) have motivated the exploration of structured state-space models such as Hydra, which, while slightly more parameter-intensive, resolve these issues and advance the empirical performance frontier for generative SE architectures. The modularity of the design allows rapid substitution and scaling, setting the stage for further hybridization of efficient global operators in both speech and broader sequence modeling tasks.
