Boolean formula evaluation without padding for hybrid models

Determine whether fixed-depth, unpadded hybrid language models that combine self-attention layers with Gated DeltaNet (GDN) layers with negative eigenvalues can express boolean formula evaluation. That is, does the expressivity result known for the padded setting, where such hybrids capture FO-uniform NC1 and can therefore evaluate boolean formulas, also hold without padding tokens, under standard complexity assumptions (e.g., TC0 ≠ NC1)?

Background

With polynomially many padding tokens, the paper shows that hybrid models (attention + GDN with negative eigenvalues) can recognize every language in FO-uniform NC1; in particular, they can evaluate boolean formulas, an NC1-complete problem, whereas padded transformers remain in TC0.
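As context for what the models must express, here is a minimal sketch of the boolean formula evaluation problem itself. The token grammar and encoding below are illustrative assumptions, not the paper's exact problem formulation:

```python
# Boolean formula evaluation: given a fully parenthesized formula over
# {0, 1, AND, OR, NOT}, compute its truth value. This problem is
# NC1-complete, which is why expressing it separates padded hybrid models
# from TC0 transformers (assuming TC0 != NC1). The grammar here is an
# illustrative assumption, not the paper's encoding.

def evaluate(tokens: list[str]) -> int:
    """Recursively evaluate a fully parenthesized boolean formula."""
    pos = 0

    def parse() -> int:
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok in ("0", "1"):          # leaf: a constant
            return int(tok)
        assert tok == "(", f"expected '(' at position {pos - 1}"
        if tokens[pos] == "NOT":       # unary: ( NOT <formula> )
            pos += 1
            value = 1 - parse()
        else:                          # binary: ( <formula> OP <formula> )
            left = parse()
            op = tokens[pos]
            pos += 1
            right = parse()
            value = (left & right) if op == "AND" else (left | right)
        assert tokens[pos] == ")"
        pos += 1
        return value

    return parse()

print(evaluate("( ( 1 AND 0 ) OR ( NOT 0 ) )".split()))  # -> 1
```

A sequential recursive evaluator like this runs in linear time; the open question concerns whether a fixed-depth unpadded hybrid model can express the same input-output map in parallel.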

The authors explicitly point out that it remains unknown whether a comparable boolean formula evaluation result can be proven for unpadded hybrid models.

References

"It is an open question whether a similar boolean formula evaluation result might be obtained for unpadded hybrid models."

Olmo Hybrid: From Theory to Practice and Back (2604.03444, Merrill et al., 3 Apr 2026), Section 3.4 (Expressive Power of Padded Hybrid Models)