
Biological plausibility of transformer blocks

Determine whether transformer blocks can be implemented in a biologically plausible way: identify learning mechanisms and circuit-level implementations, consistent with known neural constraints, that could support the required credit assignment without backpropagation through time.


Background

The paper discusses HRM’s training approach as analogous to that of diffusion/consistency models, which avoid backpropagation through time (BPTT) and are therefore argued to be more biologically plausible for recurrent computation. However, the authors note that deep feedforward models still require biologically plausible credit assignment mechanisms of their own.
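The contrast between BPTT and BPTT-free training can be made concrete with a toy example. The sketch below uses a scalar linear recurrence; the function names (`bptt_grad`, `one_step_grad`) and the setup are illustrative assumptions, not HRM's actual learning rule. It shows why a one-step gradient needs neither the unrolled trajectory in memory nor a backward sweep through time:

```python
# Toy illustration (not HRM's actual rule): gradient of a scalar linear
# recurrence h_{t+1} = w * h_t + x under full BPTT vs. a one-step
# approximation that treats the previous state as a constant ("detached").

def rollout(w, x, T):
    """Unroll h_{t+1} = w*h_t + x from h_0 = 0, returning all states."""
    hs = [0.0]
    for _ in range(T):
        hs.append(w * hs[-1] + x)
    return hs

def bptt_grad(w, x, T, y):
    """Exact dL/dw for L = 0.5*(h_T - y)^2, propagated through every step."""
    hs = rollout(w, x, T)
    s = 0.0  # s_t = dh_t/dw, accumulated via s_{t+1} = h_t + w * s_t
    for t in range(T):
        s = hs[t] + w * s
    return (hs[-1] - y) * s

def one_step_grad(w, x, T, y):
    """Approximate dL/dw from the final step only: h_{T-1} is held constant."""
    hs = rollout(w, x, T)
    return (hs[-1] - y) * hs[-2]

full = bptt_grad(0.5, 1.0, 3, 0.0)        # needs the whole trajectory
approx = one_step_grad(0.5, 1.0, 3, 0.0)  # constant memory, no unrolling
```

The one-step estimate is biased relative to the exact BPTT gradient, but it requires no replay of past states, which is the basis of the biological-plausibility argument for this style of training.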

Against this backdrop, the authors explicitly state that the biological plausibility of transformer blocks remains unresolved, highlighting a foundational open question about whether transformer architectures can be reconciled with plausible neural learning rules and circuitry.

References

The biological plausibility of transformer blocks, on the other hand, remains an open question.

Hierarchical Reasoning Model: A Critical Supplementary Material (2510.00355 - Ge et al., 30 Sep 2025) in Section 3.2 (BPTT vs. Diffusion: a Biological Perspective)