
Identify aggregation sub-circuits for counting in transformer language models

Identify and characterize the additional transformer sub-circuits in Llama‑70B that implement the aggregation necessary for the Counting filter‑reduce task (i.e., computing the number of items in a presented collection that satisfy a specified predicate), beyond the shared filtering sub‑circuit implemented by filter heads. Determine the specific attention heads, MLP blocks, and interactions responsible for aggregation and establish their causal contribution to counting behavior.
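For concreteness, the filter-reduce structure of the Counting task can be sketched in a few lines of Python. This illustrates only the abstract computation the model must perform (filter by a predicate, then reduce by counting), not the mechanism the model implements internally; the function and example items are hypothetical.

```python
from typing import Callable, Iterable


def filter_reduce_count(items: Iterable[str], predicate: Callable[[str], bool]) -> int:
    """Filter items by a predicate, then reduce by counting the survivors."""
    return sum(1 for item in items if predicate(item))


# Example: "How many of the listed items are fruits?"
items = ["apple", "hammer", "banana", "wrench", "cherry"]
is_fruit = lambda x: x in {"apple", "banana", "cherry"}
print(filter_reduce_count(items, is_fruit))  # -> 3
```

The open question is which sub-circuits implement the reduce step (the counting) once the filter heads have done the filtering.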


Background

The paper shows that a sparse set of attention heads, termed filter heads, encode predicate representations that generalize across several selection tasks (SelectOne, SelectFirst, SelectLast, etc.). These filter heads are causally important for selection tasks: they score highly in causal interventions, and ablating them severely degrades performance.
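The ablation logic behind such causality claims can be sketched as follows: zero out a chosen subset of heads' contributions and compare task accuracy before and after. The tensor shapes, function name, and head indices here are illustrative assumptions, not Llama-70B's actual internals or any specific library API.

```python
import numpy as np


def ablate_heads(head_outputs: np.ndarray, heads_to_ablate: set[int]) -> np.ndarray:
    """Zero the contribution of selected heads.

    head_outputs: array of shape (n_heads, seq_len, d_head), the per-head
    outputs at one layer before they are summed into the residual stream.
    Returns a copy with the ablated heads' outputs set to zero.
    """
    out = head_outputs.copy()
    out[list(heads_to_ablate)] = 0.0
    return out


# Toy illustration: ablate head 0 of a 2-head layer.
layer_out = np.ones((2, 4, 8))
ablated = ablate_heads(layer_out, {0})
```

A severe accuracy drop when the candidate filter heads are ablated, relative to ablating random heads, is the kind of evidence the paper uses to call these heads causally critical.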

However, when examining a Counting task that requires aggregating over the items satisfying a predicate, the authors find an asymmetric transfer pattern: heads identified on selection tasks do not generalize to Counting, whereas heads identified on Counting partially generalize to selection tasks. This suggests that Counting shares the filtering sub-circuit but additionally relies on yet-unidentified mechanisms for aggregation.

References

Counting shows an interesting asymmetric pattern: while Select* heads fail on the Counting task, Counting heads show partial generalization to the Select* tasks --- suggesting that Counting does share some common sub-circuit with Select* tasks, while having a more complex mechanism, likely involving additional circuits for specialized aggregation, that we have not yet identified.

LLMs Process Lists With General Filter Heads (2510.26784 - Sharma et al., 30 Oct 2025) in Section “Portability/Generalizability Across Filter-Reduce Operations”