
Generalization across graph topologies and scales

Determine whether the implicit in-weights reasoning and geometric memory observed in Transformer and Mamba models extend beyond path-star and tree-star graphs to other graph topologies and to graphs of different sizes.


Background

The paper's positive results are demonstrated primarily on path-star graphs (and additionally on tree-star graphs), showing strong in-weights generalization to held-out leaves and to large graphs. Whether this behavior persists on broader classes of graphs, or at different scales, has not been established.
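For concreteness, below is a minimal sketch of the path-star setup as we read it from prior work on this task family (a single root with several disjoint paths radiating outward; the model sees a shuffled edge list and a target leaf, and must produce the root-to-leaf path). All function names and parameters here are illustrative, not from the paper:

```python
import random

def make_path_star(num_arms=5, arm_length=4, seed=0):
    """Build a path-star graph: `num_arms` disjoint paths radiating
    from a single root node (node 0). Returns the shuffled edge list
    and the list of leaf nodes, one per arm."""
    rng = random.Random(seed)
    edges, leaves = [], []
    next_id = 1
    for _ in range(num_arms):
        prev = 0  # every arm starts at the root
        for _ in range(arm_length):
            edges.append((prev, next_id))
            prev = next_id
            next_id += 1
        leaves.append(prev)
    rng.shuffle(edges)  # edges are presented in random order
    return edges, leaves

def root_to_leaf_path(edges, leaf):
    """Recover the root-to-leaf path by walking parent pointers
    (well-defined because the graph is a tree)."""
    parent = {v: u for u, v in edges}
    path = [leaf]
    while path[-1] != 0:
        path.append(parent[path[-1]])
    return list(reversed(path))

# Held-out-leaves split: train on queries targeting some leaves,
# evaluate in-weights generalization on queries for unseen leaves.
edges, leaves = make_path_star(num_arms=6, arm_length=3)
train_leaves, test_leaves = leaves[:4], leaves[4:]
print("query leaf:", test_leaves[0],
      "target path:", root_to_leaf_path(edges, test_leaves[0]))
```

The open question amounts to asking whether results on this narrow tree-structured family transfer when `make_path_star` is replaced by generators for other topologies (e.g., general DAGs or cyclic graphs) and when `num_arms` and `arm_length` are scaled up or down.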

Clarifying this would define the scope of applicability of geometric memory in parametric models and inform the design of benchmarks and training protocols.

References

"It is unclear how well this generalizes to other topologies, and to graphs of other sizes."
— Deep sequence models tend to memorize geometrically; it is unclear why (Noroozizadeh et al., 30 Oct 2025, arXiv:2510.26745), Section: Limitations, bullet 1.