Dynamical similarity analysis can identify compositional dynamics developing in RNNs (2410.24070v4)
Abstract: Methods for analyzing representations in neural systems have become a popular tool in both neuroscience and mechanistic interpretability. Measures that compare how similar neural activations are across conditions, architectures, and species give us a scalable way to study how information is transformed within different neural networks. Counter to this trend, recent investigations have revealed that some metrics respond to spurious signals and hence give misleading results. To identify the most reliable metrics, and to understand how measures could be improved, it will be important to define specific test cases that can serve as benchmarks. Here we propose that compositional learning in recurrent neural networks (RNNs) provides such a test case for dynamical representation alignment metrics. By implementing this case, we show that it lets us test whether metrics can identify representations that develop gradually throughout learning, and probe whether the representations a metric identifies are relevant to the computations a network executes. Building both an attractor-based and an RNN-based test case, we show that the recently proposed Dynamical Similarity Analysis (DSA) is more robust to noise and identifies behaviorally relevant representations more reliably than prior metrics (Procrustes, CKA). We also show how test cases can be used beyond evaluating metrics, namely to study new architectures. Specifically, results from applying DSA to modern (Mamba) state space models suggest that, in contrast to RNNs, these models may not exhibit changes in their recurrent dynamics, owing to their expressiveness. Overall, by developing test cases, we demonstrate DSA's exceptional ability to detect compositional dynamical motifs, thereby enhancing our understanding of how computations unfold in RNNs.
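To make the kind of comparison the abstract describes concrete, here is a minimal, self-contained sketch in NumPy. It contrasts two geometry-based measures (linear CKA and orthogonal Procrustes distance) with a simplified dynamics-based score: fitting a linear operator x_{t+1} ≈ A x_t to each trajectory and comparing the operators' eigenvalue spectra, which are invariant to invertible linear changes of basis. Note this is a hedged stand-in, not the paper's method: the actual DSA of Ostrow et al. (cited below) additionally uses delay embeddings and a Procrustes analysis over the fitted operators, and all variable names and the toy data here are illustrative assumptions.

```python
# Sketch only: geometry-based vs. dynamics-based similarity on toy trajectories.
# Assumes nothing beyond NumPy; this is NOT the paper's DSA implementation.
import numpy as np

rng = np.random.default_rng(0)

def linear_cka(X, Y):
    """Linear CKA between activation matrices of shape (samples, units)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def procrustes_distance(X, Y):
    """Orthogonal Procrustes distance after centering and Frobenius scaling."""
    X = X - X.mean(axis=0); X = X / np.linalg.norm(X)
    Y = Y - Y.mean(axis=0); Y = Y / np.linalg.norm(Y)
    # min_Q ||XQ - Y||_F over orthogonal Q has a closed form via the SVD.
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    return np.sqrt(max(0.0, 2.0 - 2.0 * s.sum()))

def fit_linear_dynamics(X):
    """Least-squares fit of x_{t+1} ~ A x_t for X of shape (time, units)."""
    A, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
    return A.T

def spectrum_distance(X, Y):
    """Coarse DSA-flavored score: eigenvalues of the fitted operator are
    invariant to invertible linear changes of basis, unlike geometry."""
    eigs = lambda Z: np.sort_complex(np.linalg.eigvals(fit_linear_dynamics(Z)))
    return float(np.linalg.norm(eigs(X) - eigs(Y)))

# Toy data: one noisy contracting rotation, observed in two different bases.
T, n = 500, 10
A = 0.98 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # contracting rotation
P = rng.standard_normal((n, n))                          # invertible basis change
x, traj = rng.standard_normal(n), np.empty((T, n))
for t in range(T):
    traj[t] = x
    x = A @ x + 0.01 * rng.standard_normal(n)
X = traj
Y = traj @ P.T + 0.01 * rng.standard_normal((T, n))      # y_t = P x_t + noise

print(f"linear CKA          {linear_cka(X, Y):.3f}")
print(f"Procrustes distance {procrustes_distance(X, Y):.3f}")
print(f"spectrum distance   {spectrum_distance(X, Y):.3f}")
```

On this toy example, the non-orthogonal basis change P degrades the geometric scores while the spectrum-based score stays small, illustrating why a dynamics-aware metric can recognize the same underlying computation in differently embedded networks.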
- How aligned are different alignment metrics? arXiv preprint arXiv:2407.07530, 2024.
- Neural correlations, population coding and computation. Nature Reviews Neuroscience, 7(5):358–366, 2006.
- Three aspects of representation in neuroscience. Trends in Cognitive Sciences, 26(11):942–958, 2022.
- Two views on the cognitive brain. Nature Reviews Neuroscience, 22(6):359–371, 2021.
- DFORM: Diffeomorphic vector field alignment for assessing dynamics across learned models. arXiv preprint arXiv:2402.09735, 2024.
- Differentiable optimization of similarity scores between models and brains. arXiv preprint arXiv:2407.07059, 2024.
- Aligning model and macaque inferior temporal cortex representations improves model-to-human behavioral alignment and adversarial robustness. bioRxiv, 2022.
- Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. Nature Neuroscience, 27(7):1349–1363, 2024.
- The role of population structure in computations through neural dynamics. Nature Neuroscience, 25(6):783–794, 2022.
- Obstacles to inferring mechanistic similarity using representational similarity analysis. bioRxiv, 2023. doi: 10.1101/2022.04.05.487135. URL https://www.biorxiv.org/content/early/2023/05/01/2022.04.05.487135.
- Propofol anesthesia destabilizes neural dynamics across cortex. Neuron, 112(16):2799–2813, 2024.
- Why neurons mix: high dimensionality for higher cognition. Current Opinion in Neurobiology, 37:66–74, 2016.
- Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.
- Multilevel interpretability of artificial neural networks: Leveraging framework and methods from neuroscience. arXiv preprint arXiv:2408.12664, 2024.
- The developmental trajectory of object recognition robustness: children are like small adults but unlike big deep neural networks. Journal of Vision, 23(7):4, 2023.
- Nonlinear mixed selectivity supports reliable neural computation. PLoS Computational Biology, 16(2):e1007544, 2020.
- Dynamic representations in networked neural systems. Nature Neuroscience, 23(8):908–917, 2020.
- Cortical activity in the null space: permitting preparation without movement. Nature Neuroscience, 17(3):440–448, 2014.
- Recurrence is required to capture the representational dynamics of the human visual system. Proceedings of the National Academy of Sciences, 116(43):21854–21863, 2019.
- ReSi: A comprehensive benchmark for representational similarity measures. arXiv preprint arXiv:2408.00531, 2024.
- Similarity of neural network representations revisited. In International Conference on Machine Learning, pp. 3519–3529. PMLR, 2019.
- A unifying perspective on neural manifolds and circuits for cognition. Nature Reviews Neuroscience, 24(6):363–377, 2023.
- Visualizing representational dynamics with multidimensional scaling alignment. arXiv preprint arXiv:1906.09264, 2019.
- Testing methods of neural systems understanding. Cognitive Systems Research, 82:101156, 2023.
- Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149, 2009.
- Universality and individuality in neural dynamics across large populations of recurrent networks. Advances in Neural Information Processing Systems, 32, 2019.
- Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474):78–84, 2013.
- NeuroGym: An open resource for developing and sharing neuroscience tasks. PsyArXiv Preprints, 2022.
- Beyond geometry: Comparing the temporal structure of computation in neural circuits with dynamical similarity analysis. Advances in Neural Information Processing Systems, 36, 2024.
- Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems, 34:12116–12128, 2021.
- Identifying equivalent training dynamics. arXiv preprint arXiv:2302.09160, 2024.
- Conclusions about neural network to brain alignment are profoundly impacted by the similarity measure. bioRxiv, 2024. doi: 10.1101/2024.08.07.607035. URL https://www.biorxiv.org/content/early/2024/08/09/2024.08.07.607035.
- Getting aligned on representational alignment. arXiv preprint arXiv:2310.13018, 2023.
- Neural circuits as computational dynamical systems. Current Opinion in Neurobiology, 25:156–163, 2014.
- mamba.py: A simple, hackable and efficient Mamba implementation in pure PyTorch and MLX, 2024. URL https://github.com/alxndrTL/mamba.py. Software.
- Mixed selectivity: Cellular computations for complexity. Neuron, 2024.
- Position paper: An inner interpretability framework for ai inspired by lessons from cognitive neuroscience. arXiv preprint arXiv:2406.01352, 2024.
- Computation through neural population dynamics. Annual Review of Neuroscience, 43(1):249–275, 2020.
- Unsupervised discovery of demixed, low-dimensional neural dynamics across multiple timescales through tensor component analysis. Neuron, 98(6):1099–1115, 2018.
- Generalized shape metrics on neural representations. Advances in Neural Information Processing Systems, 34:4738–4750, 2021.
- Task representations in neural networks trained to perform many cognitive tasks. Nature Neuroscience, 22(2):297–306, 2019.
- Radical flexibility of neural representation in frontoparietal cortex and the challenge of linking it to behaviour. Current Opinion in Behavioral Sciences, 57:101392, 2024.