Determinantal Point Process Attention Over Grid Cell Code Supports Out of Distribution Generalization (2305.18417v3)

Published 28 May 2023 in cs.LG and q-bio.NC

Abstract: Deep neural networks have made tremendous gains in emulating human-like intelligence, and have increasingly been used as ways of understanding how the brain may solve the complex computational problems on which this relies. However, they still fall short of the strong forms of generalization of which humans are capable, and therefore fail to provide insight into how the brain supports them. One such case is out-of-distribution (OOD) generalization: successful performance on test examples that lie outside the distribution of the training set. Here, we identify properties of processing in the brain that may contribute to this ability. We describe a two-part algorithm that draws on specific features of neural computation to achieve OOD generalization, and provide a proof of concept by evaluating performance on two challenging cognitive tasks. First, we draw on the fact that the mammalian brain represents metric spaces using a grid cell code (e.g., in the entorhinal cortex): abstract representations of relational structure, organized in recurring motifs that cover the representational space. Second, we propose an attentional mechanism that operates over the grid cell code using a Determinantal Point Process (DPP), which we call DPP attention (DPP-A): a transformation that ensures maximum sparseness in the coverage of that space. We show that a loss function combining standard task-optimized error with DPP-A can exploit the recurring motifs in the grid cell code, and can be integrated with common architectures to achieve strong OOD generalization on analogy and arithmetic tasks. This provides both an interpretation of how the grid cell code in the mammalian brain may contribute to generalization performance, and a potential means of improving such capabilities in artificial neural networks.
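
The abstract describes DPP-A only at a high level: an attention mechanism over the grid cell code whose objective rewards determinant-maximizing (i.e., maximally diverse, non-redundant) coverage of the representational space, combined with a standard task loss. As a rough illustration of that core idea, below is a minimal PyTorch sketch of a log-determinant diversity term added to a task loss; the function name, tensor shapes, the weighting beta, and the way attention enters the kernel are illustrative assumptions, not the paper's actual DPP-A implementation.

    # Minimal sketch of a DPP-style diversity regularizer on attention,
    # assuming PyTorch. Names, shapes, and the combination rule are
    # illustrative assumptions, not the authors' DPP-A implementation.
    import torch

    def dpp_attention_loss(task_loss, grid_embeddings, attn_weights,
                           beta=0.1, eps=1e-6):
        """Combine a task loss with a DPP-inspired diversity term.

        grid_embeddings: (n, d) embeddings of n grid-code units.
        attn_weights:    (n,) nonnegative attention over those units.
        """
        # Attention-weighted kernel L = diag(sqrt(w)) K diag(sqrt(w)),
        # where K = V V^T is the similarity kernel over embeddings.
        V = grid_embeddings * attn_weights.clamp_min(0).sqrt().unsqueeze(-1)
        L = V @ V.T + eps * torch.eye(V.shape[0])  # jitter keeps L positive definite
        # log det(L) is the log-volume spanned by the attended units;
        # larger volume means more diverse, less redundant coverage.
        return task_loss - beta * torch.logdet(L)

    # Hypothetical usage with random inputs:
    emb = torch.randn(8, 16)                  # 8 grid-code units, 16-dim embeddings
    w = torch.softmax(torch.randn(8), dim=0)  # attention over the units
    loss = dpp_attention_loss(torch.tensor(1.0), emb, w)

Minimizing this loss trades task error against the log-volume spanned by the attention-weighted embeddings, pushing attention toward units that cover the space with little redundancy, which is one way to read the "maximum sparseness in the coverage" that the abstract describes.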
