Self-supervised learning of video representations from a child's perspective (2402.00300v3)

Published 1 Feb 2024 in cs.CV, cs.LG, cs.NE, and q-bio.NC

Abstract: Children learn powerful internal models of the world around them from a few years of egocentric visual experience. Can such internal models be learned from a child's visual experience with highly generic learning algorithms, or do they require strong inductive biases? Recent advances in collecting large-scale, longitudinal, developmentally realistic video datasets and generic self-supervised learning (SSL) algorithms are allowing us to begin to tackle this nature vs. nurture question. However, existing work typically focuses on image-based SSL algorithms and visual capabilities that can be learned from static images (e.g., object recognition), thus ignoring temporal aspects of the world. To close this gap, here we train self-supervised video models on longitudinal, egocentric headcam recordings collected from a child over a two-year period in their early development (6-31 months). The resulting models are highly effective at facilitating the learning of action concepts from a small number of labeled examples; they have favorable data size scaling properties; and they display emergent video interpolation capabilities. Video models also learn more accurate and more robust object representations than image-based models trained with the exact same data. These results suggest that important temporal aspects of a child's internal model of the world may be learnable from their visual experience using highly generic learning algorithms and without strong inductive biases.
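The "generic self-supervised learning algorithms" the abstract refers to are commonly instantiated as masked spatiotemporal autoencoding over video clips: most patches of a clip are hidden and the model is trained to reconstruct them, with loss computed on the masked patches only. The NumPy sketch below illustrates that objective in miniature; the patch sizes, the 90% masking ratio, and the mean-of-visible-patches "decoder" are illustrative assumptions standing in for a learned encoder-decoder, not the paper's actual architecture.

```python
import numpy as np

def patchify(video, pt=2, ph=4, pw=4):
    """Split a (T, H, W) video into non-overlapping spatiotemporal patches.

    Returns an array of shape (num_patches, pt * ph * pw). Patch sizes are
    illustrative; dimensions must divide evenly.
    """
    T, H, W = video.shape
    patches = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw)
    patches = patches.transpose(0, 2, 4, 1, 3, 5)  # group patch axes together
    return patches.reshape(-1, pt * ph * pw)

def masked_autoencode_loss(video, mask_ratio=0.9, seed=0):
    """Masked-autoencoding objective: hide most patches, score reconstruction
    of only the hidden ones. A mean-of-visible-patches predictor stands in
    for the learned network (illustrative only)."""
    rng = np.random.default_rng(seed)
    patches = patchify(video)
    n = patches.shape[0]
    n_mask = int(n * mask_ratio)
    idx = rng.permutation(n)
    masked, visible = idx[:n_mask], idx[n_mask:]
    prediction = patches[visible].mean(axis=0)  # stand-in for a decoder
    # Mean squared error on the masked patches only
    return float(np.mean((patches[masked] - prediction) ** 2))

# Toy 8-frame, 16x16 grayscale clip in place of a headcam recording
video = np.random.default_rng(1).random((8, 16, 16))
loss = masked_autoencode_loss(video)
```

In actual video-MAE-style training, the reconstruction comes from a transformer encoder-decoder and the loss gradient updates its weights; the high masking ratio is what forces the model to exploit temporal structure across frames rather than copy nearby pixels.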
