
Learning Behavior Representations Through Multi-Timescale Bootstrapping (2206.07041v1)

Published 14 Jun 2022 in cs.LG

Abstract: Natural behavior consists of dynamics that are unpredictable, can switch suddenly, and unfold over many different timescales. While some success has been found in building representations of behavior under constrained or simplified task-based conditions, many of these models cannot be applied to free and naturalistic settings because they assume a single scale of temporal dynamics. In this work, we introduce Bootstrap Across Multiple Scales (BAMS), a multi-scale representation learning model for behavior: we combine a pooling module that aggregates features extracted by encoders with different temporal receptive fields, and design a set of latent objectives to bootstrap the representations in each respective space to encourage disentanglement across different timescales. We first apply our method to a dataset of quadrupeds navigating different terrain types and show that our model captures the temporal complexity of behavior. We then apply our method to the MABe 2022 Multi-Agent Behavior Challenge, where our model ranks 3rd overall and 1st on two subtasks, demonstrating the importance of incorporating multiple timescales when analyzing behavior.
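
The central architectural idea in the abstract, parallel encoders with different temporal receptive fields whose features are pooled into a single multi-scale representation, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the dilated-convolution encoders, layer sizes, and concatenation-based pooling are all placeholders, and the paper's latent bootstrapping objectives are omitted.

```python
# Minimal sketch of multi-timescale feature extraction in the spirit of BAMS.
# The dilated-convolution encoders, dimensions, and concatenation pooling are
# illustrative assumptions; the paper's bootstrapping objectives are not shown.
import torch
import torch.nn as nn


class TemporalEncoder(nn.Module):
    """1D convolutional encoder whose dilation sets its temporal receptive field."""

    def __init__(self, in_dim, hidden_dim, dilation):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_dim, hidden_dim, kernel_size=3,
                      dilation=dilation, padding=dilation),
            nn.ReLU(),
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3,
                      dilation=dilation, padding=dilation),
        )

    def forward(self, x):   # x: (batch, in_dim, time)
        return self.net(x)  # (batch, hidden_dim, time)


class MultiScaleEncoder(nn.Module):
    """Runs encoders at short and long timescales and pools their features."""

    def __init__(self, in_dim=24, hidden_dim=64, dilations=(1, 8)):
        super().__init__()
        self.encoders = nn.ModuleList(
            [TemporalEncoder(in_dim, hidden_dim, d) for d in dilations]
        )

    def forward(self, x):
        # Each encoder yields a latent sequence at its own timescale; pooling
        # here is simple channel-wise concatenation of the per-scale features.
        return torch.cat([enc(x) for enc in self.encoders], dim=1)


# Usage: 10 behavioral sequences with 24 input features over 500 time steps.
model = MultiScaleEncoder()
z = model(torch.randn(10, 24, 500))
print(z.shape)  # torch.Size([10, 128, 500])
```

Keeping the per-scale latent spaces separate before pooling is what lets distinct objectives act on each timescale, which is the motivation the abstract gives for the bootstrapping losses.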

Authors (7)
  1. Mehdi Azabou (15 papers)
  2. Michael Mendelson (3 papers)
  3. Maks Sorokin (8 papers)
  4. Shantanu Thakoor (15 papers)
  5. Nauman Ahad (8 papers)
  6. Carolina Urzay (3 papers)
  7. Eva L. Dyer (26 papers)
Citations (5)
