Multi-View Masked World Models for Visual Robotic Manipulation (2302.02408v2)

Published 5 Feb 2023 in cs.RO, cs.CV, and cs.LG

Abstract: Visual robotic manipulation research and applications often use multiple cameras, or views, to better perceive the world. How else can we utilize the richness of multi-view data? In this paper, we investigate how to learn good representations with multi-view data and utilize them for visual robotic manipulation. Specifically, we train a multi-view masked autoencoder which reconstructs pixels of randomly masked viewpoints and then learn a world model operating on the representations from the autoencoder. We demonstrate the effectiveness of our method in a range of scenarios, including multi-view control and single-view control with auxiliary cameras for representation learning. We also show that the multi-view masked autoencoder trained with multiple randomized viewpoints enables training a policy with strong viewpoint randomization and transferring the policy to solve real-robot tasks without camera calibration or an adaptation procedure. Video demonstrations are available at: https://sites.google.com/view/mv-mwm.
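The core pretraining idea in the abstract, reconstructing the pixels of randomly masked viewpoints, can be illustrated with a minimal sketch. The code below is not the paper's implementation; the function name, masking ratio, and zero-fill strategy are assumptions chosen for illustration. It shows only the view-masking step that would precede the autoencoder's reconstruction loss.

```python
import numpy as np

def mask_viewpoints(views, view_mask_ratio=0.5, rng=None):
    """Randomly zero out whole camera viewpoints from a multi-view batch.

    views: array of shape (V, H, W, C), one image per camera view.
    Returns (masked_views, view_mask), where view_mask[i] is True if
    view i was masked. A masked autoencoder would encode the visible
    views and be trained to reconstruct the pixels of the masked ones.
    """
    rng = rng or np.random.default_rng()
    num_views = views.shape[0]
    # Mask at least one view so there is always something to reconstruct.
    n_masked = max(1, int(round(num_views * view_mask_ratio)))
    masked_ids = rng.choice(num_views, size=n_masked, replace=False)

    masked_views = views.copy()
    masked_views[masked_ids] = 0.0  # hide the selected viewpoints
    view_mask = np.zeros(num_views, dtype=bool)
    view_mask[masked_ids] = True
    return masked_views, view_mask

# Example: 4 camera views of 32x32 RGB images.
views = np.random.rand(4, 32, 32, 3).astype(np.float32)
masked, mask = mask_viewpoints(views, view_mask_ratio=0.5,
                               rng=np.random.default_rng(0))
```

With a 0.5 ratio on 4 views, exactly 2 views are zeroed; a reconstruction loss (e.g. per-pixel MSE on the masked views) would then train the encoder to infer hidden viewpoints from visible ones.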

Authors (6)
  1. Younggyo Seo (25 papers)
  2. Junsu Kim (16 papers)
  3. Stephen James (42 papers)
  4. Kimin Lee (69 papers)
  5. Jinwoo Shin (196 papers)
  6. Pieter Abbeel (372 papers)
Citations (43)