
COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration (1905.09275v2)

Published 22 May 2019 in cs.LG and cs.AI

Abstract: Data efficiency and robustness to task-irrelevant perturbations are long-standing challenges for deep reinforcement learning algorithms. Here we introduce a modular approach to addressing these challenges in a continuous control environment, without using hand-crafted or supervised information. Our Curious Object-Based seaRch Agent (COBRA) uses task-free intrinsically motivated exploration and unsupervised learning to build object-based models of its environment and action space. Subsequently, it can learn a variety of tasks through model-based search in very few steps and excel on structured hold-out tests of policy robustness.

Citations (114)

Summary

  • The paper introduces COBRA, a Model-Based Reinforcement Learning agent that achieves high data efficiency using unsupervised object discovery and curiosity-driven exploration.
  • COBRA utilizes unsupervised object-centric representations and a slot-structured transition model for robust learning and generalization in varied environments.
  • COBRA significantly improves data efficiency and robustness against task-irrelevant changes like distractor objects compared to baseline methods.

Overview of COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration

The paper introduces the Curious Object-Based seaRch Agent (COBRA), a novel approach to Model-Based Reinforcement Learning (MBRL) that improves data efficiency and robustness via unsupervised object discovery and curiosity-driven exploration. The objective is twofold: to develop agent capabilities without hand-crafted supervision, and to make learned policies robust to task-irrelevant changes in the environment. COBRA pursues these goals by combining object-oriented models with task-free, curiosity-driven exploration.

Technical Contributions

  1. Unsupervised Object-Centric Representation: COBRA utilizes a vision module based on MONet, which auto-regressively decomposes scenes into entity-based latent spaces without supervision. This module affords the agent a structured way to perceive and interpret its environment grounded in objects rather than pixels.
  2. Slot-Structured Transition Model: By learning action-conditioned dynamics over slot-structured object representations, COBRA efficiently predicts transitions from raw pixel observations, facilitating a model that generalizes well to diverse tasks.
  3. Curiosity-Driven Exploration: A curiosity-motivated exploration policy drives the discovery of environment dynamics. Trained adversarially against the transition model, the policy steers the agent toward actions that maximize the model's prediction error, providing an intrinsic motivation to probe and learn the environment's complexity.
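To make the interaction between the transition model and the curiosity signal concrete, the following sketch is illustrative only: the function and variable names are hypothetical, and a shared linear map stands in for the paper's learned neural transition model over slot representations. The curiosity reward is simply the model's prediction error.

```python
import numpy as np

def transition_model(slots, action, W):
    """Predict next slot states from current slots and an action.

    The same weights W are applied to every slot, a simplified stand-in
    for a slot-structured (per-object, weight-shared) transition model.
    slots: (num_slots, slot_dim), action: (action_dim,),
    W: (slot_dim + action_dim, slot_dim).
    """
    # Concatenate the action onto each slot, then apply the shared map.
    tiled = np.tile(action, (slots.shape[0], 1))
    inp = np.concatenate([slots, tiled], axis=1)
    return inp @ W  # (num_slots, slot_dim)

def curiosity_reward(slots, action, next_slots, W):
    """Intrinsic reward = transition-model prediction error (mean squared)."""
    pred = transition_model(slots, action, W)
    return float(np.mean((pred - next_slots) ** 2))
```

In the full agent, the slots would come from the unsupervised vision module, and the exploration policy would be trained to seek out states and actions with high `curiosity_reward`, i.e., transitions the model does not yet predict well.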

The agent is trained in two phases: a reward-free exploration phase, during which the vision module, transition model, and exploration policy are learned, followed by a task phase in which model-based search over the learned object representations enables rapid learning of specific tasks.
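The task phase can be caricatured as search against the learned model. The sketch below is a simplification with hypothetical names, not the authors' implementation: among a set of candidate actions, it selects the one whose predicted next state scores best under a learned reward predictor.

```python
import numpy as np

def one_step_search(slots, candidate_actions, transition_fn, reward_fn):
    """Choose the candidate action whose predicted successor state scores
    highest under reward_fn: a one-step caricature of model-based search.

    transition_fn(slots, action) -> predicted next slots
    reward_fn(slots) -> scalar predicted reward
    """
    scores = [reward_fn(transition_fn(slots, a)) for a in candidate_actions]
    return candidate_actions[int(np.argmax(scores))]
```

Because the transition model was already learned during exploration, only the reward predictor needs task-specific data, which is one intuition for why so few reward-driven steps suffice.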

Empirical Results

COBRA demonstrates significant improvements in data efficiency over strong baselines such as Maximum a Posteriori Policy Optimization (MPO), reaching proficiency with far fewer environment interactions and learning effectively from only a small number of reward-driven steps.

COBRA's policy also exhibits strong robustness to task-irrelevant perturbations. The compositionality of its object-centric representations yields stable performance under varying numbers of distractor objects and under alterations to task-irrelevant features such as object shape or color, reflecting COBRA's capacity for generalization.

Implications and Future Prospects

The implications of COBRA span both theoretical and practical dimensions. Theoretically, COBRA extends the discourse on object-centric MBRL, demonstrating the feasibility of learning useful models without task-specific rewards. Practically, its data efficiency could reduce the computational cost and resource demands of RL agent training, which is especially valuable for complex, real-world applications where data collection is expensive.

Future developments could enhance COBRA's scalability and adaptability to environments with more complex physics or multi-agent interactions by integrating more sophisticated representations and planning strategies. Additionally, exploring COBRA's potential in real-world robotics or dynamic environments where quick adaptation is crucial could prove valuable.

Conclusion

COBRA presents a notable advancement in MBRL by synergizing unsupervised learning, curiosity-driven exploration, and object-centric perception to yield a highly data-efficient learning paradigm. Its contributions to robustness and efficiency set the stage for future research aimed at further improving the practicality and capability of autonomous agents in intricate settings.
