
Learning how to Interact with a Complex Interface using Hierarchical Reinforcement Learning (2204.10374v1)

Published 21 Apr 2022 in cs.LG

Abstract: Hierarchical Reinforcement Learning (HRL) allows interactive agents to decompose complex problems into a hierarchy of sub-tasks. Higher-level tasks can invoke the solutions of lower-level tasks as if they were primitive actions. In this work, we study the utility of hierarchical decompositions for learning an appropriate way to interact with a complex interface. Specifically, we train HRL agents that can interface with applications in a simulated Android device. We introduce a Hierarchical Distributed Deep Reinforcement Learning architecture that learns (1) subtasks corresponding to simple finger gestures, and (2) how to combine these gestures to solve several Android tasks. Our approach relies on goal conditioning and can be used more generally to convert any base RL agent into an HRL agent. We use the AndroidEnv environment to evaluate our approach. For the experiments, the HRL agent uses a distributed version of the popular DQN algorithm to train different components of the hierarchy. While the native action space is completely intractable for simple DQN agents, our architecture can be used to establish an effective way to interact with different tasks, significantly improving the performance of the same DQN agent over different levels of abstraction.
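The abstract describes a goal-conditioned hierarchy in which a high-level policy selects abstract gestures (e.g., where to tap) and a low-level policy expands each gesture into primitive touch actions. A minimal sketch of that wrapper idea is below; all class and method names are hypothetical illustrations, not the paper's actual implementation, and the random goal picker stands in for the distributed DQN heads the authors train.

```python
import random

class LowLevelGesturePolicy:
    """Expands a goal (x, y) into primitive touchscreen actions.

    Hypothetical sketch: in the paper this level is a trained,
    goal-conditioned agent; here a 'tap' is hard-coded as
    touch-down followed by lift at the goal coordinates.
    """
    def act(self, goal_xy):
        x, y = goal_xy
        return [("touch", x, y), ("lift", x, y)]

class HighLevelPolicy:
    """Chooses abstract goals instead of raw screen coordinates per step."""
    def __init__(self, screen_w, screen_h, seed=0):
        self.rng = random.Random(seed)
        self.w, self.h = screen_w, screen_h

    def act(self, observation):
        # Placeholder: sample a tap location uniformly; a DQN would
        # instead score candidate goals given the observation.
        return (self.rng.randrange(self.w), self.rng.randrange(self.h))

class HierarchicalAgent:
    """Composes the two levels: any base RL agent could fill either slot."""
    def __init__(self, high, low):
        self.high, self.low = high, low

    def step(self, observation):
        goal = self.high.act(observation)   # high level picks a goal
        return self.low.act(goal)           # low level emits primitives
```

Usage: `HierarchicalAgent(HighLevelPolicy(1080, 1920), LowLevelGesturePolicy()).step(obs)` returns a two-action primitive sequence for one tap, illustrating how the hierarchy shrinks the otherwise intractable native action space.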

Authors (8)
  1. Gheorghe Comanici
  2. Amelia Glaese
  3. Anita Gergely
  4. Daniel Toyama
  5. Zafarali Ahmed
  6. Tyler Jackson
  7. Philippe Hamel
  8. Doina Precup
Citations (1)