From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces (2306.00245v2)
Abstract: Much of the previous work toward digital agents for graphical user interfaces (GUIs) has relied on text-based representations (derived from HTML or other structured data sources), which are not always readily available. These input representations have often been coupled with custom, task-specific action spaces. This paper focuses on creating agents that interact with the digital world using the same conceptual interface that humans commonly use -- via pixel-based screenshots and a generic action space corresponding to keyboard and mouse actions. Building upon recent progress in pixel-based pretraining, we show, for the first time, that it is possible for such agents to outperform human crowdworkers on the MiniWob++ benchmark of GUI-based instruction-following tasks.
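To make the notion of a "generic action space" concrete, the primitives reduce to mouse and keyboard events. The sketch below is purely illustrative: the class names and fields are assumptions for exposition, not the paper's actual interface.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Click:
    # Screen coordinates in pixels (hypothetical fields).
    x: int
    y: int

@dataclass
class KeyPress:
    # A single key identifier, e.g. "a" or "Enter" (hypothetical).
    key: str

# A generic action is one of the primitive event types.
Action = Union[Click, KeyPress]

def describe(action: Action) -> str:
    """Render an action as a short human-readable string."""
    if isinstance(action, Click):
        return f"click({action.x}, {action.y})"
    return f"press({action.key})"
```

Because every GUI task is expressed through the same few primitives, a single agent can, in principle, operate any interface without a task-specific action vocabulary.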
- Peter Shaw
- Mandar Joshi
- James Cohan
- Jonathan Berant
- Panupong Pasupat
- Hexiang Hu
- Urvashi Khandelwal
- Kenton Lee
- Kristina Toutanova