From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces (2306.00245v2)

Published 31 May 2023 in cs.LG, cs.CL, cs.CV, and cs.HC

Abstract: Much of the previous work towards digital agents for graphical user interfaces (GUIs) has relied on text-based representations (derived from HTML or other structured data sources), which are not always readily available. These input representations have been often coupled with custom, task-specific action spaces. This paper focuses on creating agents that interact with the digital world using the same conceptual interface that humans commonly use -- via pixel-based screenshots and a generic action space corresponding to keyboard and mouse actions. Building upon recent progress in pixel-based pretraining, we show, for the first time, that it is possible for such agents to outperform human crowdworkers on the MiniWob++ benchmark of GUI-based instruction following tasks.

Authors (9)
  1. Peter Shaw
  2. Mandar Joshi
  3. James Cohan
  4. Jonathan Berant
  5. Panupong Pasupat
  6. Hexiang Hu
  7. Urvashi Khandelwal
  8. Kenton Lee
  9. Kristina Toutanova

Summary

From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces

The paper "From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces" develops agents that interact with graphical user interfaces (GUIs) using pixel-based observations and a generic action space. Prior work in this area has relied heavily on structured text representations such as HTML and DOM trees, which are not always available or well aligned with what is rendered on screen. Instead, this work frames the task environment in terms of pixel-based screenshots and the generic interaction methods humans typically employ, namely mouse clicks and keyboard input, removing the dependence on parsed UI structure.
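
To make the generic action space concrete, the sketch below shows one plausible way to represent mouse and keyboard actions as plain data and serialize them as short strings for a text decoder to emit. The action vocabulary and coordinate handling here are illustrative assumptions, not the paper's exact specification.

```python
# Illustrative sketch of a generic, human-like GUI action space (the paper's
# exact action vocabulary and coordinate binning may differ). Actions are
# serialized to short strings so a text decoder can emit them token by token.
from dataclasses import dataclass
from typing import Union

@dataclass
class Click:
    x: int  # horizontal screen coordinate (e.g. binned to a coarse grid)
    y: int  # vertical screen coordinate

@dataclass
class TypeText:
    text: str  # characters typed into the currently focused element

Action = Union[Click, TypeText]

def to_target_string(action: Action) -> str:
    """Serialize an action as a decoder target (hypothetical format)."""
    if isinstance(action, Click):
        return f"click {action.x} {action.y}"
    return f"type {action.text}"

# Example: the decoder would be trained to emit strings like these.
print(to_target_string(Click(x=42, y=17)))  # -> "click 42 17"
print(to_target_string(TypeText("hello")))  # -> "type hello"
```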

Architecture: Pix2Act

The paper introduces Pix2Act, a model based on the Pix2Struct architecture, which pairs an image transformer encoder with a text transformer decoder to interpret GUI state from pixel inputs. Pix2Act relies solely on visual representations of GUIs, bypassing the need for structured textual data that may be unavailable, for example in heavily scripted web applications or sandboxed environments.

Pix2Act inherits its foundational capabilities from Pix2Struct's pre-training, which maps screenshots to simplified HTML representations. This pre-training equips Pix2Act to recognize interface layouts and interpret natural language embedded in GUIs, which it then translates into suitable mouse and keyboard interactions, paralleling how human users engage with digital interfaces.
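
Pix2Act's own training code is not reproduced here, but the Pix2Struct backbone is available through Hugging Face Transformers. The following minimal sketch, assuming the google/pix2struct-base checkpoint and a local screenshot.png, shows how the pre-trained screenshot parser can be queried; Pix2Act instead fine-tunes such a model to emit action strings.

```python
# Minimal sketch: query the pre-trained Pix2Struct screenshot parser.
# Assumes the Hugging Face checkpoint "google/pix2struct-base" and a local
# screenshot file; Pix2Act's fine-tuning on GUI actions is not shown.
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base")

image = Image.open("screenshot.png")
inputs = processor(images=image, return_tensors="pt")  # flattened image patches

# The pre-trained decoder emits a simplified-HTML-style parse of the screenshot;
# Pix2Act fine-tunes this decoder to emit mouse/keyboard action strings instead.
outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(outputs[0], skip_special_tokens=True))
```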

Numerical Results and Performance

Notably, Pix2Act outperforms preceding models such as CC-Net while operating in a pixel-only input setting. On the MiniWob++ benchmark, a collection of tasks simulating web-based GUI interactions, Pix2Act surpasses the performance of human crowdworkers as well as prior models that leverage structured inputs. Numerically, it improves task scores roughly fourfold over the best previous results that do not rely on DOM data. This demonstrates that agents restricted to pixel-based observations can match or surpass those using more conventional structured representations.

The paper also highlights the importance of Pix2Struct's pre-training: behavioral cloning performance drops substantially when this step is omitted. Combining behavioral cloning with reinforcement learning yields the strongest results, with tree search used to iteratively refine the policy.
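
As an illustration of the behavioral cloning stage, the following sketch shows a single teacher-forced cross-entropy update on demonstrator action strings. The model, optimizer, and batch interfaces are assumed placeholders rather than the paper's code, and the reinforcement learning and tree search components are not reproduced.

```python
# Minimal behavioral-cloning sketch (assumed generic interfaces, not the
# paper's code): teacher-forced cross-entropy on demonstrator action strings.
import torch
import torch.nn.functional as F

PAD_ID = 0  # assumed padding token id

def behavioral_cloning_step(model, optimizer, pixels, action_tokens):
    """One gradient step of behavioral cloning.

    pixels:        screenshot features, shape (B, ...)
    action_tokens: tokenized demonstrator action strings, shape (B, T)
    model:         callable returning next-token logits of shape (B, T-1, V)
    """
    logits = model(pixels, action_tokens[:, :-1])   # predict next token
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),        # (B*(T-1), V)
        action_tokens[:, 1:].reshape(-1),           # shifted targets
        ignore_index=PAD_ID,
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```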

Implications and Speculative Future Directions

From a practical perspective, Pix2Act sets a precedent for digital agents that can automate tasks in environments where structured text representations are unavailable, potentially enhancing digital accessibility and automation. Theoretically, the paper contributes to a broader understanding of how vision-centric AI systems can be designed to interpret and act on complex visual information much as human operators do.

Future research trajectories could extend the findings of Pix2Act across a wider range of settings, potentially evolving into more generalizable AI systems that can handle increasingly intricate interactions with diverse types of GUIs. Scaling up the pre-training regimes or incorporating multimodal learning frameworks may further draw connections between pixel-based learning and insights traditionally sourced from LLMs, thus marrying the domains of NLP and computer vision more closely.

Moreover, refining policy-improvement techniques such as tree search, or exploring alternative reinforcement learning methods, could expand the agent's adaptability and robustness across varied digital environments. Overall, the developments reported in this paper open a promising avenue of research that brings machine interaction closer to human capabilities and expands the boundaries of AI interfaces.
