
ScreenAgent: A Vision Language Model-driven Computer Control Agent (2402.07945v1)

Published 9 Feb 2024 in cs.HC, cs.AI, and cs.CV

Abstract: Existing Large Language Models (LLM) can invoke a variety of tools and APIs to complete complex tasks. The computer, as the most powerful and universal tool, could potentially be controlled directly by a trained LLM agent. Powered by the computer, we can hopefully build a more generalized agent to assist humans in various daily digital works. In this paper, we construct an environment for a Vision Language Model (VLM) agent to interact with a real computer screen. Within this environment, the agent can observe screenshots and manipulate the Graphics User Interface (GUI) by outputting mouse and keyboard actions. We also design an automated control pipeline that includes planning, acting, and reflecting phases, guiding the agent to continuously interact with the environment and complete multi-step tasks. Additionally, we construct the ScreenAgent Dataset, which collects screenshots and action sequences when completing a variety of daily computer tasks. Finally, we trained a model, ScreenAgent, which achieved computer control capabilities comparable to GPT-4V and demonstrated more precise UI positioning capabilities. Our attempts could inspire further research on building a generalist LLM agent. The code is available at \url{https://github.com/niuzaisheng/ScreenAgent}.

Overview of "ScreenAgent: A Vision Language Model-driven Computer Control Agent"

The paper under discussion introduces ScreenAgent, a Vision Language Model (VLM)-driven agent designed to interact with computer systems by directly manipulating graphical user interfaces (GUIs) with mouse and keyboard actions. The research extends the capabilities of existing LLMs by bridging the gap between AI agents and real-world computer interaction, aiming to automate a wide array of digital tasks.

Environment and Methodology

The authors constructed a comprehensive environment that enables the VLM agent to interact with real computer screens. The agent observes screenshots and executes mouse and keyboard actions via the VNC protocol. It operates sequentially through planning, acting, and reflecting phases, allowing for continuous interaction and the completion of multi-step tasks, and the pipeline lets the agent adapt to real-time changes in the computer environment. A minimal sketch of such a control loop appears below.
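The Python sketch below shows one way a plan-act-reflect loop could be wired over VNC using the vncdotool client library. The query_vlm() helper and the JSON action schema are hypothetical stand-ins, not the authors' actual interface; treat this as an assumed illustration of the pipeline's shape rather than the paper's implementation.

```python
# A minimal plan-act-reflect loop over VNC. query_vlm() is a
# hypothetical helper that sends a prompt plus a screenshot to a VLM
# backend and returns its text reply; the action format is illustrative.
import json
from vncdotool import api  # pip install vncdotool

def query_vlm(prompt: str, screenshot_path: str) -> str:
    """Hypothetical stand-in for the VLM backend (e.g. an HTTP call)."""
    raise NotImplementedError

def run_task(task: str, server: str = "localhost::5901") -> None:
    client = api.connect(server, password=None)
    try:
        # Planning phase: ask the VLM to break the task into subtasks.
        client.captureScreen("screen.png")
        plan = query_vlm(f"Plan subtasks for: {task}", "screen.png")
        for subtask in json.loads(plan):
            done = False
            while not done:
                # Acting phase: request concrete mouse/keyboard actions.
                client.captureScreen("screen.png")
                actions = json.loads(
                    query_vlm(f"Actions for subtask: {subtask}", "screen.png"))
                for act in actions:
                    if act["type"] == "click":
                        client.mouseMove(act["x"], act["y"])
                        client.mousePress(1)  # left button
                    elif act["type"] == "type":
                        for ch in act["text"]:
                            client.keyPress(ch)
                client.pause(1.0)  # let the GUI settle before re-observing
                # Reflecting phase: check the new screenshot and decide
                # whether to continue, retry, or move on.
                client.captureScreen("screen.png")
                verdict = query_vlm(
                    f"Did '{subtask}' succeed? Answer done or retry.",
                    "screen.png")
                done = verdict.strip().lower() == "done"
    finally:
        client.disconnect()
```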

To enable effective training and evaluation, the authors present the ScreenAgent Dataset, which comprises screenshots and action sequences for a variety of routine computer tasks. This dataset serves as a foundational resource for agent training, aiming to enhance both decision-making and action execution; an illustrative record layout is sketched below.
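Since the summary does not spell out the on-disk schema, the structure below is an assumed, illustrative layout of one annotated session (task description, per-step screenshot, and action list); the actual format in the authors' repository may differ.

```python
# An illustrative (assumed) record layout for screenshot/action session
# data in the spirit of the ScreenAgent Dataset. Field names and values
# are hypothetical, not the dataset's real schema.
example_record = {
    "task": "Change the desktop wallpaper to mountains.jpg",
    "language": "en",  # the dataset is bilingual (English and Chinese)
    "steps": [
        {
            "screenshot": "session_042/step_00.png",
            "thought": "Open the system settings from the taskbar.",
            "actions": [
                {"type": "click", "x": 1210, "y": 745},
            ],
        },
        {
            "screenshot": "session_042/step_01.png",
            "thought": "Search for the wallpaper panel.",
            "actions": [
                {"type": "click", "x": 312, "y": 168},
                {"type": "type", "text": "wallpaper"},
            ],
        },
    ],
}
```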

Numerical Evaluation

ScreenAgent was evaluated against GPT-4V and other state-of-the-art VLMs such as LLaVA-1.5 and CogAgent. The agent performed comparably to GPT-4V in most respects and notably surpassed it in precise UI positioning accuracy. This was achieved through a fine-tuning process that leveraged a mixture of object detection and web interaction datasets, adjusted for sequence alignment in action execution. One illustrative way to score positioning precision is sketched below.
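The metric below is an assumption made for illustration: it counts a predicted click as correct when it lands inside the target element's ground-truth bounding box. The paper may use a different scoring rule.

```python
# A sketch of one plausible UI-positioning metric: a predicted click is
# correct if it falls inside the target element's bounding box.
from typing import Iterable, Tuple

Box = Tuple[int, int, int, int]  # (left, top, right, bottom) in pixels

def click_accuracy(preds: Iterable[Tuple[int, int]],
                   targets: Iterable[Box]) -> float:
    hits, total = 0, 0
    for (x, y), (l, t, r, b) in zip(preds, targets):
        hits += int(l <= x <= r and t <= y <= b)
        total += 1
    return hits / total if total else 0.0

# Example: two predicted clicks, one of which lands in its target box.
print(click_accuracy([(100, 50), (400, 300)],
                     [(90, 40, 120, 60), (10, 10, 50, 50)]))  # 0.5
```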

Contributions and Implications

The major contributions of the research include:

  • An environment, structured as a reinforcement-learning-style observe-act loop, in which a VLM agent interacts with real computer systems.
  • The development of a structured control pipeline to enable continuous interaction through planned, reflective actions.
  • Creation of the bilingual ScreenAgent Dataset, covering 39 subcategories of computer interaction tasks spread across 6 themes.

This paper's approach to integrating VLMs with practical computer interfaces opens up possibilities for advancing autonomous AI agents that can perform routine digital tasks effectively, extending the current utilities of LLMs beyond mere text processing.

Future Directions

The implications of this research suggest multiple pathways for future developments:

  • Improving the precision of VLM agents in interfacing with diverse operating systems and GUIs.
  • Expanding dataset scope to include more complex and varied digital interactions.
  • Enhancing reflection mechanisms to mimic human cognitive processes more closely, potentially increasing task success rates.

These aspects will likely drive further investigation into creating robust, versatile AI agents capable of automating a wide array of tasks in digital environments, improving both productivity and accessibility.

In conclusion, the paper on ScreenAgent marks significant progress in VLM-driven computer control and sets the stage for developing more generalist AI agents with practical applications in everyday digital workspaces.

Authors (9)
  1. Runliang Niu
  2. Jindong Li
  3. Shiqi Wang
  4. Yali Fu
  5. Xiyu Hu
  6. Xueyuan Leng
  7. He Kong
  8. Yi Chang
  9. Qi Wang
Citations (19)