Training a Vision Language Model as Smartphone Assistant (2404.08755v1)

Published 12 Apr 2024 in cs.LG, cs.AI, cs.CV, and cs.HC

Abstract: Addressing the challenge of building a digital assistant capable of executing a wide array of user tasks, our research focuses on the realm of instruction-based mobile device control. We leverage recent advancements in LLMs and present a vision language model (VLM) that can fulfill diverse tasks on mobile devices. Our model functions by interacting solely with the user interface (UI). It uses the visual input from the device screen and mimics human-like interactions, encompassing gestures such as tapping and swiping. This generality in the input and output space allows our agent to interact with any application on the device. Unlike previous methods, our model operates not only on a single screen image but on vision-language sentences created from sequences of past screenshots along with corresponding actions. Evaluating our method on the challenging Android in the Wild benchmark demonstrates its promising efficacy and potential.
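The distinctive input encoding here is the interleaved history: instead of conditioning only on the current screenshot, the agent consumes a sequence of past screenshots alternating with the gesture taken after each one. Below is a minimal sketch of how such an episode could be assembled; the action format, normalized coordinates, and all helper names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the interleaved screenshot/action episode described
# in the abstract. Tap/Swipe, action_to_text, and build_episode_prompt are
# illustrative names, not the paper's API.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Tap:
    x: float  # normalized [0, 1] screen coordinates
    y: float

@dataclass
class Swipe:
    x0: float
    y0: float
    x1: float
    y1: float

Action = Union[Tap, Swipe]

def action_to_text(a: Action) -> str:
    """Serialize a gesture as a token string the VLM can emit or consume."""
    if isinstance(a, Tap):
        return f"tap {a.x:.3f} {a.y:.3f}"
    return f"swipe {a.x0:.3f} {a.y0:.3f} {a.x1:.3f} {a.y1:.3f}"

def build_episode_prompt(instruction: str,
                         screenshots: List[bytes],
                         past_actions: List[Action]) -> list:
    """Interleave past screenshots with the action taken on each screen,
    ending with the current screenshot; the model predicts the next action."""
    assert len(screenshots) == len(past_actions) + 1
    parts: list = [f"Goal: {instruction}"]
    for img, act in zip(screenshots, past_actions):
        parts.append(("image", img))          # placeholder for image tokens
        parts.append(action_to_text(act))     # gesture taken on that screen
    parts.append(("image", screenshots[-1]))  # current screen, action pending
    return parts
```

Because every step is expressed as a screenshot plus a plain-text gesture, the same encoding works for any app on the device, which is what gives the agent its generality across applications.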

Authors (3)
  1. Nicolai Dorka
  2. Janusz Marecki
  3. Ammar Anwar
Citations (1)