ILuvUI: Instruction-tuned LangUage-Vision modeling of UIs from Machine Conversations (2310.04869v1)

Published 7 Oct 2023 in cs.HC, cs.AI, cs.CL, and cs.CV

Abstract: Multimodal Vision-Language Models (VLMs) enable powerful applications from their fused understanding of images and language, but many perform poorly on UI tasks due to the lack of UI training data. In this paper, we adapt a recipe for generating paired text-image training data for VLMs to the UI domain by combining existing pixel-based methods with an LLM. Unlike prior art, our method requires no human-provided annotations, and it can be applied to any dataset of UI screenshots. We generate a dataset of 335K conversational examples paired with UIs that cover Q&A, UI descriptions, and planning, and use it to fine-tune a conversational VLM for UI tasks. To assess the performance of our model, we benchmark it on UI element detection tasks, evaluate response quality, and showcase its applicability to multi-step UI navigation and planning.
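The abstract does not spell out the data-generation recipe, but the core idea (pixel-based UI detections serialized into text and handed to an LLM that authors a grounded conversation) can be sketched roughly as follows. All element types, labels, and prompt wording here are illustrative assumptions, not the paper's actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class UIElement:
    kind: str                      # hypothetical detector output, e.g. "button"
    label: str                     # visible text or accessibility label
    bbox: tuple[int, int, int, int]  # (x, y, w, h) in screenshot pixels

def serialize_ui(elements: list[UIElement]) -> str:
    """Flatten pixel-based detections into a textual scene description."""
    lines = [f"- {e.kind} '{e.label}' at {e.bbox}" for e in elements]
    return "UI elements detected on screen:\n" + "\n".join(lines)

def build_generation_prompt(elements: list[UIElement]) -> str:
    """Prompt asking an LLM to invent a Q&A pair grounded in the detections.

    The generated (screenshot, conversation) pairs would then form the
    VLM fine-tuning dataset -- no human annotation required.
    """
    return (
        serialize_ui(elements)
        + "\n\nWrite a short question a user might ask about this screen, "
          "followed by an accurate answer grounded only in the elements above."
    )

# Example screenshot detections (invented for illustration):
elements = [
    UIElement("text field", "Email", (40, 300, 240, 32)),
    UIElement("button", "Sign in", (120, 400, 80, 32)),
]
prompt = build_generation_prompt(elements)
```

Because the prompt is derived entirely from detector output, the same function can be run over any corpus of UI screenshots, which is what makes the recipe annotation-free.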

Authors (4)
  1. Yue Jiang (104 papers)
  2. Eldon Schoop (10 papers)
  3. Amanda Swearngin (14 papers)
  4. Jeffrey Nichols (25 papers)
Citations (10)