VUT: Versatile UI Transformer for Multi-Modal Multi-Task User Interface Modeling (2112.05692v1)

Published 10 Dec 2021 in cs.CV, cs.AI, cs.HC, and cs.LG

Abstract: User interface modeling is inherently multimodal, involving several distinct types of data: images, structures, and language. The tasks are also diverse, including object detection, language generation, and grounding. In this paper, we present VUT, a Versatile UI Transformer that takes multimodal input and accomplishes five distinct tasks with the same model. VUT combines a multimodal Transformer encoder, which jointly encodes UI images and structures and performs UI object detection when structures are absent from the input, with an auto-regressive Transformer that encodes language input and decodes output for both question answering and command grounding with respect to the UI. Our experiments show that, when trained jointly on multiple tasks, VUT substantially reduces the number of models and the footprint needed to perform them, while achieving accuracy exceeding or on par with baseline models trained for each task individually.
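The abstract describes a two-part design: a multimodal encoder over UI images and structures (falling back to object detection when structures are missing) and an auto-regressive language model for QA and command grounding. A minimal sketch of that task routing, with all class and method names being illustrative assumptions rather than the paper's actual code:

```python
class VUTSketch:
    """Illustrative sketch of VUT's two-component layout (names hypothetical)."""

    def encode_ui(self, image, structure=None):
        # Multimodal encoder: jointly encodes the UI screenshot and, when
        # available, its view-hierarchy structure. If the structure is absent,
        # the encoder must instead detect UI objects from the image alone.
        return {"tokens": (image, structure), "needs_detection": structure is None}

    def language_model(self, ui_state, text):
        # Auto-regressive component: encodes the language input and decodes
        # output, covering both question answering and command grounding.
        return f"decoded({text})"

    def run(self, task, image, structure=None, text=None):
        ui = self.encode_ui(image, structure)
        if task == "detection":
            return ui["needs_detection"]
        return self.language_model(ui, text)


model = VUTSketch()
print(model.run("detection", image="screen.png"))                      # True
print(model.run("qa", image="screen.png", structure="tree", text="q"))
```

The key point the sketch illustrates is that all five tasks share the same encoder, which is how the paper reduces the per-task model count and footprint.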

Authors (5)
  1. Yang Li (1140 papers)
  2. Gang Li (579 papers)
  3. Xin Zhou (319 papers)
  4. Mostafa Dehghani (64 papers)
  5. Alexey Gritsenko (16 papers)
Citations (31)