Magma: A Foundation Model for Multimodal AI Agents

Published 18 Feb 2025 in cs.CV, cs.AI, cs.HC, cs.LG, and cs.RO | arXiv:2502.13130v1

Abstract: We present Magma, a foundation model that serves multimodal AI agentic tasks in both the digital and physical worlds. Magma is a significant extension of vision-language (VL) models in that it not only retains the VL understanding ability (verbal intelligence) of the latter, but is also equipped with the ability to plan and act in the visual-spatial world (spatial-temporal intelligence), completing agentic tasks ranging from UI navigation to robot manipulation. To endow these agentic capabilities, Magma is pretrained on large amounts of heterogeneous data spanning images, videos, and robotics data, where actionable visual objects in images (e.g., clickable buttons in a GUI) are labeled with Set-of-Mark (SoM) for action grounding, and object movements in videos (e.g., the traces of human hands or robotic arms) are labeled with Trace-of-Mark (ToM) for action planning. Extensive experiments show that SoM and ToM are highly synergistic and facilitate the acquisition of the spatial-temporal intelligence that is fundamental to a wide range of tasks (Fig. 1). In particular, Magma sets new state-of-the-art results on UI navigation and robotic manipulation tasks, outperforming previous models specifically tailored to these tasks. On image- and video-related multimodal tasks, Magma also compares favorably to popular large multimodal models trained on much larger datasets. We make our model and code public for reproducibility at https://microsoft.github.io/Magma.

Summary

  • The paper introduces Magma, a unified foundation model for multimodal AI agents integrating vision-language and spatial-temporal reasoning.
  • Magma employs surrogate tasks like Set-of-Mark and Trace-of-Mark to unify heterogeneous datasets into an action prediction framework.
  • Evaluations show Magma achieves state-of-the-art performance on UI navigation and robotic manipulation tasks with significant quantitative improvements.

This paper introduces Magma, a unified foundation model that integrates vision-language understanding with spatial-temporal reasoning to perform diverse agentic tasks in both digital and physical environments.

  • It formulates surrogate tasks, Set-of-Mark (SoM) for action grounding and Trace-of-Mark (ToM) for action planning, that seamlessly convert heterogeneous datasets into a unified action-prediction framework (see the first sketch after this list).
  • The model jointly processes 2D UI screenshots, 7-DoF robotic manipulation data, and instructional videos using a convolutional vision encoder and an autoregressive LLM, effectively bridging the verbal and spatial token spaces (a minimal forward-pass sketch follows below).
  • Extensive evaluations demonstrate that Magma achieves state-of-the-art performance on UI navigation and robotic manipulation tasks, with significant quantitative improvements (e.g., nearly doubling success rates over baselines) and robust spatial reasoning on benchmarks such as BLINK and VisualWebBench.
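To make the SoM/ToM unification concrete, here is a minimal sketch of how such labeling could cast a UI click and a motion trace into the same (prompt, target) action-prediction format. All function and field names below are illustrative assumptions; the paper's actual data pipeline, mark extraction, and token serialization are not reproduced here.

```python
# Illustrative sketch only: SoM/ToM-style labeling that maps heterogeneous
# data (UI clicks, motion traces) onto one action-prediction format.
# Helper names and serialization are assumptions, not the paper's code.
from dataclasses import dataclass

@dataclass
class Sample:
    prompt: str   # language instruction plus marked-image description
    target: str   # action expressed over marks

def som_sample(instruction: str, marks: dict[int, tuple[int, int]],
               clicked_mark: int) -> Sample:
    """Set-of-Mark: candidate actionable elements get numeric marks;
    the target is the mark to act on (action grounding)."""
    mark_desc = ", ".join(f"[{i}] at {xy}" for i, xy in sorted(marks.items()))
    return Sample(
        prompt=f"{instruction}\nMarks: {mark_desc}",
        target=f"click mark [{clicked_mark}]",
    )

def tom_sample(instruction: str, marks: dict[int, tuple[int, int]],
               trace: list[tuple[int, int]]) -> Sample:
    """Trace-of-Mark: the target is a mark's future trajectory
    (e.g., a hand or gripper), serialized as tokens (action planning)."""
    mark_desc = ", ".join(f"[{i}] at {xy}" for i, xy in sorted(marks.items()))
    traj = " ".join(f"({x},{y})" for x, y in trace)
    return Sample(
        prompt=f"{instruction}\nMarks: {mark_desc}",
        target=f"move mark trace: {traj}",
    )

# A GUI click and a robot trajectory end up with the same (prompt, target) shape.
ui = som_sample("Open settings", {1: (40, 12), 2: (300, 12)}, clicked_mark=2)
robot = tom_sample("Pick up the cup", {1: (120, 88)},
                   [(120, 88), (130, 80), (142, 71)])
```

The point of the shared Sample shape is that UI navigation, robot manipulation, and video data all reduce to next-token prediction over one output space.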
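Similarly, a hedged sketch of the encoder-to-LLM coupling described in the second bullet: a convolutional vision backbone produces image tokens that are projected into the language model's embedding space and decoded autoregressively over one shared vocabulary of verbal and action tokens. Module sizes, layer choices, and names are assumptions for illustration, not Magma's released architecture.

```python
# Minimal sketch of a vision-encoder + autoregressive-LM agent.
# Shapes and modules are stand-ins chosen for brevity.
import torch
import torch.nn as nn

class VisionLanguageAgent(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512):
        super().__init__()
        # Stand-in for a ConvNeXt-style convolutional image encoder.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=4), nn.GELU(),
            nn.Conv2d(64, d_model, kernel_size=2, stride=2),
        )
        self.proj = nn.Linear(d_model, d_model)      # bridge into the LM token space
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=2)  # causal-LM stand-in
        self.head = nn.Linear(d_model, vocab_size)   # verbal and action tokens share one vocab

    def forward(self, image, text_ids):
        img = self.vision(image).flatten(2).transpose(1, 2)  # (B, N_img, D)
        img = self.proj(img)
        txt = self.embed(text_ids)                           # (B, T, D)
        seq = torch.cat([img, txt], dim=1)                   # single multimodal token stream
        # Causal mask so each position attends only to earlier tokens.
        L = seq.size(1)
        mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        h = self.lm(seq, mask=mask)
        return self.head(h[:, img.size(1):])         # logits for next verbal/action tokens

model = VisionLanguageAgent()
logits = model(torch.randn(1, 3, 224, 224), torch.randint(0, 32000, (1, 8)))
print(logits.shape)  # torch.Size([1, 8, 32000])
```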
