Embodiment in multimodal large language models (2510.13845v1)
Abstract: Multimodal LLMs (MLLMs) have demonstrated extraordinary progress in bridging textual and visual inputs. However, MLLMs still face challenges in situated physical and social interactions in sensorially rich, multimodal, real-world settings, where the embodied experience of the living organism is essential. We posit that the next frontiers for MLLM development require incorporating both internal and external embodiment -- modeling not only external interactions with the world, but also internal states and drives. Here, we describe mechanisms of internal and external embodiment in humans and relate these to current advances in MLLMs in the early stages of aligning to human representations. Our dual-embodied framework proposes to model interactions between these forms of embodiment in MLLMs to bridge the gap between multimodal data and world experience.