BadRobot: Manipulating Embodied LLMs in the Physical World (2407.20242v3)
Abstract: Embodied AI represents systems where AI is integrated into physical entities, enabling them to perceive and interact with their surroundings. LLM, which exhibits powerful language understanding abilities, has been extensively employed in embodied AI by facilitating sophisticated task planning. However, a critical safety issue remains overlooked: could these embodied LLMs perpetrate harmful behaviors? In response, we introduce BadRobot, a novel attack paradigm aiming to make embodied LLMs violate safety and ethical constraints through typical voice-based user-system interactions. Specifically, three vulnerabilities are exploited to achieve this type of attack: (i) manipulation of LLMs within robotic systems, (ii) misalignment between linguistic outputs and physical actions, and (iii) unintentional hazardous behaviors caused by world knowledge's flaws. Furthermore, we construct a benchmark of various malicious physical action queries to evaluate BadRobot's attack performance. Based on this benchmark, extensive experiments against existing prominent embodied LLM frameworks (e.g., Voxposer, Code as Policies, and ProgPrompt) demonstrate the effectiveness of our BadRobot. Warning: This paper contains harmful AI-generated language and aggressive actions.
- Hangtao Zhang (9 papers)
- Chenyu Zhu (5 papers)
- Xianlong Wang (25 papers)
- Ziqi Zhou (46 papers)
- Shengshan Hu (53 papers)
- Leo Yu Zhang (69 papers)
- Changgan Yin (1 paper)
- Minghui Li (51 papers)
- Lulu Xue (8 papers)
- Yichen Wang (61 papers)
- Aishan Liu (72 papers)
- Peijin Guo (6 papers)