Align large autonomous agents with robust safety and control
Develop and validate alignment and control methodologies ensuring that large autonomous agents built on large language model backends (for example, web-acting systems such as OpenAI's Operator using the o3 reasoning model) satisfy robust safety and control requirements during complex, multi-step interactions with software and online environments.
References
These findings underscore both the rapid progress and the continuing open problems in aligning large autonomous agents with robust safety and control requirements.
— "Noosemia: Toward a Cognitive and Phenomenological Account of Intentionality Attribution in Human-Generative AI Interaction" (arXiv:2508.02622, Santis et al., 4 Aug 2025), Section 6.4: AI agents and the Digital Lebenswelt