
When Actions Go Off-Task: Detecting and Correcting Misaligned Actions in Computer-Use Agents

Published 9 Feb 2026 in cs.CL (2602.08995v1)

Abstract: Computer-use agents (CUAs) have made tremendous progress in the past year, yet they still frequently produce misaligned actions that deviate from the user's original intent. Such misaligned actions may arise from external attacks (e.g., indirect prompt injection) or from internal limitations (e.g., erroneous reasoning). They not only expose CUAs to safety risks, but also degrade task efficiency and reliability. This work makes the first effort to define and study misaligned action detection in CUAs, with comprehensive coverage of both externally induced and internally arising misaligned actions. We further identify three common categories in real-world CUA deployment and construct MisActBench, a benchmark of realistic trajectories with human-annotated, action-level alignment labels. Moreover, we propose DeAction, a practical and universal guardrail that detects misaligned actions before execution and iteratively corrects them through structured feedback. DeAction outperforms all existing baselines across offline and online evaluations with moderate latency overhead: (1) On MisActBench, it outperforms baselines by over 15% absolute in F1 score; (2) In online evaluation, it reduces attack success rate by over 90% under adversarial settings while preserving or even improving task success rate in benign environments.

Summary

  • The paper presents an intent-centric framework that categorizes misaligned actions into malicious instruction following, harmful unintended behavior, and other task-irrelevant behavior.
  • It introduces MisActBench, a benchmark of 558 trajectories and 2,264 human-annotated actions for rigorously evaluating detection performance.
  • The proposed DeAction guardrail reduces attack success rates by over 90% and corrects 78% of flagged actions, enabling reliable real-time correction.

Intent-Centric Detection and Correction of Misaligned Actions in Computer-Use Agents

Problem Motivation and Formulation

This paper addresses the challenge of action misalignment in computer-use agents (CUAs)—agents that automate digital workflows by interacting with graphical user interfaces. Despite rapid advancements, CUAs frequently perform actions that deviate from the user's authentic intent, either due to external adversarial attacks (e.g., indirect prompt injection) or internal failures (e.g., erroneous reasoning). Such misalignments pose substantial reliability and safety risks, including unauthorized resource modification, stalled progress, and diminished user trust. Prevailing safety-centric guardrails and trajectory-level risk annotations fail to cover these failures robustly. The paper advocates an intent-centric paradigm, formally defining action alignment and categorizing misaligned actions into three classes: (1) Malicious Instruction Following, (2) Harmful Unintended Behavior, and (3) Other Task-Irrelevant Behavior. The research objective is to detect and correct misaligned actions before execution, independent of traditional policy violation enumeration.
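The three-way taxonomy above can be sketched as a simple label set; the names below are illustrative and not drawn from the paper's code.

```python
from enum import Enum
from typing import Optional

class Misalignment(Enum):
    """The paper's three misaligned-action categories (illustrative names)."""
    MALICIOUS_INSTRUCTION_FOLLOWING = "malicious_instruction_following"
    HARMFUL_UNINTENDED_BEHAVIOR = "harmful_unintended_behavior"
    OTHER_TASK_IRRELEVANT = "other_task_irrelevant"

def is_aligned(label: Optional[Misalignment]) -> bool:
    """Under the intent-centric view, an action is aligned iff no
    misalignment label applies to it."""
    return label is None
```

The key design point is that alignment is defined against user intent, not against an enumerated policy-violation list.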

MisActBench: Comprehensive Benchmark for Action Misalignment

To enable rigorous evaluation, the authors introduce MisActBench, a benchmark consisting of 558 interaction trajectories and 2,264 human-annotated actions. This benchmark represents both externally induced and internally arising misaligned actions through a hybrid trajectory collection pipeline. Attacks are sampled from adversarial benchmarks including OS-Harm, DoomArena, RedTeamCUA, and RiOSWorld, covering indirect prompt injection and diverse attack vectors. Intrinsic misalignments are synthesized by injecting plausible unintended subgoals into benign OSWorld trajectories via automated LLM-guided contextualization and merging. All trajectories undergo manual validation for realism and execution consistency, and annotation achieves high inter-annotator agreement (Fleiss' kappa 0.84), reflecting robust labeling fidelity. MisActBench uniquely provides action-level multimodal alignment labels, enabling granular assessment of pre-execution guardrails.
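One way to picture the benchmark's action-level labels is as one record per action in a trajectory; the field names here are assumptions for illustration, not the benchmark's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotatedAction:
    """A hypothetical action-level alignment record (illustrative schema)."""
    trajectory_id: str        # which interaction trajectory this step belongs to
    step: int                 # position of the action within the trajectory
    action: str               # e.g. a click/type command issued by the CUA
    screenshot_path: str      # multimodal context observed at this step
    aligned: bool             # human-annotated alignment verdict
    category: Optional[str]   # misalignment class, None when aligned
```

Action-level (rather than trajectory-level) labels are what allow a pre-execution guardrail to be scored on every individual decision.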

DeAction: Universal Runtime Guardrail

The paper proposes DeAction, a model-agnostic runtime guardrail that intercepts proposed actions before execution to assess their alignment with user intent. DeAction operates via a two-stage detection pipeline:

  • Stage 1—Fast Check: Lightweight screening for routine, clearly aligned actions using an LLM constrained to concise output, minimizing latency.
  • Stage 2—Systematic Analysis: Structured, multi-component reasoning leveraging a compact narrative summary of interaction history. The four analysis steps are injection analysis, action understanding, outcome prediction, and misalignment analysis, collectively producing human-readable rationales.

Actions failing Fast Check are subjected to Systematic Analysis, facilitating efficient resource allocation and maintaining moderate per-step latency. History dependencies are managed through narrative summarization, which compresses transition data into brief, token-efficient natural language context.
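The two-stage routing described above can be sketched as follows, with `fast_check` and `systematic_analysis` standing in for the LLM-backed stages (both names are hypothetical stand-ins, not the paper's API).

```python
from typing import Callable

# Each stage takes (action, history_summary) and returns True when it
# judges the action aligned with the user's intent.
Stage = Callable[[str, str], bool]

def route_action(action: str, history_summary: str,
                 fast_check: Stage, systematic_analysis: Stage) -> bool:
    """Two-stage routing: clearly aligned actions pass the cheap Fast
    Check; everything else escalates to the full Systematic Analysis.
    Returns True if the action is ultimately judged aligned."""
    if fast_check(action, history_summary):       # Stage 1: lightweight screen
        return True
    return systematic_analysis(action, history_summary)  # Stage 2: deep reasoning
```

Routing most routine actions through the cheap stage is what keeps per-step latency moderate while reserving expensive reasoning for suspicious cases.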

Upon detecting misalignment, DeAction initiates an iterative structured-feedback loop, guiding the agent to revise its proposal until an aligned action is produced or the retry limit is reached. This correction-by-feedback mechanism recovers agent progress while avoiding the pitfalls of outright action blocking.
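A minimal sketch of this correction-by-feedback loop, assuming a `propose` callable for the agent and a `check` callable for the detector (both hypothetical stand-ins, not the paper's implementation):

```python
def correct_with_feedback(propose, check, max_retries=3):
    """Iterative correction-by-feedback: re-prompt the agent with the
    detector's rationale until an aligned action emerges or the retry
    budget is spent.  `propose(feedback)` returns the agent's next
    proposed action; `check(action)` returns (aligned, rationale)."""
    feedback = None
    for _ in range(max_retries + 1):
        action = propose(feedback)
        aligned, rationale = check(action)
        if aligned:
            return action        # approved for execution
        feedback = rationale     # structured feedback drives the revision
    return None                  # retries exhausted: withhold the action
```

Unlike outright blocking, a successful revision lets the task continue, which is how the mechanism preserves (or improves) benign task success rates.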

Numerical Results and Empirical Insights

Offline (MisActBench)

DeAction achieves the strongest results across multiple backbone LLMs. F1 scores exceed 80% with strong models (e.g., GPT-5.1), with precision up to 89.9%, substantially outperforming the Task Shield and InferAct baselines, which over-flag (precision ~50%) despite high recall. Malicious instruction following is the most detectable category (recall ~90%), while harmful unintended behavior and task-irrelevant actions show lower detection and classification scores. Ablation studies validate narrative summaries as the best history representation, outperforming raw screenshots and action-only context in both detection efficacy and token efficiency while reducing per-step latency from 18s to 11s.
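For reference, F1 is the harmonic mean of precision and recall, which is why an over-flagging baseline's high recall cannot offset ~50% precision:

```python
def f1(precision: float, recall: float) -> float:
    """F1 score: the harmonic mean of precision and recall, the headline
    detection metric in the offline evaluation."""
    return 2 * precision * recall / (precision + recall)

# The harmonic mean is dominated by the weaker of the two terms, so an
# over-flagging detector (precision ~0.50, recall ~0.95) still scores
# well below a balanced one (precision ~0.899, recall ~0.80).
```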

Online (Interactive Environments)

In end-to-end adversarial evaluation (RedTeamCUA), DeAction reduces attack success rate (ASR) by over 90% across all tested CUAs. Task utility under attack (UA) also improves, and task success rate (SR) in benign environments (OSWorld) is maintained or improved, indicating no adverse impact on normal workflows. DeAction is Pareto-optimal in latency (median per-step guardrail time ~7s) versus protection efficacy, outperforming DSP, PromptArmor, and output guardrails, which either degrade benign performance or incur excessive latency from aggressive correction triggering.

Runtime analysis confirms that DeAction's two-stage routing enables direct approval of 45% of actions via Fast Check, and 78% of flagged misaligned actions are ultimately corrected, with 62% fixed in a single revision. Cost-efficiency studies show that downsizing Fast Check and Summarizer to smaller LLMs reduces inference cost by 65% without degrading detection performance; in contrast, Systematic Analysis requires strong reasoning capability to sustain high accuracy.

Error Analysis and Limitations

Representative errors arise from shallow Fast Check reasoning fooled by benign-appearing adversarial instructions, injection analysis overlooking well-disguised environmental cues, inaccurate action grounding, flawed outcome prediction, and overly narrow intent-level reasoning causing false positives. Malicious instruction following is well-addressed due to explicit environmental cues; internally arising and non-malicious task-irrelevant misalignments remain challenging due to their subtlety and semantic complexity. Further progress requires improved grounding, robust injection detection, and refined intent modeling.

Implications and Future Directions

Practically, DeAction provides a universal, plug-and-play guardrail applicable across heterogeneous CUA architectures without access to internal agent parameters. It achieves strong protection and correction with moderate computational overhead, enabling deployment in real-world OS and web environments where action-level reliability and user trust are critical. MisActBench, in turn, establishes a standardized evaluation protocol for action misalignment, supporting future advances in intent-centric agent alignment.

Theoretically, the definition and categorization of misaligned actions operationalize action-level intent alignment distinct from policy violation detection, opening avenues for research in agent reasoning, fine-grained intent modeling, and long-horizon task consistency. Extending this paradigm to other forms of agentic misalignment and grounding it in richer semantic workflows will further mature the domain.

Conclusion

The paper establishes a formal framework and evaluation protocol for detecting and correcting misaligned actions in computer-use agents, distinguishing intent-centric alignment from traditional safety-centric approaches. MisActBench and DeAction collectively advance the state of the art in runtime agent guardrails, achieving substantial gains in action-level reliability, adversarial robustness, and efficient alignment correction. The research is positioned to inform both practical deployment and theoretical modeling of agent action alignment in increasingly autonomous digital environments (2602.08995).
