Rewriting the Budget: A General Framework for Black-Box Attacks Under Cost Asymmetry (2506.06933v1)

Published 7 Jun 2025 in cs.LG, cs.AI, cs.CR, and cs.CV

Abstract: Traditional decision-based black-box adversarial attacks on image classifiers aim to generate adversarial examples by slightly modifying input images while keeping the number of queries low, where each query involves sending an input to the model and observing its output. Most existing methods assume that all queries have equal cost. However, in practice, queries may incur asymmetric costs; for example, in content moderation systems, certain output classes may trigger additional review, enforcement, or penalties, making them more costly than others. While prior work has considered such asymmetric cost settings, effective algorithms for this scenario remain underdeveloped. In this paper, we propose a general framework for decision-based attacks under asymmetric query costs, which we refer to as asymmetric black-box attacks. We modify two core components of existing attacks: the search strategy and the gradient estimation process. Specifically, we propose Asymmetric Search (AS), a more conservative variant of binary search that reduces reliance on high-cost queries, and Asymmetric Gradient Estimation (AGREST), which shifts the sampling distribution to favor low-cost queries. We design efficient algorithms that minimize total attack cost by balancing different query types, in contrast to earlier methods such as stealthy attacks that focus only on limiting expensive (high-cost) queries. Our method can be integrated into a range of existing black-box attacks with minimal changes. We perform both theoretical analysis and empirical evaluation on standard image classification benchmarks. Across various cost regimes, our method consistently achieves lower total query cost and smaller perturbations than existing approaches, with improvements of up to 40% in some settings.

Summary

  • The paper presents a novel framework that integrates Asymmetric Search (AS) and Asymmetric Gradient Estimation (AGREST) to lower query costs in adversarial attacks.
  • It optimizes query strategies by adaptively sampling regions with lower costs, thereby addressing the limitations of conventional binary search and HSJA.
  • Experimental results demonstrate up to a 40% reduction in query expenses, highlighting potential applications across various AI models and real-world systems.

Rewriting the Budget: A General Framework for Black-Box Attacks Under Cost Asymmetry

The paper "Rewriting the Budget: A General Framework for Black-Box Attacks Under Cost Asymmetry," authored by Mahdi Salmani, Alireza Abdollahpoorrostam, and Seyed-Mohsen Moosavi-Dezfooli, advances decision-based black-box adversarial attack methodology by addressing a practical issue that existing research often overlooks: asymmetric query costs. It introduces techniques that optimize adversarial attacks in scenarios where different query outcomes incur different costs.

Typical decision-based black-box attacks, from the original boundary attack to refinements such as HopSkipJumpAttack (HSJA) and GeoDA, seek minimal adversarial perturbations while reducing the number of queries. However, these methods generally assume uniform query costs. This assumption leads to inefficiencies in real-world applications where certain query outcomes are more expensive, for instance because flagged content in a moderation system can trigger additional review, penalties, or enforcement actions.

The paper proposes a general framework dubbed "asymmetric attacks" designed to consider and optimally handle arbitrary query cost ratios, thereby providing versatility across diverse adversarial attack scenarios. This framework replaces binary search with "Asymmetric Search" (AS) and modifies gradient estimation in HSJA with "Asymmetric Gradient Estimation" (AGREST).

AS is a refined search procedure that minimizes total query cost rather than simply the number of queries, as standard binary search does. By skewing the interval split to account for query costs, AS reduces reliance on high-cost queries.
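The idea can be sketched in a few lines. The snippet below performs a boundary search along a 1-D interpolation parameter, biasing the probe point toward the low-cost side instead of always bisecting. Note the split rule `alpha = r / (1 + r)` is a heuristic chosen here for illustration; the paper derives its own split analytically, and the cost labels (which outcome is "expensive") are an assumption of this sketch.

```python
def asymmetric_search(is_low_cost, lo, hi, cost_ratio=1.0, tol=1e-3):
    """Boundary search between a high-cost endpoint (lo) and a
    low-cost endpoint (hi) along a 1-D interpolation parameter.

    Rather than probing the midpoint, the probe is biased toward the
    low-cost side in proportion to the cost ratio, so most queries
    return the cheap outcome. cost_ratio = c_high / c_low; the split
    rule alpha = r / (1 + r) is an illustrative heuristic, not the
    paper's derived optimum.
    """
    alpha = cost_ratio / (1.0 + cost_ratio)  # 0.5 recovers binary search
    n_high, total_cost = 0, 0.0
    while hi - lo > tol:
        mid = lo + alpha * (hi - lo)
        if is_low_cost(mid):
            hi = mid                      # cheap outcome: small shrink
            total_cost += 1.0
        else:
            lo = mid                      # expensive outcome: big shrink
            total_cost += cost_ratio
            n_high += 1
    return hi, n_high, total_cost
```

With `cost_ratio=1` this reduces to ordinary bisection; with a large ratio it trades a longer run of cheap probes for noticeably fewer expensive ones, which is exactly the exchange that lowers total cost under asymmetry.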

AGREST modifies HSJA's Monte Carlo gradient estimation while preserving its efficacy: it adaptively shifts the sampling distribution used in gradient approximation toward regions that yield low-cost queries, decreasing the likelihood of issuing expensive ones.
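A minimal sketch of this idea: estimate the gradient direction from sign queries, HSJA-style, but optionally shift the mean of the sampling distribution toward a previous gradient estimate so that more probes land on the low-cost side of the decision boundary. The shift magnitude is a free parameter here purely for illustration (the paper derives it analytically), and which outcome counts as low-cost is an assumption of this sketch.

```python
import numpy as np

def agrest_gradient(phi, x, delta=0.05, n_samples=500, shift=0.0,
                    g_prev=None, seed=0):
    """Sign-query Monte Carlo gradient-direction estimate with an
    optional mean shift of the sampling distribution.

    phi(z) returns +1 for the low-cost outcome (here: adversarial)
    and -1 for the high-cost one. Returns a unit-norm direction and
    the number of high-cost queries incurred.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n_samples, x.shape[0]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    if shift and g_prev is not None:
        # Bias directions toward the previous estimate so more probes
        # land on the low-cost side of the boundary.
        u = u + shift * g_prev / np.linalg.norm(g_prev)
        u /= np.linalg.norm(u, axis=1, keepdims=True)
    signs = np.array([phi(x + delta * ui) for ui in u], dtype=float)
    n_high = int(np.sum(signs < 0))
    # Baseline subtraction, as in HSJA-style estimators.
    g = np.mean((signs - signs.mean())[:, None] * u, axis=0)
    norm = np.linalg.norm(g)
    return (g / norm if norm > 0 else g), n_high
```

The sketch also exposes the tension the paper balances: an aggressive shift drives the high-cost query count toward zero, but if nearly all signs agree the baseline-subtracted estimate degenerates, so the shift must trade query cost against estimation quality.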

The proposed methods consistently outperform existing strategies, including stealthy attacks, across various levels of cost asymmetry, with reductions in total query cost of up to 40% in some scenarios. Because they integrate into existing attack frameworks with minimal changes, they enable more efficient adversarial testing.

A critical implication of these developments is the enhanced practicality and efficiency of adversarial attacks, specifically in settings requiring the navigation of cost penalties, such as image classification models deployed on large platforms. Furthermore, the theoretical insights provided by the paper pave the way for future exploration within AI, particularly as frameworks need to adapt to diverse operational environments with intricate cost structures.

The potential extension of this framework to other AI models, such as vision-language models and LLMs, represents a promising direction. Adapting it to the discrete nature of text data in LLMs poses an intriguing research frontier. The paper demonstrates how modifying core attack components without compromising effectiveness can yield substantial gains in efficiency and applicability, underscoring the importance of accounting for practical real-world constraints in adversarial robustness studies.

Future research may explore multi-class cost settings, where each class has a distinct query cost, vastly broadening the application scope of asymmetric attacks. Additionally, exploring their roles in models tuned for multilingual settings or embedded in broader AI ecosystems could significantly amplify their impact across different domains.

Overall, this comprehensive approach to asymmetry in query costs enriches the adversarial attack landscape, providing a robust methodological pivot away from the simplistic, cost-unaware frameworks widely used in the field.
