
Computation Cost Attacks

Updated 4 December 2025
  • Computation cost attacks are adversarial strategies that deliberately inflate resource usage—such as CPU cycles, memory, and inference time—to impair system performance.
  • They span multiple domains including PoW networks, machine learning, 3D vision, and quantum cryptography, with specific cost models guiding each attack method.
  • Defensive measures focus on adaptive, resource-competitive protocols and computational cost optimization to mitigate service disruption and economic exhaustion.

Computation cost attacks are a class of adversarial strategies designed to deliberately increase the computational resource consumption—such as CPU cycles, inference time, training time, memory footprint, or number of queries—of a target system. They span multiple domains, including proof-of-work (PoW) networks, black-box adversarial machine learning, federated and distributed systems, large-scale 3D vision, and quantum cryptography. While many security analyses focus on attack success rate or information leakage, computation cost attacks explicitly target operational cost and service availability, with objectives ranging from denial-of-service (DoS) to economic exhaustion.

1. Formal Models and Taxonomy

Computation cost attacks are parameterized by the definition of cost within each environment:

  • PoW Networks: The adversary aims to increase the honest nodes' CPU work needed for Sybil defense or committee election, measured as units of PoW puzzle solutions per unit time. The adversary controls a fraction of total computational power and strategically schedules joins and departures to maximize system-wide honest work despite a fixed or slow-growing population (Gupta et al., 2019, Gupta et al., 2017).
  • Attack Trees and System Design: In attack tree analysis, "cost" may denote any quantifiable effort needed to realize an attack path, e.g., number of brute-force guesses or required resources. Cost-damage attack optimization seeks the maximal achievable damage for a given cost constraint (Lopuhaä-Zwakenberg et al., 2023, Vigo et al., 2016).
  • Adversarial ML (Bounded-Budget): Attacks are restricted by query counts, number of gradient steps, or FLOPs. The attacker chooses distortions or attack actions to maximize desired outcomes (e.g., model misbehavior), subject to a tight budget (Hou et al., 30 Oct 2025, Salmani et al., 7 Jun 2025).
  • Resource Inflation in 3D Vision: Attacks such as Poison-splat inject input data perturbations to drive adaptive complexity pipelines—like 3D Gaussian Splatting—toward maximal memory and compute, possibly exceeding hardware limits and causing DoS (Lu et al., 10 Oct 2024).
  • Distributed and Multi-agent Systems: Resources spent on each agent incur agent-specific costs against per-agent budgets, leading to attack optimization under multi-dimensional constraints (Lu et al., 2023).
  • Quantum Attacks: Cost is modeled not as query complexity but as total logical-qubit-cycles or time–area product, revealing orders-of-magnitude gaps between theory and practical attack feasibility (Amy et al., 2016).

A common theme is the combinatorial or bi-level optimization of attacker actions to maximize system stress or achieve a goal within resource limits.

2. Classic Proof-of-Work (PoW) and Resource-Competitive Attacks

In dynamic distributed systems, especially those relying on entrance and periodic purge puzzles for Sybil resistance, computation cost attacks attempt to force honest participants to pay maximal work regardless of actual system churn. The system model is:

  • Dynamic set of virtual identities (IDs), each controlled by an honest participant or by the adversary.
  • Time divided into rounds; each ID may solve a "1-round puzzle" at fixed unit cost.
  • The adversary can control a fraction $\alpha$ of computational resources; over any round where everyone works, the adversary solves an $\alpha$-fraction of puzzles.
  • Computation cost attack: The adversary manipulates its join pattern, balancing injection and attrition of Sybils to force costly purges and re-computation by honest nodes; a toy simulation of this dynamic follows below.
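
A minimal round-based simulation makes this dynamic concrete. The sketch below is illustrative only: the purge rule (purge once cumulative churn exceeds a fixed fraction of membership) and all parameters are simplifying assumptions, not the protocols of Gupta et al.

```python
# Toy simulation of a computation cost attack on a purge-based PoW system.
# Assumptions: unit-cost 1-round puzzles, and a CCom-style purge rule that
# fires once cumulative churn exceeds a fraction of current membership.
import random

def simulate(rounds=10_000, honest_ids=100, adversary_spend_rate=5,
             purge_fraction=0.25, seed=0):
    """Return total honest puzzle solutions over the simulation."""
    rng = random.Random(seed)
    sybils = 0
    churn_since_purge = 0
    honest_work = 0
    for _ in range(rounds):
        # The adversary splits its per-round budget between Sybil joins and
        # departures, maximizing the churn the system observes.
        joins = rng.randint(0, adversary_spend_rate)
        leaves = min(sybils, adversary_spend_rate - joins)
        sybils += joins - leaves
        churn_since_purge += joins + leaves
        # Purge rule: enough churn forces every ID to re-solve a puzzle.
        if churn_since_purge > purge_fraction * (honest_ids + sybils):
            honest_work += honest_ids  # each honest ID pays unit cost
            sybils = 0                 # Sybils that do not re-solve are evicted
            churn_since_purge = 0
    return honest_work

print(simulate(adversary_spend_rate=1), simulate(adversary_spend_rate=10))
```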

Classical PoW protocols that continuously require all nodes to solve puzzles are replaced by resource-competitive designs. Notable contributions include:

  • CCom: Triggers purges only after significant membership changes. Honest cost per unit time is $O(T + J^G)$, with $T$ the adversary's spend-rate and $J^G$ the good join rate.
  • GMCom: Adapts puzzle hardness and purge frequency to attacker behavior, proving an optimal honest spend-rate of $O(J^G + \sqrt{T(J^G + 1)})$; this matches formal lower bounds for any purge-based PoW protocol (Gupta et al., 2019).

Empirical results confirm that actual honest CPU costs scale sublinearly with $T$, in contrast to prior work (Gupta et al., 2017).
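
To see what sublinear scaling buys, the following snippet compares the two honest spend-rate bounds numerically, with all hidden constants set to 1 (an assumption; the papers state only asymptotics):

```python
# Honest spend-rate bounds vs. adversary spend-rate T, constants assumed 1.
import math

def ccom_bound(T, J_good):    # O(T + J^G): linear in adversary spend
    return T + J_good

def gmcom_bound(T, J_good):   # O(J^G + sqrt(T * (J^G + 1))): sublinear in T
    return J_good + math.sqrt(T * (J_good + 1))

for T in (1, 10, 100, 10_000):
    print(f"T={T:>6}  CCom~{ccom_bound(T, 5):>9.1f}  GMCom~{gmcom_bound(T, 5):>7.1f}")
```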

3. Computation Cost Attacks in Machine Learning and AI

Adversarial machine learning research recognizes that attack success must be measured not only by fooling rate, but also by the computational cost to achieve a certain strength:

  • Fine-Grained Layer-wise Cost Control: The Spiking-PGD attack models an $L$-layer, $T$-step network and allows the attacker to skip recomputation of specific layer activations when their update is small, matching attack efficacy to a strict computational budget. The key quantity is the indicator $I_{t,l} \in \{0,1\}$ controlling the update at layer $l$, step $t$, with total compute $C(\{I_{t,l}\}) = \sum_{t,l} c_{t,l} I_{t,l} \leq B$. Surrogate gradient injection preserves performance at low budget (Hou et al., 30 Oct 2025); a greedy budget-allocation sketch follows this list.
  • Decision-based Attacks with Asymmetric Cost: Black-box attacks with class-dependent query costs (e.g., moderation-sensitive outputs) employ Asymmetric Search (non-uniform splitting of the search interval) and Asymmetric Gradient Estimation (importance-weighted sampling) to minimize expensive queries for the same adversarial effect (Salmani et al., 7 Jun 2025); a toy asymmetric search appears at the end of this section.
  • Privacy Attacks under Efficiency Constraints: Gradient inversion attacks in federated learning feature early-stopping techniques (threshold-based, plateau-based, and hybrid) which terminate optimization when further computation yields diminishing returns, formally reducing per-sample cost from $N \cdot f$ to $I \cdot f$ (with $I \le N$ iterations at per-iteration cost $f$) with negligible drop in attack success rate (Tabassum et al., 15 Apr 2024).
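
As a concrete illustration of the indicator-based budget model, the sketch below greedily spends a compute budget $B$ on the layer/step updates with the largest update magnitude per unit cost. The greedy scoring rule is our assumption for exposition, not the exact Spiking-PGD criterion:

```python
# Greedy selection of indicators I[t, l] subject to
# C({I_{t,l}}) = sum_{t,l} c_{t,l} * I_{t,l} <= B (illustrative, not Spiking-PGD).
import numpy as np

def select_updates(delta_norms, costs, budget):
    """delta_norms[t, l]: magnitude of the pending update at step t, layer l.
    costs[t, l]: compute cost of recomputing that layer at that step.
    Returns boolean indicators I[t, l] and the total compute spent."""
    T, L = delta_norms.shape
    I = np.zeros((T, L), dtype=bool)
    # Rank (t, l) pairs by update magnitude per unit of compute.
    order = np.argsort(-(delta_norms / costs), axis=None)
    spent = 0.0
    for flat in order:
        t, l = divmod(int(flat), L)
        if spent + costs[t, l] <= budget:
            I[t, l] = True
            spent += costs[t, l]
    return I, spent

rng = np.random.default_rng(0)
deltas = rng.exponential(1.0, size=(8, 6))   # T=8 attack steps, L=6 layers
costs = rng.uniform(0.5, 2.0, size=(8, 6))
I, spent = select_updates(deltas, costs, budget=20.0)
print(f"updated {I.sum()} of {I.size} (t, l) pairs; compute spent = {spent:.1f}")
```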

Notably, in all these domains, cost-aware attacks expand the practical threat surface to settings with real-world compute and financial constraints.
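
The asymmetric-cost search mentioned above can likewise be illustrated with a toy boundary search in one dimension. The split rule $p = 1/(1 + k)$ for cost ratio $k$ is a heuristic assumption for exposition, not the exact rule of Salmani et al.:

```python
# Toy asymmetric boundary search: queries answering True (the expensive,
# e.g. moderation-triggering, class) cost `cost_ratio` cheap queries.
def asymmetric_search(on_expensive_side, lo=0.0, hi=1.0, cost_ratio=5.0,
                      split=None, tol=1e-3):
    """Locate where on_expensive_side flips from False to True on [lo, hi].
    Probes are biased toward the cheap side; split=0.5 is plain bisection."""
    if split is None:
        split = 1.0 / (1.0 + cost_ratio)  # heuristic bias (our assumption)
    cheap = expensive = 0
    while hi - lo > tol:
        m = lo + (hi - lo) * split
        if on_expensive_side(m):
            hi = m            # big shrink, but we paid an expensive query
            expensive += 1
        else:
            lo = m            # small shrink from a cheap query
            cheap += 1
    return 0.5 * (lo + hi), cheap + cost_ratio * expensive

boundary, biased = asymmetric_search(lambda x: x >= 0.37)
_, uniform = asymmetric_search(lambda x: x >= 0.37, split=0.5)
print(f"boundary ~ {boundary:.3f}; weighted cost biased={biased:.0f} vs midpoint={uniform:.0f}")
```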

4. Training and Inference Stage Attacks on Adaptive ML Pipelines

Computation cost threats are particularly severe for adaptive-complexity pipelines whose resource consumption is data-dependent.

  • 3D Gaussian Splatting (3DGS): In Poison-splat, adversaries inject high-total-variation (TV) multi-view images that cause the 3DGS densification routine to spawn an excessive number of Gaussian primitives, linearly increasing GPU memory and compute, and in many cases causing out-of-memory termination (DoS). The attack solves a bi-level optimization to maximize a computation metric $\mathcal{C}(\mathcal{G}^*)$ (e.g., number of Gaussians, peak memory) subject to small $\ell_\infty$ perturbations for stealth (Lu et al., 10 Oct 2024, Li et al., 27 Nov 2025); a toy TV-ascent loop is sketched after this list.
  • LLM Inference Cost via Bit Flips: BitHydra introduces "bit-flip inference cost attacks" which, instead of prompt-based exploitation, corrupt the output embedding of the <EOS> token via gradient-guided fault injection (e.g., Rowhammer). This disables early termination for all users, achieving 100% maximum-length output with just 3–7 targeted bit flips, depending on quantization (Yan et al., 22 May 2025). Model-level attacks thus exhibit economies of scale far surpassing self-targeted prompt attacks.
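
The core Poison-splat intuition, raising image total variation under a stealth constraint, can be sketched as plain sign-gradient ascent on TV projected to an $\ell_\infty$ ball. This covers only the outer perturbation step; the actual attack solves a bi-level problem against a proxy 3DGS model, which we omit here:

```python
# Maximize anisotropic total variation under ||delta||_inf <= eps
# (illustrative proxy for the Poison-splat objective, not the full attack).
import numpy as np

def tv(img):
    """Anisotropic total variation of an HxW image in [0, 1]."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def tv_ascent(img, eps=8 / 255, steps=100, lr=1 / 255):
    x = img.copy()
    for _ in range(steps):
        # Subgradient of anisotropic TV with respect to each pixel.
        g = np.zeros_like(x)
        dy = np.sign(np.diff(x, axis=0))
        dx = np.sign(np.diff(x, axis=1))
        g[1:, :] += dy; g[:-1, :] -= dy
        g[:, 1:] += dx; g[:, :-1] -= dx
        # Sign-gradient step, then project to the l_inf ball and valid range.
        x = np.clip(np.clip(x + lr * np.sign(g), img - eps, img + eps), 0.0, 1.0)
    return x

clean = np.random.default_rng(0).random((64, 64))
poisoned = tv_ascent(clean)
print(f"TV before: {tv(clean):.0f}, after: {tv(poisoned):.0f}")
```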

Defenses are non-trivial: naive hard caps or smoothing destroy model fidelity or utility, while adversarially trained purification pipelines such as RemedyGS can selectively recover benign inputs, restoring safe resource allocation (Li et al., 27 Nov 2025).

5. Optimization and Quantification Frameworks

Systematic analysis of computation cost attacks hinges on formal cost models and optimization strategies:

  • Attack Trees with Cost Constraints: Cost-damage attack trees model attacks as Boolean activations of basic steps, with costs and damages specified per node. Determining the Pareto-optimal tradeoff between budget and achievable damage is NP-complete, but can be solved by ILP for DAGs or by dynamic programming on trees (Lopuhaä-Zwakenberg et al., 2023); a minimal tree DP is sketched after this list.
  • Composer Models: The Quality Calculus framework encodes all minimal sets of resources (e.g., secret keys, channels) that enable an attack; cost assignment and optimization are performed via SMT solving to find the minimal-cost attack set (Vigo et al., 2016).
  • Multi-agent Dynamic Programming: In distributed environments with attacker–victim pairwise cost parameters and per-agent budgets, the optimal attack is realized via nested resource allocation and Bellman recursion, ensuring global optimality within cost constraints (Lu et al., 2023).
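
For the tree-shaped case, the dynamic program referenced above is short enough to sketch. Node semantics and integer costs are simplifying assumptions (leaves carry a cost and a damage; an AND node needs all children, an OR node any one child):

```python
# Max damage achievable under each budget on a cost-damage attack tree
# (trees admit DP per Lopuhaä-Zwakenberg et al.; DAGs need ILP).
from dataclasses import dataclass, field

NEG = float("-inf")  # marks "node not activatable at this budget"

@dataclass
class Node:
    kind: str                       # "leaf", "and", or "or"
    cost: int = 0
    damage: int = 0
    children: list = field(default_factory=list)

def best_damage(node, budget):
    """table[b] = max damage with total attacker cost at most b."""
    if node.kind == "leaf":
        return [node.damage if b >= node.cost else NEG for b in range(budget + 1)]
    tables = [best_damage(c, budget) for c in node.children]
    if node.kind == "or":           # activate the best single child
        return [max(t[b] for t in tables) for b in range(budget + 1)]
    acc = [0] * (budget + 1)        # AND: knapsack-style convolution
    for t in tables:
        acc = [max((acc[b - s] + t[s] for s in range(b + 1)
                    if acc[b - s] != NEG and t[s] != NEG), default=NEG)
               for b in range(budget + 1)]
    return acc

tree = Node("or", children=[
    Node("leaf", cost=3, damage=5),                     # e.g. a brute-force step
    Node("and", children=[Node("leaf", cost=1, damage=2),
                          Node("leaf", cost=2, damage=6)]),
])
print(best_damage(tree, 5))   # damage frontier for budgets 0..5
```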

Quantum cryptanalysis necessitates abandoning naïve query count in favor of time–area metrics (logical-qubit-cycles), which accurately expose the impracticality of preimage attacks even with quadratic speedup due to enormous real resource demands (Amy et al., 2016).

6. Defense Strategies and Practical Implications

Mitigations for computation cost attacks require precise alignment between threat model and system architecture:

  • Resource-Competitive Protocols: Adaptive PoW and committee-enforced purges ensure honest cost scales only slowly with attacker spend, attaining provable optimality (Gupta et al., 2019, Gupta et al., 2017).
  • Detection and Purification of Input Attacks: For adaptive vision models, black-box convolutional detectors and adversarially trained purifier networks can filter and repair poisoned views, restoring operation without undue training penalty (Li et al., 27 Nov 2025).
  • Asymmetric Cost-Aware Design: Interface-level protections in ML APIs include penalizing costly queries or employing search and estimation algorithms explicitly designed for cost minimization.
  • Cost-Security Optimization: Security–cost trade-offs (e.g., in timing/padding countermeasures) are modeled as explicit optimization problems, and practical systems may operate at non-extremal points in the secure–fast curve (0807.3879).

A plausible implication is that as more ML and distributed systems deploy mechanisms with data- or context-dependent cost scaling (e.g., adaptive primitive allocation, per-query pricing), both attacks and defenses will require fine-grained, model-specific formalizations of cost and new classes of resource-aware defense methodology.

7. Implications, Limitations, and Open Problems

Computation cost attacks demonstrate that traditional success metrics (accuracy, error rate, bitrate, etc.) are insufficient—systems must be designed and analyzed with resource scalability, adversarial robustness, and cost asymmetry in mind. While significant theoretical progress has been made in modeling, optimizing, and empirically defending against such threats, practical deployment often lags due to gaps between formal cost models and deployment realities (e.g., infrastructure heterogeneity, detection/mitigation lag, and economic coupling).

Open problems include:

  • Development of provably robust, adaptive-complexity systems immune to stealthy maximal-cost input perturbations.
  • Effective cost-aware detection and defense in the context of multi-modal or jointly optimized adversarial pipelines.
  • Generalization of resource-competitive defense principles beyond PoW and committee selection to encompass emerging paradigms in distributed, federated, and AI-native systems.

Computation cost attacks are thus a fundamental consideration in contemporary security and resource allocation research, with active investigation required at the intersection of theoretical models, system implementations, and adversarial intelligence.
