Adversarial Attack Transferability
- Adversarial attack transferability is the phenomenon where examples crafted for one model succeed in misleading other models without needing direct knowledge of their parameters.
- A unified optimization framework leverages gradient-based methods to create both evasion and poisoning attacks that exploit common decision boundary characteristics.
- Metrics such as input gradient size, gradient alignment, and loss landscape variability quantitatively assess transferability, guiding both attack strategies and defensive measures.
Adversarial attack transferability is the phenomenon whereby adversarial examples generated to deceive one machine learning model (the surrogate) also succeed in misleading another, potentially unknown, target model, even in the absence of direct knowledge about its parameters or training data. This property is central to the risk profile of deep learning systems in black-box settings, as it underpins the viability of attacks when the adversary lacks full access to the model. The phenomenon arises in both test-time evasion and training-time poisoning contexts, and encompasses a range of classifiers and data modalities. Transferability has become a focal point for both theoretical analysis and empirical investigation, informing attack strategies, defense mechanisms, and the evaluation of model robustness across architectures, datasets, and application scenarios.
1. Unified Optimization View of Transfer Attacks
Both evasion and poisoning attacks can be cast within a unifying optimization framework that treats the attack as a constrained maximization of an objective function (usually a loss) over allowable perturbations. For evasion, the adversary seeks a perturbed test input within a feasible set that increases the loss of the model (parameterized by $\theta$), subject to norm or box constraints:

\begin{align*}
\max_{\delta} &\ L(x + \delta, y, \theta) \\
\text{subject to} &\ \|\delta\|_p \le \epsilon, \quad x_{\mathrm{lb}} \le x + \delta \le x_{\mathrm{ub}}
\end{align*}

For poisoning, a bilevel problem is formulated:

\begin{align*}
\max_{x'} &\ L(D_\text{val}, \theta^*(x')) \\
\text{subject to} &\ \theta^*(x') = \arg\min_{\theta} L(D_\text{train} \cup \{(x', y)\}, \theta)
\end{align*}

Both cases can be approximately solved using gradient-based iterative methods (such as projected gradient ascent or its variants), encompassing a range of attacks including FGSM, PGD, Carlini & Wagner, momentum-based, or transformation-augmented techniques. This formulation reveals that transferability is rooted in shared properties of model loss landscapes, optimization routines, and the geometric structure of decision boundaries (Demontis et al., 2018).
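As a concrete instance of the evasion case, the following is a minimal PyTorch sketch of projected gradient ascent under an $\ell_\infty$ budget. It is a sketch under stated assumptions, not a definitive implementation: `model` stands in for the surrogate classifier, inputs are assumed to lie in $[0, 1]$, cross-entropy is used as the loss, and the hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_transfer_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient ascent on the surrogate loss under an L-infinity budget eps."""
    x_adv = x.clone().detach()
    # Optional random start inside the epsilon-ball, common in PGD.
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # ascent step (signed gradient)
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project onto the eps-ball around x
            x_adv = x_adv.clamp(0.0, 1.0)                           # stay inside the valid input box
    return x_adv.detach()
```

In a transfer setting, `x_adv` is crafted entirely on the surrogate and then evaluated against a separate, unseen target model.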
2. Formalization and Metrics of Transferability
The transferability of an adversarial example generated on a surrogate model $\hat{\theta}$ is quantified by the loss it induces on the target model $\theta$. A first-order Taylor expansion relates the increase in loss to the target's input gradient:

\begin{align*}
L(x + \delta, y, \theta) \approx L(x, y, \theta) + \delta^\top \nabla_x L(x, y, \theta)
\end{align*}
Three key metrics, derived from analytical and empirical considerations, govern transferability:
| Metric | Definition | Role in Transferability |
|---|---|---|
| Size of Input Gradients ($S$) | Norm of the target's input gradient, $\lVert \nabla_x L(x, y, \theta) \rVert$ | Target's intrinsic vulnerability to perturbations |
| Gradient Alignment ($R$) | Cosine similarity between surrogate and target input gradients | Directional effectiveness; higher alignment increases transfer success |
| Loss Landscape Variability ($V$) | Variance of the surrogate loss across different data draws or configurations | Lower $V$ (smoother surrogate) correlates with better transferability |
Empirically, attacks aligned with the target gradient (high $R$) and operating against targets with large $S$ typically show higher transfer success, while high $V$ leads to less stable and consequently less transferable adversarial directions (Demontis et al., 2018).
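The two gradient-based metrics can be estimated directly from a pair of models. The following is a minimal PyTorch sketch, assuming differentiable classifiers `surrogate` and `target` that share an input space and a cross-entropy training loss; the names are placeholders rather than a fixed API.

```python
import torch
import torch.nn.functional as F

def gradient_metrics(surrogate, target, x, y):
    """Estimate S (norm of the target's input gradient) and R (cosine alignment
    between surrogate and target input gradients) for a batch (x, y)."""
    def input_grad(model):
        x_in = x.clone().detach().requires_grad_(True)
        # Summing decouples examples, so grad[i] is the gradient of loss_i w.r.t. x[i].
        loss = F.cross_entropy(model(x_in), y, reduction="sum")
        return torch.autograd.grad(loss, x_in)[0].flatten(1)

    g_target = input_grad(target)
    g_surrogate = input_grad(surrogate)

    S = g_target.norm(dim=1)                               # size of the target's input gradients
    R = F.cosine_similarity(g_surrogate, g_target, dim=1)  # alignment of attack directions
    return S, R
```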
3. Factors Influencing Adversarial Transferability
Comprehensive analyses reveal transferability is determined by:
- Target Model Vulnerability (High $S$): Models with high complexity or weak regularization exhibit larger input gradients, making them more sensitive to input perturbations. These models are intrinsically more susceptible to both evasion and poisoning transfer attacks.
- Surrogate Model Complexity and Regularization: Surrogate models that are strongly regularized and exhibit smoother loss landscapes (lower $V$) generate adversarial examples whose directions are more robust to mismatches in model specifics; a sketch for estimating $V$ follows this list. In contrast, attacks crafted against highly complex surrogates may not generalize well due to spurious high-variance gradients.
- Gradient Directionality ($R$): The more closely the attack direction from the surrogate aligns with the direction of maximum loss increase in the target, the higher the transferability. This is captured quantitatively by the inner product or cosine similarity of the respective input gradients.
- Intrinsic Locality: Transferability is not uniform across the input space; only points where decision boundaries of surrogate and target models are sufficiently aligned will yield transferable examples (Katzir et al., 2021).
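As a concrete illustration of the variability metric, here is a minimal PyTorch sketch, assuming the attacker has already trained several surrogate models (for example on bootstrap resamples of their own data); it estimates $V$ as the per-example standard deviation of the surrogate loss. The `surrogates` list and the choice of cross-entropy are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def loss_variability(surrogates, x, y):
    """Estimate the loss-landscape variability V at (x, y): the standard deviation
    of the loss across surrogate models trained on different data draws."""
    with torch.no_grad():
        losses = torch.stack([
            F.cross_entropy(m(x), y, reduction="none") for m in surrogates
        ])                        # shape: (num_surrogates, batch_size)
    return losses.std(dim=0)      # per-example V; lower values suggest a smoother surrogate family
```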
4. Theoretical Bounds and Empirical Evidence
Theoretical upper bounds derived using linear approximations demonstrate that the maximum attainable increase in loss on the target model is limited by the magnitude of its input gradient:

\begin{align*}
\Delta L \;=\; L(x + \delta, y, \theta) - L(x, y, \theta) \;\lesssim\; \epsilon \, \|\nabla_x L(x, y, \theta)\|
\end{align*}

where $\epsilon$ is the perturbation budget. A minimal numerical check of this bound is sketched after the list below. Extensive empirical evaluation across linear and nonlinear classifiers (logistic regression, SVMs, DNNs, ensembles) and diverse datasets (MNIST, DREBIN, LFW) confirms:
- High-complexity targets are reliably more vulnerable (higher test error under attack correlated with larger $S$).
- Regularized surrogates yield attacks that transfer more effectively, with gradient alignment $R$ serving as the strongest predictor for attack success.
- Cross-architecture and cross-training transferability is weakened by dataset and architecture mismatches, but strong attacks (or those exploiting robust features) can still achieve non-trivial transfer rates under favorable conditions (Demontis et al., 2018, Barni et al., 2018).
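The bound above can be checked numerically. The following is a small sketch, assuming a differentiable classifier `target`, an $\ell_2$-constrained perturbation, and an illustrative budget `eps`; it compares the realized per-example loss increase from a single steepest-ascent step against the linear-approximation bound $\epsilon \, \|\nabla_x L\|_2$.

```python
import torch
import torch.nn.functional as F

def check_first_order_bound(target, x, y, eps=0.5):
    """Compare the realized loss increase from one L2-constrained gradient step
    on the target model against the first-order bound eps * ||grad_x L||_2."""
    x_in = x.clone().detach().requires_grad_(True)
    per_example_loss = F.cross_entropy(target(x_in), y, reduction="none")
    # Summing decouples examples, so grad[i] is the gradient of loss_i w.r.t. x[i].
    grad = torch.autograd.grad(per_example_loss.sum(), x_in)[0]
    grad_norm = grad.flatten(1).norm(dim=1)

    # Steepest-ascent direction of L2 length eps.
    delta = eps * grad / (grad_norm.view(-1, *[1] * (grad.dim() - 1)) + 1e-12)
    with torch.no_grad():
        adv_loss = F.cross_entropy(target(x + delta), y, reduction="none")

    realized = adv_loss - per_example_loss.detach()   # actual loss increase per example
    bound = eps * grad_norm                           # linear-approximation bound per example
    return realized, bound
```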
5. Implications for Attack and Defense Strategies
The analysis provides precise guidance for both attackers and defenders:
- Attackers: Should prefer surrogate models with strong regularization and smooth loss landscapes to maximize gradient alignment $R$ and minimize loss variability $V$, thereby enhancing the generalizability of crafted adversarial examples. Data and model augmentation or meta-learning methodologies that simulate diversity among surrogates (e.g., meta-surrogate models, learning-to-learn approaches, or gradient averaging over an ensemble of surrogates, as sketched after this list) provide further transferability gains.
- Defenders: Can mitigate transferability risks by constraining model complexity (e.g., via regularization) and monitoring input gradient norms, potentially diagnosing vulnerable configurations pre-deployment. Architectural or training set diversity—for example, in ensemble forensic systems—offers additional robustness against black-box transfer attacks, as demonstrated for CNN-based forensic image detectors (Barni et al., 2018).
- Diagnostics and Metrics: The metrics ($S$, $R$, $V$) offer actionable diagnostics for evaluating deployed models' potential vulnerability to transfer attacks and benchmarking new defenses or attack generation strategies.
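One common way to simulate surrogate diversity is to ascend the average loss of several surrogates. The following is a minimal sketch under the same assumptions as the earlier PGD example ($\ell_\infty$ budget, inputs in $[0, 1]$, cross-entropy loss, illustrative hyperparameters); `surrogates` is a placeholder list of trained models.

```python
import torch
import torch.nn.functional as F

def ensemble_transfer_attack(surrogates, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft an adversarial example by ascending the average loss of several
    surrogate models, which smooths out model-specific gradient noise and
    often improves transfer to an unseen target."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(F.cross_entropy(m(x_adv), y) for m in surrogates) / len(surrogates)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # ascent step on the ensemble loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                           # keep inputs valid
    return x_adv.detach()
```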
6. Broader Challenges and Limitations
While the unified gradient-based framework and associated metrics explain much of observed transferability, significant limitations and open questions remain:
- Probabilistic Nature and Predictive Limitations: Transferability is inherently probabilistic and highly local. In practice, even models trained on the same architecture and data can exhibit large variances in transfer success due to stochastic training factors. This unpredictability severely limits confidence in transfer-based attacks where the cost of failure is high (Katzir et al., 2021).
- Non-Monotonic Data and Class Overlap Effects: In realistic scenarios with partial data or class overlap between attacker and victim (as in medical or malware applications), attack success does not monotonically correlate with the degree of overlap. Instead, results are dataset- and architecture-dependent, and adversarial training may sometimes inadvertently increase vulnerability by overfitting to specific data manifolds (Richards et al., 2021).
- Realistic Threat Models: Practical transferability is further reduced when surrogate and target models differ in training data source, architecture, and class label distributions. The DUMB attacker model shows that mismatches in any of these dimensions degrade transfer rates, calling into question the generalizability of results obtained under idealized laboratory conditions (Alecci et al., 2023).
- Overestimation in Prior Evaluations: Large-scale cross-architecture benchmarks indicate that transferability is often overestimated in traditional CNN-to-CNN studies and that achieving universal transfer across model types (CNNs, ViTs, SNNs, DyNNs) is extremely challenging (Yu et al., 2023).
7. Ongoing Research and Future Directions
The robust quantification of adversarial transferability continues to drive new methodologies and research agendas:
- Enhanced Attack Generation: Techniques incorporating meta-learning, residual or reverse perturbation, feature-momentum, or ensemble adaptive weighing (AdaEA) have been shown to further enhance transferability, especially across heterogeneous architectures and task families (Fang et al., 2021, Peng et al., 6 Aug 2025, Chen et al., 2023).
- Evaluation Protocols and Benchmarking: Advanced evaluation methodologies now include quantitative metrics that jointly account for attack success and perturbation distortion, ensemble and all-or-nothing transfer protocols, and cross-architecture model suites, yielding more stringent and reliable assessments of attack generalization (Maho et al., 2023, Yu et al., 2023).
- Distributional Perspectives: Recent work posits that explicitly manipulating the data distribution—either by pushing adversarial examples out of the in-distribution region or aligning with alternative class distributions—can lead to dramatic increases in transferability, revealing another dimension for attack and defense development (Zhu et al., 2022).
- Outstanding Challenges: The connection between loss landscape flatness and transferability is empirically supported but theoretically underexplored, and the role of minimal yet impactful geometric transformations (such as rotation or transpose) continues to expose architectural and pre-processing fragilities (Wan et al., 2 Mar 2025).
- Defensive Adaptation: The arms race is ongoing, with the challenge of creating architectures or training regimes that are robust not only to white-box but also to highly transferable, black-box adversarial examples.
In sum, adversarial attack transferability is a multifaceted property driven by loss landscape geometry, model complexity, regularization, and perturbation–decision boundary alignment. Although substantial progress has been made in formalizing, quantifying, and enhancing transferability, numerous complexities, limitations, and avenues for further study remain, especially when generalizing to real-world deployment scenarios and a broader landscape of models and datasets.