Performance under No Attack (PNA) Baselines

Updated 31 March 2026
  • Performance under No Attack (PNA) is a metric characterizing a system’s accuracy and overhead in an adversary-free setting using data drawn from the original distribution.
  • PNA is evaluated through protocols like k-fold cross-validation and metrics such as error rate, test accuracy, and ROC AUC, providing a consistent baseline.
  • PNA serves as a reference to assess the robustness trade-off, highlighting how defense mechanisms impact clean performance before exposure to adversarial threats.

Performance under No Attack (PNA) quantifies the behavior of a classifier, detector, or security enhancement in the absence of adversarial manipulation or malicious activity. It serves as the canonical baseline for any empirical investigation of robustness in adversarial machine learning, intrusion detection, runtime attestation, and related domains. PNA isolates the intrinsic generalization performance or operational overhead of a system, positioning it as the reference point against which the severity and trade-offs associated with adversarial threats are measured.

1. Formal Definition and Theoretical Foundations

PNA is defined as the evaluation of a system’s performance when both training and testing data are drawn from the original, stationary data distribution $p_0(X,Y)$, without adversarial modification. Let $A$ be a Boolean variable indicating whether a sample has been tampered with. For PNA, the adversary does not intervene ($p(A{=}T \mid Y) = 0$ for all $Y$), implying:

  • Training and test label marginals revert to $p_{tr}(Y) = p_{ts}(Y) = p_0(Y)$.
  • No adversarial conditional distributions $p_{tr}(X \mid Y, A{=}T)$ or $p_{ts}(X \mid Y, A{=}T)$ are considered.
  • All samples satisfy $p_{tr}(X \mid Y, A{=}F) = p_{ts}(X \mid Y, A{=}F) = p_0(X \mid Y)$ (Biggio et al., 2017).

Metrics employed for PNA include error rate (ERR), classification accuracy (ACC), and Receiver Operating Characteristic Area Under the Curve at low false-positive rates ($\mathrm{AUC}_{10\%}$), among others. In biometrics, the Genuine Acceptance Rate (GAR) versus False Accept Rate (FAR) at specified thresholds constitutes the PNA evaluation (Biggio et al., 2017).
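
The $\mathrm{AUC}_{10\%}$ metric restricts the ROC integral to the low-false-positive regime. A minimal NumPy sketch is given below; normalizing the truncated area by the maximum FPR (so a perfect ranker scores 1.0) is one common convention and may differ in detail from the cited work:

```python
import numpy as np

def auc_low_fpr(y_true, scores, max_fpr=0.1):
    """Area under the ROC curve restricted to FPR <= max_fpr,
    normalized by max_fpr so that a perfect ranker scores 1.0."""
    y = np.asarray(y_true, dtype=float)
    s = np.asarray(scores, dtype=float)
    order = np.argsort(-s)                      # sort samples by descending score
    y = y[order]
    P, N = y.sum(), len(y) - y.sum()
    tpr = np.concatenate(([0.0], np.cumsum(y) / P))
    fpr = np.concatenate(([0.0], np.cumsum(1.0 - y) / N))
    keep = fpr <= max_fpr                       # truncate the curve at max_fpr
    fpr_c = np.append(fpr[keep], max_fpr)
    tpr_c = np.append(tpr[keep], np.interp(max_fpr, fpr, tpr))
    # trapezoid rule over the truncated curve
    area = np.sum(np.diff(fpr_c) * (tpr_c[1:] + tpr_c[:-1]) / 2.0)
    return float(area / max_fpr)
```

A perfect ranking of positives above negatives yields 1.0; a fully inverted ranking yields 0.0.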

2. Evaluation Protocols and Mathematical Formulation

To assess PNA, established resampling protocols such as $k$-fold cross-validation or bootstrap sampling generate pairs $(\mathcal{D}_{TR}^i, \mathcal{D}_{TS}^i)$ from $p_0(X,Y)$. No attack data are injected. Model-selection hyperparameters (e.g., SVM $C$, kernel $\gamma$, feature-set size) are optimized via the chosen metric computed on $\mathcal{D}_{TR}^i$, and the system’s final PNA is the mean evaluation metric across all folds (Biggio et al., 2017).

For a model with parameters $\theta$, a loss function $L(\cdot)$, and folds $i = 1, \ldots, k$:

$$\mathrm{PNA}^i = L\bigl[f_\theta(X), Y\bigr] \quad \text{for } (X,Y) \sim p_0$$

$$\mathrm{PNA} = \frac{1}{k} \sum_{i=1}^k \mathrm{PNA}^i$$

The resulting value represents the performance in the hypothetical, attack-free operational regime.
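
The cross-validated estimate above can be sketched as follows. The Gaussian data, the nearest-centroid classifier, and the fold count are illustrative stand-ins, not the models or datasets of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the clean distribution p0(X, Y): two Gaussian classes.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.repeat([0, 1], 100)

def nearest_centroid_acc(Xtr, ytr, Xts, yts):
    """Fit class centroids on the training fold, report test-fold accuracy."""
    c = np.stack([Xtr[ytr == k].mean(axis=0) for k in (0, 1)])
    pred = np.argmin(((Xts[:, None, :] - c) ** 2).sum(-1), axis=1)
    return (pred == yts).mean()

def pna_kfold(X, y, k=5, seed=0):
    """PNA = (1/k) * sum_i PNA^i: mean attack-free test accuracy over k folds."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        ts = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        accs.append(nearest_centroid_acc(X[tr], y[tr], X[ts], y[ts]))
    return float(np.mean(accs))
```

No attack data enter either the training or the test folds, so the returned mean is the attack-free baseline.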

3. PNA in Detector-Augmented Classification and Security Architectures

For systems that cascade a detector with the main decision function, such as DNN classifiers with adversarial anomaly detectors, PNA must be evaluated conditionally on the detector’s false-positive rate (FPR). Here, every input $x$ is first passed to the detector; only if $x$ is not flagged as an attack is the DNN classifier invoked. The conditional clean-data classification accuracy is then

$$\mathrm{PNA}(\alpha) = \text{classification accuracy on clean samples with } D_{KL} \leq \tau,$$

where the detector threshold $\tau$ is set such that $\mathrm{FPR}(\tau) = \alpha$. Experimental results indicate that reducing the FPR to as low as $5\%$ leads to only a marginal drop in PNA (≤ 0.1 percentage points), confirming that well-calibrated detection incurs a negligible trade-off in attack-free accuracy (Miller et al., 2017).
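
A minimal sketch of this conditional evaluation, assuming a generic scalar detector statistic (the synthetic gamma-distributed scores and 95% base accuracy below are hypothetical stand-ins for the $D_{KL}$ statistic and classifier of the cited work):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical detector scores and classifier-correctness flags on clean inputs.
d_scores = rng.gamma(2.0, 1.0, 10_000)    # detector statistic on clean data
correct = rng.random(10_000) < 0.95       # whether the classifier is right

def pna_at_fpr(d_scores, correct, alpha=0.05):
    """Clean accuracy conditioned on the detector, with threshold tau chosen
    so that a fraction alpha of clean inputs is (wrongly) flagged."""
    tau = np.quantile(d_scores, 1.0 - alpha)   # FPR(tau) = alpha on clean data
    passed = d_scores <= tau                   # inputs the detector lets through
    return float(correct[passed].mean()), float(tau)
```

Sweeping `alpha` traces out the trade-off between detector sensitivity and attack-free accuracy.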

4. Empirical PNA: Representative Results Across Domains

Benchmark results illustrate the typical range and implications of PNA as a baseline:

| System / Domain | Metric | PNA Value(s) | Source |
|---|---|---|---|
| MNIST, LeNet-5 | Test accuracy | 98.10% | (Miller et al., 2017) |
| CIFAR-10, 16-layer DNN | Test accuracy | 89.47% | (Miller et al., 2017) |
| Spam filtering, LR/SVM | $\mathrm{AUC}_{10\%}$ | 0.056–0.085 (LR), 0.057–0.082 (SVM) | (Biggio et al., 2017) |
| Fingerprint + face fusion | GAR @ FAR = $10^{-3}$ | 0.90 | (Biggio et al., 2017) |
| Microcontroller attestation | Overhead (Surf, BEEBS) | 4.7% geometric mean | (Cirne et al., 14 Dec 2025) |
| Microcontroller attestation | Overhead (CoreMark) | 1.1% geometric mean | (Cirne et al., 14 Dec 2025) |

Additional scenarios include clean accuracy under noise-augmented training (approximately 97.2–97.5% on MNIST) and parameterized feature-selection or kernel settings in traditional pattern classification (Biggio et al., 2017), confirming that PNA is protocol-invariant but application-specific.

5. PNA and the Adversarial Robustness Trade-off

PNA anchors empirical robustness studies by quantifying the initial, “ideal” promise of a system before exposure to adversarial threat models. In security evaluation frameworks, degradation from PNA to performance-under-attack quantifies the system’s specific vulnerability to simulated attacks. For example, classifiers with indistinguishable PNA values may display divergent robustness profiles upon manipulation of feature sets or training data: a system with $\mathrm{AUC}_{10\%} = 0.085$ in spam filtering may degrade gracefully, while others with similar PNA collapse under word obfuscation (Biggio et al., 2017). This differential signals that naive model selection based on PNA alone may be insufficient for adversarial resilience.

In adversarial training settings, traditional methods (e.g., PGD-AT, TRADES) routinely sacrifice several percent of PNA compared to standard models, exposing a tension between robustness and benign-sample generalization (Hu et al., 2024). Null-space constrained adversarial training (NPDA, NPGD) achieves near-zero deterioration in PNA, recovering clean-error rates to within $\lesssim 1\%$ of the base model and substantially outperforming unconstrained adversarial methods in benign scenarios (Hu et al., 2024).

6. Interpretation in Performance-Overhead and Embedded Systems

In embedded systems security, PNA is realized as the run-time and energy overhead of enabling defense mechanisms relative to the uncompromised baseline. For PACBTI-based runtime attestation, PNA corresponds to the geometric-mean increase in execution time and power consumption for application workloads when the protection is active but no attacks are present. Observed overheads remain within 1.1–4.7% for industry-standard benchmarks, with function-call-heavy code seeing up to 28% overhead per test (Cirne et al., 14 Dec 2025). These results confirm that robust attestation can be deployed in real-time, resource-constrained environments without violating system specifications in the absence of attack.
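
The geometric-mean aggregation behind these overhead figures can be sketched as follows; the per-benchmark ratios in the test are hypothetical, not the measurements from the cited paper:

```python
import math

def geomean_overhead(ratios):
    """Geometric mean of per-benchmark runtime ratios (protected / baseline),
    reported as a percentage overhead. The geometric mean is preferred over
    the arithmetic mean for ratios so that no single benchmark dominates."""
    g = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    return (g - 1.0) * 100.0
```

For example, a suite where every test runs 10% slower yields exactly a 10% geometric-mean overhead, while one outlier (e.g., a function-call-heavy test) shifts the aggregate far less than it would an arithmetic mean.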

7. Recommendations and Limitations

PNA serves as both a necessary reference and a limiting criterion. Sound adversarial evaluation requires first establishing high PNA for the application, then tracking its degradation across a spectrum of attack capabilities. However, models that optimize solely for PNA may have significantly inferior robustness. Conversely, designs with modestly sacrificed PNA may be far more resilient under attack. Model selection, defense layering, and empirical “what-if” analysis should therefore use PNA as a baseline but prioritize robustness curves for operational deployment (Biggio et al., 2017). In deep learning, recent innovations such as null-space projected adversarial training demonstrate that zero-deterioration PNA is feasible under structural constraints, but depend on access to a high-accuracy reference model and sufficient latent dimensionality (Hu et al., 2024).

A plausible implication is that the utility of any security intervention or adversarial mitigation must ultimately be measured by its effect on PNA as well as by the delta between PNA and performance under attack. Overly conservative defenses or expensive runtime checks may degrade PNA, negating their security benefits when real-world threat likelihood is low. Hence, reporting PNA and its associated implementation costs establishes the operational relevance of security enhancements across domains.
