
Guided Complement Entropy (GCE) Loss

Updated 27 February 2026
  • GCE is a loss function that enhances adversarial robustness by modulating incorrect class probabilities based on model confidence.
  • It introduces a guidance factor, the ground-truth probability raised to a power α, that balances standard classification accuracy with defense against adversarial attacks.
  • GCE serves as a drop-in replacement for cross-entropy, delivering improved clean and adversarial performance without extra computational cost.

Guided Complement Entropy (GCE) is a loss function for deep neural network classification, introduced to enhance adversarial robustness without incurring the usual computational or data overhead associated with adversarial training or distillation. GCE simultaneously encourages high confidence for the ground-truth class and explicitly neutralizes the probabilities assigned to incorrect classes through a guidance mechanism, resulting in improved resistance to adversarial perturbations while potentially improving standard classification accuracy. All empirical and methodological details in this article are drawn from "Improving Adversarial Robustness via Guided Complement Entropy" (Chen et al., 2019).

1. Formal Definition and Mathematical Formulation

GCE operates on standard K-way classification with softmax outputs. For N training samples (x_i, y_i), denote the network's softmax output as \hat{y}_i = (\hat{y}_{i1}, \ldots, \hat{y}_{iK}), with ground-truth class index g \equiv y_i. GCE is derived from the complement entropy loss, which disperses probability among all incorrect classes, but introduces a guidance factor to modulate this effect based on model confidence.

  • Complement Entropy:

L_{\mathrm{CE}}(\theta) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j \ne g} \left( \frac{\hat{y}_{ij}}{1 - \hat{y}_{ig}} \right) \log \left( \frac{\hat{y}_{ij}}{1 - \hat{y}_{ig}} \right)

  • Guided Complement Entropy (unnormalized):

L_{\mathrm{GCE}}(\theta) = -\frac{1}{N} \sum_{i=1}^{N} \hat{y}_{ig}^{\alpha} \sum_{j \ne g} \left( \frac{\hat{y}_{ij}}{1 - \hat{y}_{ig}} \right) \log \left( \frac{\hat{y}_{ij}}{1 - \hat{y}_{ig}} \right)

with guidance exponent \alpha > 0.

  • Normalized GCE (used in practice):

L_{\mathrm{GCE}\text{-}\mathrm{norm}}(\theta) = -\frac{1}{N} \sum_{i=1}^{N} \hat{y}_{ig}^{\alpha} \, \frac{1}{\log(K-1)} \sum_{j \ne g} \left( \frac{\hat{y}_{ij}}{1 - \hat{y}_{ig}} \right) \log \left( \frac{\hat{y}_{ij}}{1 - \hat{y}_{ig}} \right)

This normalization ensures the inner entropy term lies in [0,1]. Note that since the summand \sum_{j \ne g} q \log q is non-positive, the expressions above are non-negative; in implementations, the sign of the training loss is chosen so that gradient descent simultaneously raises the ground-truth confidence \hat{y}_{ig} and the entropy of the complement distribution.

The only new hyperparameter introduced is \alpha, which governs how rapidly complement flattening is enabled as model confidence in the ground-truth class rises.
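The normalized loss above can be sketched in a few lines of NumPy (a hypothetical reference implementation for illustration, not the authors' released code; the sign is chosen so that the returned value is minimized by ordinary gradient descent):

```python
import numpy as np

def gce_loss(probs, labels, alpha=1/3, eps=1e-12):
    """Normalized Guided Complement Entropy for K-way classification.

    probs  : (N, K) softmax outputs
    labels : (N,)  integer ground-truth classes
    alpha  : guidance exponent (> 0)

    Returns a scalar in [-1, 0]; minimizing it raises the ground-truth
    confidence and flattens the distribution over incorrect classes.
    """
    n, k = probs.shape
    p_g = probs[np.arange(n), labels]                 # ground-truth probabilities
    # complement distribution q_ij = y_ij / (1 - y_ig) over the K-1 wrong classes
    q = probs / np.maximum(1.0 - p_g, eps)[:, None]
    q[np.arange(n), labels] = 0.0                     # drop the ground-truth column
    # inner sum over j != g of q log q, i.e. the negative complement entropy
    inner = np.sum(q * np.log(np.maximum(q, eps)), axis=1)
    return float(np.mean(p_g**alpha * inner) / np.log(k - 1))
```

For example, a confident prediction whose residual mass is spread uniformly over the wrong classes attains a lower (better) value than one whose residual mass is concentrated on a single wrong class.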

2. Motivation and Theoretical Underpinnings

Standard cross-entropy (XE) training exclusively rewards increasing the correct class probability, neglecting how error mass is distributed among incorrect classes. Complement entropy, in contrast, maximizes the entropy over the incorrect class distribution, penalizing peaky confusion, and empirically increasing the minimal adversarial perturbation required for misclassification.

The GCE guidance factor \hat{y}_{ig}^{\alpha} suppresses complement flattening early in training, allowing efficient initial learning of ground-truth classes. As \hat{y}_{ig} grows and the model demonstrates confident predictions, the suppression is relaxed and complement entropy regularizes the distribution over the remaining classes. Synthetic 3-class experiments in the source demonstrate that \alpha \approx 1/3 or 1/4 balances the learning dynamics, avoiding both slow convergence (if \alpha is too large) and instability (if \alpha is too small).
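The effect of \alpha on the guidance factor can be seen numerically (illustrative values computed here, not figures from the paper):

```python
import numpy as np

# Guidance factor y_g^alpha at several confidence levels.
# Smaller alpha keeps the factor closer to 1 even at low confidence,
# engaging the complement-flattening term earlier in training.
confidences = np.array([0.1, 0.5, 0.9])
for alpha in (1.0, 1/2, 1/3, 1/4):
    print(f"alpha={alpha:.2f} ->", np.round(confidences**alpha, 3))
```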

3. Training Methodology and Implementation

GCE is implemented as a drop-in replacement of the standard cross-entropy loss, without modification to data pipeline, model architecture, or optimizer. The method requires no adversarial example generation, distillation, or additional teacher/student models.

Core training pseudocode:

Given: dataset D = {(x_i, y_i)}, model f_θ, hyperparameter α.
for epoch = 1 … T:
  for batch B ⊂ D:
    compute logits z_i = f_θ(x_i) and probabilities ŷ_i = softmax(z_i)
    for each sample i in B:
      g = y_i
      p_g = ŷ_{ig}
      for j ≠ g:
        q_{ij} = ŷ_{ij} / (1 − p_g)
      L_i = p_g^α · (1 / log(K − 1)) · Σ_{j≠g} q_{ij} log q_{ij}
    L_batch = mean_{i∈B} L_i    # L_i ≤ 0; descending on L_batch raises both
                                # p_g and the complement entropy
    θ ← θ − η ∇_θ L_batch

Thus, the approach realizes "adversarial defense for free", adding negligible computational cost compared to traditional adversarial training.
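To make the loop above concrete, here is a self-contained toy run in NumPy (a hypothetical illustration: a linear softmax model on random data, with a finite-difference gradient standing in for backpropagation):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def gce_loss(probs, labels, alpha=1/3, eps=1e-12):
    # normalized GCE as defined earlier; returned value lies in [-1, 0]
    n, k = probs.shape
    p_g = probs[np.arange(n), labels]
    q = probs / np.maximum(1.0 - p_g, eps)[:, None]
    q[np.arange(n), labels] = 0.0
    inner = np.sum(q * np.log(np.maximum(q, eps)), axis=1)
    return float(np.mean(p_g**alpha * inner) / np.log(k - 1))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))             # toy inputs
y = rng.integers(0, 3, size=30)          # toy labels, K = 3
W = rng.normal(scale=0.1, size=(2, 3))   # linear classifier weights

def loss_at(W):
    return gce_loss(softmax(X @ W), y)

def num_grad(W, h=1e-6):
    # central finite differences over every weight (illustration only;
    # a real implementation would use autodiff)
    g = np.zeros_like(W)
    for idx in np.ndindex(*W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[idx] += h
        Wm[idx] -= h
        g[idx] = (loss_at(Wp) - loss_at(Wm)) / (2 * h)
    return g

before = loss_at(W)
for _ in range(60):                      # plain gradient descent on GCE
    W -= 0.1 * num_grad(W)
after = loss_at(W)
```

Nothing outside the loss changes relative to ordinary cross-entropy training, which is the sense in which the defense comes "for free".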

4. Experimental Evaluation and Results

Empirical validation was conducted on MNIST (LeNet-5), CIFAR-10/CIFAR-100 (ResNet-56), and Tiny ImageNet (ResNet-50), across standardized training schedules. Selected experimental results:

| Loss          | MNIST err% | CIFAR-10 err% | CIFAR-100 err% | Tiny ImageNet err% |
|---------------|------------|---------------|----------------|--------------------|
| XE (baseline) | 0.80       | 7.99          | 31.90          | 39.54              |
| GCE, α = 1/2  | 0.61       | 9.18          | 40.59          | 43.36              |
| GCE, α = 1/3  | 0.67       | 7.18          | 31.75          | 38.56              |
| GCE, α = 1/4  | 0.64       | 6.93          | 31.80          | 38.69              |
| GCE, α = 1/5  | 0.68       | 6.91          | 31.48          | 38.26              |

On CIFAR-10, GCE outperformed XE under adversarial attacks: for example, under FGSM attack with \epsilon = 0.04, accuracy rose from 14.76% (XE) to 41.22% (GCE), and under worst-case PGD (40-step), XE achieved 0% while GCE preserved ≈5.9% robust accuracy.

White-box attacks included FGSM, BIM, PGD, and MIM (\ell_\infty), as well as JSMA and Carlini–Wagner (\ell_2). Robustness improvements persisted when GCE was employed in adversarial training frameworks (e.g., PGD), yielding further accuracy gains in settings such as MNIST and CIFAR-10.
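For context, FGSM (the first attack listed) perturbs each input by x_adv = x + \epsilon · sign(∇_x L). A minimal sketch against a toy binary logistic model (hypothetical; a finite-difference input gradient is used purely for illustration):

```python
import numpy as np

def model_loss(x, w, label):
    # toy binary logistic model: p = sigmoid(w . x), cross-entropy on `label`
    p = 1.0 / (1.0 + np.exp(-float(x @ w)))
    return -np.log(p if label == 1 else 1.0 - p)

def fgsm(x, w, label, eps=0.04, h=1e-6):
    # estimate the input gradient by central differences, then take one
    # signed step of size eps (the l-infinity FGSM perturbation)
    g = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        g[i] = (model_loss(xp, w, label) - model_loss(xm, w, label)) / (2 * h)
    return x + eps * np.sign(g)

x = np.array([0.2, -0.1, 0.4])
w = np.array([1.0, -2.0, 0.5])
x_adv = fgsm(x, w, label=1, eps=0.04)
```

The attack raises the model's loss on the true label while moving each input coordinate by at most ε.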

Ablation studies on \alpha demonstrated its impact on descent speed and stability, supporting the adoption of \alpha between 1/4 and 1/3 across tasks. All these results are documented in detail in (Chen et al., 2019).

5. Characteristics, Benefits, and Orthogonality

GCE achieves adversarial robustness without auxiliary procedures, models, or training data, fulfilling "adversarial defense for free." It enhances both clean and adversarial accuracy relative to XE. The mechanism acts orthogonally to adversarial training methods: GCE may replace XE in frameworks such as PGD or TRADES for further robustness improvement.

The explicit flattening of probability mass among non-ground-truth classes widens the margin in probability space and yields more separable latent feature clusters. This is corroborated by visualization (t-SNE) in the original study.

6. Limitations and Open Research Questions

GCE remains empirically motivated, lacking formal robustness certificates or theoretical guarantees for worst-case adversarial perturbation. The optimal α is dataset- and K-dependent, with normalization partially mitigating but not eliminating this sensitivity.

All studies to date evaluated GCE on image-classification benchmarks up to 200 \times 200 resolution and K \leq 200 classes; its scalability to large-scale datasets such as ImageNet-1K, or to tasks such as dense prediction, is not established. As with other gradient-based defenses, it is potentially vulnerable to stronger adaptive attacks or higher-order adversaries.

Open questions include formalizing robustness guarantees, optimizing α selection methodology, and extending applicability beyond image classification (Chen et al., 2019).

References

1. Chen et al. (2019). "Improving Adversarial Robustness via Guided Complement Entropy."
