
Label Smoothing (ε=0.06)

Updated 2 February 2026
  • Label smoothing with ε=0.06 is a regularization technique that replaces one-hot labels with a weighted mix of true and uniform distributions to reduce overconfidence.
  • It forms a convex combination with weight ε=0.06, thereby raising the true-class loss gradient slightly while uniformly lowering the others', promoting better calibration and faster convergence.
  • Empirical evidence shows that ε=0.06 not only accelerates convergence but also improves model calibration, generalization, and robustness against label noise and adversarial conditions.

Label smoothing with $\epsilon = 0.06$ refers to a regularization technique in which ground-truth one-hot labels for a classification task are replaced by a convex combination of the original one-hot vector and a uniform distribution over classes, with the mixing strength parameterized by $\epsilon = 0.06$. This modification systematically caps the confidence of model predictions (raising their entropy), penalizing overconfidence and leading to well-calibrated probabilities and improved generalization across tasks and domains.

1. Mathematical Formulation and Loss Construction

Uniform label smoothing, as introduced by Szegedy et al., replaces the one-hot target $y \in \{0,1\}^K$ for class $y$ with a "soft" target $\tilde y$ defined by

$$\tilde y_j = (1-\epsilon)\cdot \mathbb{1}_{j=y} + \frac{\epsilon}{K},$$

where $K$ is the number of classes and $j \in \{1, \ldots, K\}$. For $\epsilon = 0.06$:

  • The target class receives $1-\epsilon+\epsilon/K$ (e.g., $0.9406$ out of $1$ for $K=100$).
  • Each non-target class receives $\epsilon/K$ (e.g., $0.0006$ for $K=100$).
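The bullet values above can be reproduced directly from the formula (a minimal pure-Python sketch; the class index and $K=100$ are the illustrative choices from the example):

```python
def smooth_labels(true_class, K, eps=0.06):
    """Soft target: (1 - eps) on the true class plus eps/K spread
    uniformly over all K classes."""
    return [(1 - eps) * (j == true_class) + eps / K for j in range(K)]

target = smooth_labels(true_class=3, K=100)
print(round(target[3], 4))    # 0.9406  (= 1 - eps + eps/K)
print(round(target[0], 6))    # 0.0006  (= eps/K for every non-target class)
print(round(sum(target), 6))  # 1.0     (still a valid probability distribution)
```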

This smoothed label is used in the cross-entropy loss:

$$\mathcal{L}_{LS} = - \sum_{j=1}^K \tilde y_j\, \log p_j,$$

where $p_j$ denotes the predicted probability for class $j$. For binary or $K$-class logistic regression, smoothed targets take analogous forms, with the loss function modified accordingly (Liu et al., 2020, Chen et al., 2020).

The modification imposes a regularization gradient: relative to one-hot training, the loss gradient on the true-class logit is inflated by $\epsilon(1-1/K)$ and the gradient on each other logit is deflated by $\epsilon/K$, effectively applying a uniform regularization push toward higher-entropy (less confident) outputs (Xia et al., 2024).
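Under the $\epsilon/K$ parameterization of Section 1, this gradient shift can be checked directly: the smoothed-CE gradient with respect to the logits is $p_j - \tilde y_j$, so relative to one-hot targets the true-class gradient rises by $\epsilon(1-1/K)$ and each other class's falls by $\epsilon/K$ (a self-contained sketch; the logits are made up for illustration):

```python
import math

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def ce_grad(logits, target):
    """Gradient of cross-entropy with soft targets w.r.t. the logits:
    dL/dz_j = softmax(z)_j - target_j."""
    p = softmax(logits)
    return [pj - tj for pj, tj in zip(p, target)]

K, y, eps = 5, 2, 0.06
logits = [0.5, -1.2, 2.0, 0.3, -0.4]   # arbitrary example logits
onehot = [float(j == y) for j in range(K)]
smooth = [(1 - eps) * onehot[j] + eps / K for j in range(K)]

g_oh = ce_grad(logits, onehot)
g_ls = ce_grad(logits, smooth)
print(round(g_ls[y] - g_oh[y], 6))  # 0.048  = eps * (1 - 1/K)
print(round(g_ls[0] - g_oh[0], 6))  # -0.012 = -eps / K
```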

2. Theoretical Underpinnings: Calibration, Generalization, and Information Bottleneck

Label smoothing at $\epsilon=0.06$ acts as a bias-variance regularizer by shifting probability mass from the true class to other classes, controlling generalization loss in the presence of label noise (Chen et al., 2020), and capping the maximum attainable confidence for any class. In the information-theoretic paradigm, uniform label smoothing is equivalent to a variational information bottleneck with bias parameter $\beta = \epsilon/(1-\epsilon)$, efficiently trading off between compression and sufficiency of the learned representation (Kudo, 12 Aug 2025).

Uniform label smoothing can be viewed as optimizing

$$\mathcal{L}_{LS}(\theta; \alpha) = (1-\alpha)\, H(p_{\text{data}}, q_\theta) + \alpha\, D_{\mathrm{KL}}(u \,\|\, q_\theta) + \mathrm{const},$$

where $u$ is the uniform prior, $H$ is cross-entropy, and $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence (Meister et al., 2020). This yields an output distribution strictly bounded away from zero, prohibiting exact sparsity in predicted class probabilities.
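This decomposition can be checked numerically: with the $\epsilon/K$ smoothing of Section 1, $\alpha$ corresponds to $\epsilon$ and the constant equals $\epsilon \log K$, since $H(u, q) = D_{\mathrm{KL}}(u \| q) + H(u)$ (a sketch with a made-up predicted distribution $q$):

```python
import math

def cross_entropy(t, q):
    """H(t, q) = -sum_j t_j log q_j for two distributions over K classes."""
    return -sum(tj * math.log(qj) for tj, qj in zip(t, q))

K, y, eps = 4, 1, 0.06
q = [0.10, 0.70, 0.15, 0.05]          # made-up model distribution q_theta
onehot = [float(j == y) for j in range(K)]
uniform = [1.0 / K] * K
smooth = [(1 - eps) * onehot[j] + eps / K for j in range(K)]

lhs = cross_entropy(smooth, q)        # smoothed cross-entropy loss
kl = sum(u * math.log(u / qj) for u, qj in zip(uniform, q))  # D_KL(u || q)
# With alpha = eps, the constant term is eps * H(u) = eps * log K:
rhs = (1 - eps) * cross_entropy(onehot, q) + eps * kl + eps * math.log(K)
print(abs(lhs - rhs) < 1e-9)  # True: both sides agree to float precision
```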

Moreover, in the generalized framework, label smoothing is one endpoint (the $\alpha \rightarrow 1$ case) in a family of entropy-promoting regularizers based on $\alpha$-Jensen divergences; it yields fully dense output distributions with no exact zeros, which may be undesirable in some structured prediction or interpretable ML tasks (Meister et al., 2020).

3. Practical Effects: Calibration, Robustness, and Empirical Performance

Calibration and Confidence Control

With $\epsilon=0.06$, label smoothing significantly improves confidence calibration and reduces Expected Calibration Error (ECE). On CIFAR-100 (ResNet-18), ECE with uniform label smoothing ($\epsilon=0.06$) drops from baseline values (e.g., from $22.96\%$ to $2.35\%$ in hist-based prediction ECE), with further gains from similarity-aware smoothing schemes (Liu et al., 2020). These findings reflect that smoothing eliminates over-confident errors and yields more diagonal reliability diagrams, a property empirically confirmed in vision, NLP, and code summarization models (Haque et al., 2023).

Generalization and Robustness

Label smoothing yields improved generalization, especially under label noise or adversarial scenarios. In noisy-label benchmarks, static $\epsilon=0.06$ label smoothing increases CIFAR-10 test accuracy from $78.2\%$ to $84.0\%$ at $50\%$ symmetric noise; more aggressive smoothing induces underfitting (e.g., $\epsilon=0.15$ reduces accuracy) (Ko et al., 2022). In text and code generation, $\epsilon=0.06$–$0.1$ consistently produces higher BLEU, ROUGE-L, or classification accuracy, with $\epsilon=0.06$ typically providing an optimal trade-off between under- and over-smoothing (Gao et al., 2023, Haque et al., 2023).

In adversarial NLP robustness, higher smoothing (e.g., $\epsilon=0.10$–$0.45$) further lowers attack success rates and adversarial confidence; the effect is monotonic and plateaus above $\epsilon \approx 0.45$. No evidence suggests $\epsilon=0.06$ is sub-optimal here, and moderate values between $0.05$ and $0.1$ are empirically robust (Yang et al., 2022).

Model Dynamics and Convergence

Label smoothing accelerates convergence: in sentiment classification, LS models reach peak validation accuracy $2$–$3\times$ faster than one-hot baselines (Gao et al., 2023). Final-layer representations under LS are more separable, as t-SNE projections show tighter, less-overlapping clusters.

Risk-Coverage Trade-off and Selective Classification

A documented downside is that label smoothing at $\epsilon=0.06$ degrades selective classification, i.e., the ability to use model confidence for error rejection. The mechanism involves stronger suppression of the max logit for correct predictions than for incorrect ones, collapsing the margin required for accurate scoring of model uncertainty. Post-hoc logit normalization (e.g., $p$-norm scaling or mean subtraction at inference) can recover lost selective coverage without hurting top-1 accuracy (Xia et al., 2024).
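A sketch of such a post-hoc confidence score, assuming a simple combination of mean subtraction followed by $p$-norm scaling (the exact normalization details in the paper may differ; the two logit vectors are fabricated for illustration):

```python
def pnorm_normalized_score(logits, p=2):
    """Post-hoc confidence score: mean-subtract the logits, then divide the
    max centred logit by the l_p norm of the centred vector."""
    mean = sum(logits) / len(logits)
    centred = [z - mean for z in logits]
    norm = sum(abs(z) ** p for z in centred) ** (1.0 / p)
    return max(centred) / norm

# Two fabricated predictions sharing the same max logit:
confident = [4.0, 0.1, 0.2, 0.3]   # large margin over the runner-up
ambiguous = [4.0, 3.8, 3.9, 3.7]   # tiny margin: high uncertainty
print(pnorm_normalized_score(confident) > pnorm_normalized_score(ambiguous))  # True
```

Unlike the raw max logit (identical here, $4.0$), the normalized score separates the two cases, restoring a usable ranking for selective rejection.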

4. Adaptive and Structured Smoothing Variants

Uniform $\epsilon$ is rarely optimal across example types or feature-space regions. Several approaches introduce adaptivity:

  • Structural Label Smoothing (SLS): The smoothing parameter $\epsilon_c$ adapts per data cluster, increasing for regions of high Bayes error overlap and decreasing for reliably classified areas, mitigating the bias introduced by uniform smoothing (Li et al., 2020).
  • Adaptive LS via Instance Uncertainty: Smoothing strength is set per-example as a function of model entropy, growing as the model becomes overconfident during training and suppressing over-confident predictions when needed (Lee et al., 2022).
  • Similarity-Based Smoothing: Replaces the uniform distribution with a class-similarity–weighted prior, further enhancing calibration by assigning more probability mass to semantically/feature-similar classes rather than distributing it evenly (Liu et al., 2020).

These variants empirically outperform fixed uniform smoothing in test error, calibration, and noisy-label robustness, especially in settings with complex, heterogeneous class boundaries.
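As an illustration of the similarity-based variant, the $\epsilon$ mass can be allocated in proportion to class-similarity scores instead of uniformly (exact parameterizations vary across papers; this sketch keeps $1-\epsilon$ on the target class, and the similarity values are hypothetical, e.g. word2vec cosine scores):

```python
def similarity_smoothed_target(true_class, sims, eps=0.06):
    """Distribute the eps mass over non-target classes in proportion to
    their similarity to the true class; sims[true_class] is ignored."""
    K = len(sims)
    weights = [s if j != true_class else 0.0 for j, s in enumerate(sims)]
    total = sum(weights)
    target = [eps * w / total for w in weights]
    target[true_class] = 1 - eps
    return target

# Hypothetical similarities of three other classes to class 0:
t = similarity_smoothed_target(0, [1.0, 0.6, 0.3, 0.1], eps=0.06)
print(t[0])                # 0.94
print(round(sum(t), 6))    # 1.0
print(t[1] > t[3])         # True: more mass on the more similar class
```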

5. Recommendations for Hyperparameter Selection and Implementation

Practical Guidelines

  • Hyperparameter sweep: Empirically, moderate $\epsilon \in [0.05, 0.1]$ (including $\epsilon=0.06$) maximizes accuracy and generalization across image, text, and code classification/generation tasks (Gao et al., 2023, Haque et al., 2023, Ko et al., 2022, Li et al., 2020).
  • Label noise: For label noise at rate $r$, theory and experiment suggest setting $\epsilon \approx r$ or slightly above, guided by the estimated clean rate $a$ via $\epsilon = 1-a$ (Chen et al., 2020).
  • Implementation: Most ML frameworks natively support label smoothing. For PyTorch:

    import torch
    criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.06)
    loss = criterion(logits, y_true)

    For non-integrated cases, manually smooth the labels via the $(1-\epsilon)\cdot\text{one-hot} + \epsilon/K$ formula (Kudo, 12 Aug 2025).
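A pure-Python sketch of that manual route, combining the smoothed target with a log-softmax; it mirrors what `CrossEntropyLoss(label_smoothing=0.06)` computes for a single unbatched example (the logits below are made up):

```python
import math

def smoothed_cross_entropy(logits, true_class, eps=0.06):
    """Smoothed CE for one example: build the soft target
    (1-eps)*one_hot + eps/K, then take -sum(target * log_softmax(logits))."""
    K = len(logits)
    m = max(logits)                                   # stabilize log-sum-exp
    log_z = m + math.log(sum(math.exp(v - m) for v in logits))
    log_probs = [v - log_z for v in logits]
    target = [(1 - eps) * (j == true_class) + eps / K for j in range(K)]
    return -sum(t * lp for t, lp in zip(target, log_probs))

loss = smoothed_cross_entropy([2.0, 0.5, -1.0], true_class=0)
print(loss > 0)  # True: a positive scalar, as with the built-in criterion
```

With `eps=0.0` the function reduces to ordinary cross-entropy, which makes the smoothing term easy to isolate when debugging.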

Trade-offs

  • Too small $\epsilon$ ($<0.01$): marginal benefit over one-hot; latent overfitting.
  • Too large $\epsilon$ ($>0.1$): under-represents the true class; risks underfitting, reduced accuracy, and weaker class discrimination.
  • Instance-adaptive $\epsilon$ / structured smoothing: generally yields superior sample efficiency and calibration, and eliminates the need for a tight $\epsilon$ grid search (Lee et al., 2022, Lu et al., 2023, Li et al., 2020).

6. Extensions, Limitations, and Future Directions

  • Entropy Regularization Connections: Label smoothing is a special case ($\alpha \to 1$) of a broader $\alpha$-Jensen divergence entropy regularizer. Alternative entropic and sparsity-promoting regularizers (e.g., confidence penalty, generalized entropy) may be preferable when sparsity of the output distribution is essential (Meister et al., 2020).
  • Calibration versus Uncertainty Ranking: Label smoothing can impair the fidelity of softmax confidence as a selective-rejection criterion. Simple post-hoc logit normalization fully restores the risk-coverage trade-off (Xia et al., 2024).
  • Model Misspecification and Robustness: The main benefit of label smoothing, especially at moderate $\epsilon$ (e.g., $0.06$), is robustification against label noise and model misspecification. Modifying only the loss function rather than the probability estimation (e.g., using MLSLR instead of LSLR) further improves both calibration and generalization (Yamasaki et al., 2023).
  • Bi-level and Data-driven Regularization: Advanced approaches like LABO generate the entire smoothing distribution optimally per-instance, strictly outperforming any fixed ε baseline at negligible extra cost (Lu et al., 2023).
  • Information Bottleneck View: Label smoothing implements the pragmatic discrete information bottleneck, balancing output compression and sufficiency, and is IB-optimal for models with enough capacity and no label conflicts (Kudo, 12 Aug 2025).

7. Empirical Comparison Table for $\epsilon=0.06$ (Image Classification)

Model/Method | Hist-P ECE (%) | Top-1 Acc (%) | Calibration Strategy
One-hot baseline | 22.96 | — | None
Uniform LS ($\epsilon=0.06$) | 2.35 | — | Uniform LS
Class-similarity (word2vec) | 1.74 | — | Informed LS

Uniform LS at $\epsilon=0.06$ reduces ECE by an order of magnitude relative to the one-hot baseline, with structured/similarity-based smoothing providing an additional $20$–$40\%$ advantage in calibration, especially valuable in safety-critical decision-making scenarios (Liu et al., 2020).


References: (Liu et al., 2020, Gao et al., 2023, Ko et al., 2022, Li et al., 2020, Yang et al., 2022, Lee et al., 2022, Meister et al., 2020, Lu et al., 2023, Xia et al., 2024, Haque et al., 2023, Kudo, 12 Aug 2025, Chen et al., 2020, Yamasaki et al., 2023)
