
Normalized Relative Offset DCG (N-RODCG)

Updated 7 March 2026
  • The metric N-RODCG is introduced to jointly penalize both the classifier's confidence drop and rank demotion in adversarial settings.
  • It combines a logarithmic discount from traditional DCG with a linear offset for true class displacement, providing a nuanced score.
  • Empirical evaluations on models like VGG19 with ImageNet validate its sensitivity over binary accuracy, aiding robust adversarial analysis.

Normalized Relative Offset Discounted Cumulative Gain (N-RODCG) is a metric for evaluating neural network classifier performance that explicitly incorporates both the rank and confidence of the true class in the output probability vector. Developed to address the inadequacy of classical accuracy-based metrics for adversarial robustness evaluation, N-RODCG combines established ranking principles from information retrieval with model-specific response characteristics, providing a continuous, fine-grained score for multiclass classification systems, particularly in adversarial contexts (Brama et al., 2022).

1. Motivation and Background

Traditional evaluation metrics such as top-1 accuracy or top-k hit rate yield only coarse, binary feedback about classifier performance, and often fail to distinguish the nuanced effects of adversarial attacks from the partial recovery achieved by defenses. These limitations are especially pronounced in settings where adversarial perturbations cause the true class both to decline in model confidence and to be demoted from the top rank. Discounted Cumulative Gain (DCG) and its normalized form (NDCG), drawn from information retrieval, supply a finer-grained, rank-aware alternative, but their standard instantiations account neither for the classifier's predicted probability nor for explicit linear penalties associated with class displacement.

2. Mathematical Formulation

Given a classifier over $K$ classes producing a normalized probability vector $P = (p_1, p_2, \ldots, p_K)$ with $\sum_k p_k = 1$, let $C$ denote the true (ground-truth) class index. The entries of $P$ are sorted in descending order, and $r^*$ is defined as the rank of the true class (i.e., $r^* = 1$ if the model assigns the highest probability to the true class). Three quantities are computed:

  • Gain term $g = p_C$ (the model’s probability assigned to the true class)
  • Discount factor $D(r^*) = \frac{1}{\log_2(1 + r^*)}$ (penalizing lower ranks logarithmically, as in classic DCG)
  • Relative offset $O(r^*) = \frac{K - r^*}{K - 1}$ (a linear penalty for displacement, maximal at top rank, vanishing at the bottom)

The Relative-Offset DCG is then given by

$$\mathrm{RODCG}(P,C) = \frac{p_C}{\log_2(1 + r^*)} \times \frac{K - r^*}{K - 1}.$$

This explicitly blends the model’s confidence, DCG’s logarithmic discount, and an additional linear offset penalizing deeper rank positions (Brama et al., 2022).
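
The definition above can be sketched in a few lines of NumPy (a minimal illustration under the formulas in this section, not the authors' reference implementation; the function name and the stable-sort tie-breaking are our own choices):

```python
import numpy as np

def rodcg(p, true_class):
    """Relative-Offset DCG for a single prediction.

    p          : probability vector over the K classes (sums to 1)
    true_class : 0-based index of the ground-truth class C
    """
    p = np.asarray(p, dtype=float)
    k = p.size
    # Rank r* of the true class when probabilities are sorted in
    # descending order (rank 1 = highest probability); ties are
    # broken by original index via the stable sort.
    order = np.argsort(-p, kind="stable")
    r = int(np.where(order == true_class)[0][0]) + 1
    gain = p[true_class]               # g = p_C
    discount = 1.0 / np.log2(1 + r)    # DCG's logarithmic discount
    offset = (k - r) / (k - 1)         # linear relative offset O(r*)
    return gain * discount * offset
```

Since $\mathrm{RODCG}_{\max} = 1$ (see Section 4), the returned value is already the normalized N-RODCG score.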

3. Relationship to Standard NDCG

Standard single-relevant-item DCG computes the gain with $rel = 1$ for the true class: $\mathrm{DCG} = \frac{2^1 - 1}{\log_2(1 + r^*)} = \frac{1}{\log_2(1 + r^*)}$, and normalizes by the ideal DCG (attained when $r^* = 1$). If probabilistic relevance is used, $rel = p_C$, and the gain becomes $2^{p_C} - 1$, still over the logarithmic discount.

RODCG departs in two respects:

  • It uses $p_C$ directly, not $2^{p_C} - 1$, as the gain.
  • It multiplies by $O(r^*)$, introducing a linear offset that penalizes deeper ranks more heavily than the log discount alone.

This composition “fills in” the missing notion of linear rank-offset inside the DCG template while retaining DCG’s sensitivity to position.
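
The gain-side difference is easy to see numerically (a throwaway sketch; the function names are illustrative, not from the paper):

```python
import math

def prob_relevance_gain(p_c):
    # Standard DCG gain under probabilistic relevance: 2^{p_C} - 1
    return 2.0 ** p_c - 1.0

def rodcg_gain(p_c):
    # RODCG takes the probability itself as the gain
    return p_c

# Both gains agree at p_C = 0 and p_C = 1, but the exponential form
# under-weights intermediate confidences relative to the linear one.
for p_c in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p_c, round(prob_relevance_gain(p_c), 4), rodcg_gain(p_c))
```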

4. Normalization and Range

The maximal RODCG value occurs when the classifier assigns full certainty ($p_C = 1$) and top rank ($r^* = 1$) to the true class:

$$\mathrm{RODCG}_{\max} = \frac{1}{\log_2(2)} \cdot 1 = 1.$$

Thus, the normalized version (“N-RODCG”) is simply

$$\mathrm{N\!-\!RODCG}(P,C) = \frac{\mathrm{RODCG}(P,C)}{\mathrm{RODCG}_{\max}} = \mathrm{RODCG}(P,C),$$

and has range $[0, 1]$. The ideal score is always 1, supporting direct interpretation in terms of percent-of-ideal.

5. Illustrative Example

For $K = 5$, suppose the classifier output is $P = [0.1, 0.4, 0.2, 0.2, 0.1]$ with true class $C = 3$, so $p_C = 0.2$. After sorting, the true class is at rank $r^* = 2$.

The components:

  • $g = 0.2$
  • $D(r^*) = \frac{1}{\log_2(1 + 2)} \approx 0.63093$
  • $O(r^*) = \frac{5 - 2}{4} = 0.75$

Thus,

$$\mathrm{RODCG} = 0.2 \times 0.63093 \times 0.75 \approx 0.09464.$$

Normalizing,

$$\mathrm{N\!-\!RODCG} \approx 0.09464.$$
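
The arithmetic above can be checked directly with a short script using the example's numbers:

```python
import math

P = [0.1, 0.4, 0.2, 0.2, 0.1]   # classifier output, K = 5
C = 3                            # 1-based true-class index, p_C = 0.2
K = len(P)

r = sorted(P, reverse=True).index(P[C - 1]) + 1   # rank of true class: r* = 2
gain = P[C - 1]                    # g = 0.2
discount = 1.0 / math.log2(1 + r)  # ~0.63093
offset = (K - r) / (K - 1)         # (5 - 2) / 4 = 0.75

rodcg = gain * discount * offset   # ~0.09464, already normalized
print(round(rodcg, 5))
```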

6. Rationale for Adversarial Evaluation

N-RODCG is specifically designed for adversarial scenarios in neural network classification:

  • Jointly penalizes confidence and displacement: adversarial perturbations generally decrease $p_C$ and increase $r^*$. The multiplicative penalty captures both effects.
  • Continuous, non-binary scoring: unlike top-1 or top-k accuracy metrics, N-RODCG produces real-valued scores in $[0, 1]$, supporting nuanced comparison among adversarial attacks and defenses.
  • Enhanced position sensitivity: the linear offset term ensures mild penalties for minor rank shifts and severe penalties for deep demotions (e.g., $r^* = 100$ of $K = 1000$).
  • Direct normalization: the maximum is always 1, facilitating straightforward percent-of-ideal recovery estimates.
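
The position-sensitivity point can be illustrated by comparing the positional factors alone, leaving the gain fixed (a sketch with hypothetical ranks, using $K = 1000$ as in an ImageNet-scale classifier):

```python
import math

K = 1000  # e.g., ImageNet's class count

def ndcg_pos(r):
    # Positional factor of standard single-relevant-item NDCG
    return 1.0 / math.log2(1 + r)

def rodcg_pos(r, k=K):
    # RODCG's positional factor: log discount times linear offset
    return ndcg_pos(r) * (k - r) / (k - 1)

# Small demotions are penalized mildly; deep demotions are penalized
# increasingly harshly relative to the log discount alone.
for r in (1, 10, 100, 500):
    print(r, round(ndcg_pos(r), 4), round(rodcg_pos(r), 4))
```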

Potential limitations include:

  • Requirement of access to a true-class oracle and known $K$.
  • The offset formulation $(K - r^*)/(K - 1)$ is heuristic and may be tuned for specific tasks.
  • For very small $p_C$ at deep ranks, N-RODCG scores can become numerically small and require careful scaling for interpretability (Brama et al., 2022).

7. Empirical Performance and Interpretive Notes

Empirical evaluation on VGG19 with ImageNet demonstrates that N-RODCG outperforms conventional classification metrics in informativeness and distinctiveness when measuring both attack impact and the recovery effect of defenses. By providing a sensitive joint measure of position and model-assigned confidence, N-RODCG enables rigorous fine-grained evaluation across different adversarial and defensive strategies, contributing to improved metric design for neural network robustness assessment (Brama et al., 2022).
