
Paradox Severity Indexing

Updated 30 November 2025
  • Paradox severity indexing is a quantitative framework that measures the magnitude and impact of paradoxes in multivariate evaluation systems.
  • In LLM evaluation, the Paradox Severity Index quantifies divergence between judge-based conceptual accuracy and binary scoring, highlighting cases of measurement failure.
  • By extending to weighted multi-issue voting, the index exposes methodological incoherence and enables transparent cross-domain comparisons.

Paradox severity indexing is a set of quantitative frameworks designed to measure the magnitude and impact of paradoxes arising in multivariate evaluation systems, particularly when different scoring modalities or aggregation schemes yield conflicting notions of “consensus” or correctness. This approach is especially salient in domains where classical test theory, judgment-based scoring, and exact-match metrics interact nontrivially, such as AI benchmark evaluation and multi-issue social choice. Paradox severity indices provide tight, interpretable numeric bounds on the worst-case discrepancy between intuitive, composite, or majority-based outcomes and those preferred under alternate or theoretically “natural” regimes, highlighting regions of methodological incoherence and enabling transparent cross-model and cross-domain comparisons.

1. Paradox Severity Index in LLM Evaluation

The Paradox Severity Index (PSI), as introduced in "The Catastrophic Paradox of Human Cognitive Frameworks in LLM Evaluation" (Reddy, 23 Nov 2025), quantifies the extent to which traditional binary scoring regimes diverge from judge-based conceptual accuracy in the evaluation of frontier LLMs. Specifically, PSI up-weights this divergence by the model’s Classical Test Theory (CTT)–scaled IQ, exposing cases where higher measured intelligence coincides with catastrophic measurement failure.

Formally, for model $i$:

$$\mathrm{PSI}_i = |\mathrm{JudgeAcc}_i - \mathrm{BinaryAcc}_i| \times \frac{\mathrm{IQ}_{\mathrm{CTT},i}}{100}$$

where

  • $\mathrm{JudgeAcc}_i$ is the mean LLM-as-judge conceptual accuracy,
  • $\mathrm{BinaryAcc}_i$ is the mean exact-match binary accuracy across the same items,
  • $\mathrm{IQ}_{\mathrm{CTT},i}$ is the model's CTT-scaled IQ score.

A worked example demonstrates the index:

  • If $\mathrm{JudgeAcc}_X = 0.48$, $\mathrm{BinaryAcc}_X = 1.00$, and $\mathrm{IQ}_X = 100$,
  • the raw gap is $|0.48 - 1.00| = 0.52$,
  • so $\mathrm{PSI}_X = 0.52 \times 1.0 = 0.52$.

Empirical values (Table 7 of Reddy, 23 Nov 2025) range from 0.39 to 0.60 across nine state-of-the-art LLMs; higher PSI indicates greater paradoxical misalignment between individually plausible metrics.
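The index is straightforward to compute from per-model aggregates. A minimal Python sketch reproducing the worked example above (function and variable names are illustrative, not from the source):

```python
def paradox_severity_index(judge_acc: float, binary_acc: float, iq_ctt: float) -> float:
    """PSI_i = |JudgeAcc_i - BinaryAcc_i| * (IQ_CTT,i / 100)."""
    return abs(judge_acc - binary_acc) * (iq_ctt / 100.0)

# Worked example from the text: JudgeAcc = 0.48, BinaryAcc = 1.00, IQ = 100.
psi_x = paradox_severity_index(judge_acc=0.48, binary_acc=1.00, iq_ctt=100.0)
print(f"PSI_X = {psi_x:.2f}")  # -> PSI_X = 0.52
```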

2. Severity Indexing in Weighted Multi-Issue Voting

In multi-issue collective decision making, paradox severity indexing addresses the systematic divergence between the issue-wise majority and majority-supported proposals in weighted binary voting, including Anscombe’s and Ostrogorski’s paradoxes. The critical parameter is the maximum average topic weight, denoted $\tilde{w}_{\max}$, representing the highest concentration of voter weight across topics.

For $n$ voters over $t$ binary issues, each with a unit-sum weight vector $w_i \in [0,1]^t$:

$$\tilde w_j = \frac{1}{n} \sum_{i=1}^n w_{i,j}, \qquad \tilde w_{\max} = \max_{j \in [t]} \tilde w_j$$

The worst-case distance $g_{\ell}$ from the issue-wise majority, over all instances with $\tilde w_{\max} = \ell \in (0,1)$, is bounded piecewise:

$$g_{\ell} \leq \begin{cases} \frac{1}{2} + \frac{\ell}{2}, & 0 < \ell < \frac{1}{3} \\ 1 - \ell, & \frac{1}{3} \leq \ell \leq \frac{1}{2} \\ \ell, & \frac{1}{2} < \ell < 1 \end{cases}$$

This bound is tight for a dense set of $\ell$ and for all $\ell > 1/2$ (Baharav et al., 20 Feb 2025).
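Both $\tilde w_{\max}$ and the bound are simple to evaluate. A minimal sketch, assuming the weights are supplied as an $n \times t$ NumPy array with unit-sum rows (function names are illustrative):

```python
import numpy as np

def max_average_topic_weight(weights: np.ndarray) -> float:
    """Return w_max: the largest column mean of an (n, t) weight matrix."""
    return float(weights.mean(axis=0).max())

def worst_case_distance_bound(ell: float) -> float:
    """Piecewise upper bound on g_ell for w_max = ell in (0, 1)."""
    if ell < 1.0 / 3.0:
        return 0.5 + ell / 2.0
    if ell <= 0.5:
        return 1.0 - ell
    return ell

# Three voters, two issues; each weight row sums to 1.
W = np.array([[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]])
ell = max_average_topic_weight(W)           # ~0.7
print(ell, worst_case_distance_bound(ell))  # -> ~0.7 ~0.7
```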

3. Interpretation Bands and Heuristics

In LLM evaluation (Reddy, 23 Nov 2025), PSI bands are heuristically interpreted as follows; a small helper encoding the bands is sketched after the list:

  • PSI $< 0.30$: mild paradox; judge and binary scores largely agree.
  • $0.30 \leq$ PSI $\leq 0.50$: moderate paradox; substantial misalignment.
  • PSI $> 0.50$: severe paradox; catastrophic measurement failure.
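A minimal helper encoding these heuristic bands (thresholds and labels follow the list above):

```python
def psi_band(psi: float) -> str:
    """Map a PSI value onto the heuristic severity bands."""
    if psi < 0.30:
        return "mild"      # judge and binary scores largely agree
    if psi <= 0.50:
        return "moderate"  # substantial misalignment
    return "severe"        # catastrophic measurement failure

print(psi_band(0.52))  # -> severe (cf. the worked example in Section 1)
```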

In weighted voting (Baharav et al., 20 Feb 2025), $\tilde{w}_{\max}$ quantifies severity: small $\tilde{w}_{\max}$ corresponds to mild paradoxes (per-issue consensus aligns closely with proposal consensus), while large $\tilde{w}_{\max}$ allows paradox-induced deviations up to full disagreement.

4. Complementarity with Item Response Theory and Judge Validation

PSI supplements latent ability modeling (2PL IRT, with $\theta_i$ for ability, $b_j$ for difficulty, and $a_j$ for discrimination), enabling a dual-axis analysis of ability and paradoxical misalignment. Judge-vendor validation, using rubric-based, cross-vendor LLM-as-Judge protocols, helps ensure that $\mathrm{JudgeAcc}_i$ robustly isolates conceptual correctness. PSI's interpretability is conditional on rigorous conceptual scoring; without such validation, the index is unreliable.
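For reference, the standard 2PL item response function assumed here models the probability that model $i$ answers item $j$ correctly as

$$P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-a_j(\theta_i - b_j)}},$$

so PSI adds a misalignment axis orthogonal to the latent-ability axis $\theta_i$.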

In voting, $\tilde{w}_{\max}$ is computed in $O(t)$ time and flags “dangerous” weight distributions: high concentrations of topic weight permit severe paradoxes, while evenly distributed weights align with classical consensus bounds.

5. Generalization to Other AI and Social Choice Evaluation Domains

Paradox severity indexing is extensible wherever two qualitatively distinct scoring methods (one conceptually focused and validated) exhibit systematic divergence and a normative scaling factor (IQ, Elo, percentile) is available. Example applications:

  • Computer vision: pixel-level matches vs. human annotation, weighted by validated gold-standard accuracy.
  • Dialogue systems: automated BLEU vs. human perception, possibly multiplied by fluency or coherence norms.
  • Robotics: sensor-driven success rates vs. expert evaluations, up-weighted by capability scores.

Necessary conditions for generalization are: (a) two complementary, domain-relevant scoring methods; (b) reliable conceptual (often human or judge-based) scoring; and (c) anchoring by a cross-system scale so that larger model-level gaps induce higher index values.
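Under these conditions, the PSI construction generalizes directly. A hedged sketch (all names are illustrative; the anchor value of 100 mirrors the IQ scaling above and would change with the chosen scale):

```python
def generalized_severity_index(
    conceptual_score: float,      # validated conceptual metric (e.g., human annotation)
    mechanical_score: float,      # automated metric (e.g., exact match, BLEU, pixel IoU)
    scale_value: float,           # cross-system anchor (e.g., IQ, Elo, percentile)
    scale_anchor: float = 100.0,  # scale value at which the multiplier equals 1.0
) -> float:
    """Severity = |conceptual - mechanical| * (scale / anchor), mirroring PSI."""
    return abs(conceptual_score - mechanical_score) * (scale_value / scale_anchor)
```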

This suggests that paradox severity indices can systematically expose architecture-specific failures in metric validity, facilitating domain-aware evaluation frameworks.

6. Wagner’s Rule and Sufficient Paradox-Preclusion Conditions

A consequence in weighted multi-issue voting (Baharav et al., 20 Feb 2025) is a sufficient condition for paradox avoidance: if the average majority $\overline{m} \geq \frac{3}{4}$ (the issue-wise majority shares weighted by $\tilde w_j$ and normalized), then Anscombe's paradox cannot occur; the issue-wise majority outcome is protected against defeat. This complements the severity index by establishing consensus thresholds above which paradoxes are impossible, irrespective of the topic weight distribution.
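A hedged sketch of this sufficient condition, assuming an $(n, t)$ unit-sum weight matrix and binary votes; because the average topic weights already sum to 1, the $\tilde w_j$-weighted average needs no further normalization (function and variable names are illustrative):

```python
import numpy as np

def anscombe_precluded(weights: np.ndarray, votes: np.ndarray) -> bool:
    """True if the weighted average majority m_bar >= 3/4 (paradox impossible)."""
    w_tilde = weights.mean(axis=0)                     # average topic weights (sum to 1)
    support = votes.mean(axis=0)                       # share voting 1 on each issue
    majority_share = np.maximum(support, 1 - support)  # issue-wise majority shares
    m_bar = float(w_tilde @ majority_share)            # weighted average majority
    return m_bar >= 0.75

# Four equally weighted voters, two issues; 3/4 majority on each issue.
W = np.full((4, 2), 0.5)
V = np.array([[1, 1], [1, 1], [1, 0], [0, 1]])
print(anscombe_precluded(W, V))  # -> True (m_bar = 0.75)
```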

7. Computational and Practical Considerations

Both PSI and $\tilde{w}_{\max}$ are simple to compute, scale linearly in the number of tasks or issues, and can be incorporated into automated benchmark reporting. In practice, high index values flag domains, regimes, or configurations where standard evaluation collapses, motivating further methodological refinement and the development of substrate-sensitive assessment protocols. The framework thus supports transparent, numeric quantification of measurement collapse and sharpens the boundary between biologically grounded and architecture-native testing in AI and voting systems.
