
From Debate to Decision: Conformal Social Choice for Safe Multi-Agent Deliberation

Published 9 Apr 2026 in cs.AI, cs.MA, and cs.SI | (2604.07667v1)

Abstract: Multi-agent debate improves LLM reasoning, yet agreement among agents is not evidence of correctness. When agents converge on a wrong answer through social reinforcement, consensus-based stopping commits that error to an automated action with no recourse. We introduce Conformal Social Choice, a post-hoc decision layer that converts debate outputs into calibrated act-versus-escalate decisions. Verbalized probability distributions from heterogeneous agents are aggregated via a linear opinion pool and calibrated with split conformal prediction, yielding prediction sets with a marginal coverage guarantee: the correct answer is included with probability $\geq 1-\alpha$, without assumptions on individual model calibration. A hierarchical action policy maps singleton sets to autonomous action and larger sets to human escalation. On eight MMLU-Pro domains with three agents (Claude Haiku, DeepSeek-R1, Qwen-3 32B), coverage stays within 1--2 points of the target. The key finding is not that debate becomes more accurate, but that the conformal layer makes its failures actionable: 81.9% of wrong-consensus cases are intercepted at $\alpha = 0.05$. Because the layer refuses to act on cases where debate is confidently wrong, the remaining conformal singletons reach 90.0--96.8% accuracy (up to 22.1pp above consensus stopping) -- a selection effect, not a reasoning improvement. This safety comes at the cost of automation, but the operating point is user-adjustable via $\alpha$.

Summary

  • The paper presents a novel Conformal Social Choice (CSC) framework that converts multi-agent debate outputs into calibrated act-escalate decisions via conformal prediction.
  • It aggregates verbalized probability distributions from diverse LLM agents using a linear opinion pool, ensuring marginal coverage through split conformal calibration.
  • Empirical results on MMLU-Pro show that CSC intercepts 81.9% of wrong-consensus errors while achieving high accuracy on automated decisions in low-ambiguity domains.

Conformal Social Choice: Reliable Act-Escalate Decisions in Multi-Agent Debate

Motivation and Problem Statement

Multi-agent debate protocols for LLM ensembles yield accuracy gains through iterative reasoning and information aggregation. However, consensus among agents is not indicative of correctness—agents may converge to confident but incorrect answers via social reinforcement, leading to wrong-consensus failures. Standard consensus-based or majority-vote deployment pipelines commit to automated action once agents agree, with no mechanism to flag such errors or escalate uncertain cases for human review.

The primary operational gap addressed in "From Debate to Decision: Conformal Social Choice for Safe Multi-Agent Deliberation" (2604.07667) is the absence of a calibrated refusal mechanism: distinguishing when to act (automation) versus when to escalate (human-in-the-loop) using only aggregate debate outputs, without model retraining or internal access.

Pipeline Architecture and Theoretical Guarantees

The Conformal Social Choice (CSC) framework formalizes act-versus-escalate decisions as a post-hoc, black-box layer atop multi-agent debate (Figure 1):

Figure 1: The CSC pipeline transforms multi-agent debate outputs into operational decisions, guaranteeing marginal coverage via conformal calibration.

  1. Verbalized Probability Elicitation: Heterogeneous LLM agents engage in $T$-round debate, outputting explicit option-wise probability distributions via structured prompts. These verbalized scores are subject to parsing and normalization, enabling downstream probabilistic aggregation.
  2. Social Probability Aggregation: Agent forecasts are combined via the linear opinion pool—uniformly weighted unless otherwise specified—resulting in a normalized ensemble probability over all answer options. This aggregation preserves the intensity of agent preferences beyond simple majority voting.
  3. Conformal Calibration (Split Conformal Prediction): Given a held-out calibration set, the ensemble output is converted to nonconformity scores $s(x, y) = 1 - P_{\text{social}}(y \mid x)$. The conformal threshold $q$ is set to the $(1-\alpha)$ quantile of calibration scores, guaranteeing that prediction sets $C(x)$ achieve coverage $\Pr[y^* \in C(x)] \ge 1 - \alpha$ under exchangeability, without requiring per-model calibration.
  4. Hierarchical Action Policy: Prediction sets are mapped to operational actions:
    • Singleton ($|C(x)| = 1$): automated decision.
    • Larger set ($|C(x)| > 1$): escalate to human review, restricting the candidate pool.
    • Empty set: anomaly flagging.

The procedure reframes multi-agent debate from a point-estimate task to a calibrated-decision problem, aligning operational automation with risk control.
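
To make these steps concrete, here is a minimal sketch of the computational core (opinion pooling, split conformal calibration, and the action policy), assuming agent distributions arrive as NumPy arrays after parsing; all names are illustrative rather than taken from the paper's code.

```python
import numpy as np

def linear_opinion_pool(agent_probs: np.ndarray) -> np.ndarray:
    """Uniformly weighted linear opinion pool.

    agent_probs: (n_agents, n_options) verbalized distributions,
    one row per agent after parsing and normalization.
    """
    pooled = agent_probs.mean(axis=0)
    return pooled / pooled.sum()  # renormalize for safety

def calibrate_threshold(cal_probs: np.ndarray, cal_labels: np.ndarray,
                        alpha: float) -> float:
    """Split conformal calibration on a held-out set.

    cal_probs: (n_cal, n_options) pooled social distributions.
    cal_labels: (n_cal,) indices of the correct options.
    Returns the threshold q as the finite-sample-corrected
    (1 - alpha) quantile of the nonconformity scores.
    """
    n = len(cal_labels)
    # Nonconformity: s(x, y) = 1 - P_social(y | x)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(scores, level, method="higher"))

def prediction_set(social_probs: np.ndarray, q: float) -> np.ndarray:
    """Options whose nonconformity score falls at or below q."""
    return np.where(1.0 - social_probs <= q)[0]

def act_or_escalate(pred_set: np.ndarray) -> str:
    """Hierarchical action policy on the prediction-set size."""
    if len(pred_set) == 1:
        return "act"       # singleton: autonomous decision
    if len(pred_set) > 1:
        return "escalate"  # ambiguous: human review over the set
    return "flag"          # empty set: anomaly
```

Under exchangeability of calibration and test points, sets built this way contain the correct option with probability at least $1-\alpha$, regardless of how well any individual agent is calibrated.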

Empirical Analysis on MMLU-Pro

The authors evaluate CSC on MMLU-Pro, a challenging 10-option, 8-domain professional benchmark. The agent ensemble includes Claude Haiku, DeepSeek-R1, and Qwen-3 32B, ensuring model diversity.

Coverage Calibration and Uncertainty Quantification

Empirical coverage closely tracks the user-specified rate (e.g., operating at $\alpha=0.05$, realized coverage is within 1–2% of 95% across domains), validating the theoretical guarantees in practice (Figure 2).

Figure 2: CSC maintains calibrated coverage across rounds and domains, with average set size adapting to domain difficulty.
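
Checking this empirically is straightforward given a labeled test split; a hypothetical snippet reusing the helpers sketched above:

```python
# Hypothetical coverage check; test_probs, test_labels, and q are
# placeholders for a labeled test split and the calibrated threshold.
test_sets = [prediction_set(p, q) for p in test_probs]
coverage = np.mean([y in s for y, s in zip(test_labels, test_sets)])
avg_size = np.mean([len(s) for s in test_sets])
singletons = np.mean([len(s) == 1 for s in test_sets])
print(f"coverage={coverage:.3f}  avg|C|={avg_size:.2f}  "
      f"singleton rate={singletons:.1%}")  # expect coverage near 1 - alpha
```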

Crucially, the average prediction set size adapts to task hardness: in unambiguous domains (Math), singleton rates reach 98.2%, enabling nearly full automation; ambiguous domains (Law) yield set sizes ≈7, prompting high rates of human escalation. This reflects genuine epistemic uncertainty, not system conservatism.

Debate as an iterative process reduces uncertainty: more rounds monotonically decrease the average prediction set size while preserving population coverage.

Failure Modes of Consensus-Based Stopping

Consensus-based stopping is fast but fundamentally unreliable. In the test set, 23.9% of initially disputed cases converge to a unanimous but incorrect consensus. Within high-stakes domains (e.g., Law, Psychology), this convergent error rate exceeds 33% (Figure 3).

Figure 3: Without calibrated refusal, consensus stopping commits to high error rates. CSC dramatically reduces error at the cost of automation, with the trade-off adjustable via $\alpha$.

This exposes a systemic vulnerability: uncalibrated automation driven by superficial agent agreement yields substantial unflagged errors in deployment.

CSC as a Safety Filter: Selective Automation

The most striking result is that CSC's conformal refusal mechanism intercepts 81.9% of wrong-consensus errors at $\alpha = 0.05$. The remaining automatically resolved instances (conformal singletons) achieve 90.0–96.8% accuracy, a strong positive selection effect. Singleton accuracy gains reach 22.1 percentage points over consensus stopping in high-risk domains (e.g., Law), but this is a statistical filtering effect—CSC is not improving underlying agent reasoning (Figure 4).

Figure 4: In practice, CSC forces escalation of cases where debate reaches confident but incorrect consensus, preventing silent failures.

The trade-off is a decrease in the automation rate, particularly in ambiguous domains. This is both principled and operationally adjustable via the $\alpha$ parameter.
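
Since $\alpha$ is the only dial, an operating point can be chosen by sweeping it on held-out data and reading off the automation rate alongside singleton accuracy; an illustrative sketch (variable names hypothetical):

```python
# Illustrative alpha sweep: smaller alpha buys safety at the cost of
# automation; larger alpha automates more cases at higher risk.
for alpha in (0.01, 0.05, 0.10, 0.20):
    q = calibrate_threshold(cal_probs, cal_labels, alpha)
    sets = [prediction_set(p, q) for p in test_probs]
    auto = [i for i, s in enumerate(sets) if len(s) == 1]
    rate = len(auto) / len(sets)
    acc = (np.mean([test_labels[i] == sets[i][0] for i in auto])
           if auto else float("nan"))
    print(f"alpha={alpha:.2f}  automation={rate:.1%}  singleton acc={acc:.1%}")
```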

Error Analysis

Failures introduced by CSC (i.e., incorrectly automated singletons not present under consensus-based stopping) are extremely rare (0.05% of cases at $\alpha = 0.05$), yielding a net error-prevention ratio of 240:1. This underscores CSC's selectivity: over-rejection of correct-consensus cases (abstaining when unnecessary) is the principal source of inefficiency, not risk.
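
Reading the ratio as errors prevented per error introduced (our notation, not the paper's), with $E_{\text{prev}}$ the wrong-consensus errors caught by escalation and $E_{\text{new}}$ the singleton errors CSC itself introduces:

$$\frac{E_{\text{prev}}}{E_{\text{new}}} = 240, \qquad E_{\text{new}} \approx 0.05\% \text{ of cases} \;\Rightarrow\; E_{\text{prev}} \approx 12\% \text{ of cases},$$

assuming both counts are taken over the same case population.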

Implications and Future Directions

Practically, CSC provides a rigorous and domain-adaptive mechanism for triaging LLM ensemble outputs in safety- and cost-sensitive applications, reducing the operational risk of over-automation. The method is black-box and post-hoc, requiring only predicted probabilities and a calibration split.

The theoretical guarantee is marginal, not conditional; it calibrates risk over the population rather than per-instance or per-subgroup. Conditional coverage, adaptation to non-stationary environments (e.g., via online conformal methods), and extension to open-ended generation remain open problems.

Theoretically, CSC formalizes a robust operational contract for multi-agent deliberation, unifying social choice, uncertainty quantification, and calibrated refusal in a single deployable pipeline. Its abstraction as a meta-layer atop ensemble outputs suggests broad applicability across models, agent aggregation schemes, and task settings.

Conclusion

CSC robustly mitigates the risks of automated pipeline deployment for multi-agent LLM ensembles by converting model consensus into distribution-free, coverage-controlled act-escalate decisions. By intercepting the majority of wrong-consensus cases while preserving high accuracy on automatically resolved instances, CSC offers operationally tunable safety as a default deployment layer. Future work should address conditional guarantees, continuous adaptation, and broader task families, but CSC establishes a strong baseline for reliable large-scale automated reasoning.
