
Neighbor-Consistency Belief (NCB)

Updated 16 January 2026
  • Neighbor-Consistency Belief (NCB) is a formal measure assessing the coherence of an agent's beliefs by comparing predictions among conceptual or feature-space neighbors across domains.
  • NCB is operationalized via logical models, probabilistic weight assignments, and regularization losses to enforce prediction agreement, improving robustness in LLMs and supervised learning.
  • Empirical evaluations show that NCB enhances model accuracy and reliability, such as improving CIFAR-10 accuracy from ~68% to over 94% under noisy conditions.

Neighbor-Consistency Belief (NCB) is a formal, structural notion of belief coherence that arises in multiple domains. Its mathematical instantiations differ, but they share a unifying theme: an agent’s belief, or a model’s prediction, is evaluated or regularized not in isolation but by how consistent it is with the beliefs or outputs associated with its conceptual or feature-space neighbors. This construct appears in modal logic, robust machine learning, and the evaluation of LLMs, providing rigorous tools for quantifying trustworthiness, robustness, and logical soundness.

1. Formal Semantics: Logical and Probabilistic Foundations

The logical foundation of Neighbor-Consistency Belief traces to conditional neighborhood models and neighborhood logics for belief and knowledge, as described by van Eijck and Li (Eijck et al., 2017). The language $\mathcal{L}_{\mathrm{CB}}$ introduces formulas for conditional belief:

$$\mathcal{L}_{\mathrm{CB}} ::= p \mid \top \mid \neg\phi \mid \phi \wedge \psi \mid B_a(\phi \mid \psi)$$

where $B_a(\phi \mid \psi)$ denotes “agent $a$ believes $\phi$ assuming $\psi$.” The semantics are given by conditional neighborhood models $(W, N, V)$, where $W$ is a set of worlds, $N$ assigns to each agent $a$, world $w$, and $X \subseteq W$ a family $N_a^w(X)$ of subsets, and $V$ is a valuation. Truth for $B_a(\phi \mid \psi)$ holds if there is some $Y \in N_a^w(\llbracket \psi \rrbracket)$ such that $Y \subseteq \llbracket \phi \rrbracket$. Four central “neighbor-consistency” conditions govern the semantics: compatibility with knowledge, equivalence of conditions, determinacy, and strong commitment. These guarantee properties such as monotonicity and no-inconsistency.
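This truth condition is straightforward to mechanize. Below is a minimal sketch of model-checking $B_a(\phi \mid \psi)$ in a finite conditional neighborhood model; the toy model, function name, and world encodings are my own illustrative construction, not taken from Eijck et al.:

```python
# Checking B_a(phi | psi) in a finite conditional neighborhood model (W, N, V).
# N maps (agent, world, frozenset-of-psi-worlds) -> a family of subsets of W.

def holds_conditional_belief(N, agent, world, psi_worlds, phi_worlds):
    """True iff some Y in N_a^w([[psi]]) satisfies Y <= [[phi]]."""
    family = N.get((agent, world, frozenset(psi_worlds)), [])
    return any(Y <= set(phi_worlds) for Y in family)

# Toy model: three worlds; agent 'a' at world 0 conditions on psi = {0, 1}.
N = {('a', 0, frozenset({0, 1})): [frozenset({0})]}
phi = {0, 2}   # worlds where phi holds
psi = {0, 1}   # worlds where psi holds

print(holds_conditional_belief(N, 'a', 0, psi, phi))  # frozenset({0}) <= {0, 2}
```

Because the neighborhood family is indexed by the full extension of the condition $\psi$, changing $\llbracket \psi \rrbracket$ can change which belief sets are available, which is exactly what the conditional semantics requires.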

A connection to probability is afforded via epistemic weight models, which embed belief as comparative weight rather than precise probability. In weighted Kripke models, $B_a(\phi \mid \psi)$ is true at $w$ iff the sum of weights for worlds in $a$’s cell where both $\phi$ and $\psi$ hold exceeds the sum for worlds where $\neg\phi \wedge \psi$ holds, connecting neighbor-consistency with Bayesian and plausibility logics (Eijck et al., 2017).
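The weight-model truth condition reduces to a comparison of two sums. A minimal sketch, with toy cells and weights of my own choosing rather than an example from the paper:

```python
# Weight-model semantics: B_a(phi | psi) holds at w iff the total weight of
# phi-and-psi worlds in a's epistemic cell exceeds that of (not-phi)-and-psi
# worlds. Weights are comparative, not normalized probabilities.

def weighted_belief(cell, weight, phi, psi):
    w_for = sum(weight[u] for u in cell if u in phi and u in psi)
    w_against = sum(weight[u] for u in cell if u not in phi and u in psi)
    return w_for > w_against

cell = {0, 1, 2}                    # a's cell at the evaluation world
weight = {0: 3.0, 1: 1.0, 2: 1.0}   # comparative weights
print(weighted_belief(cell, weight, phi={0}, psi={0, 1}))  # 3.0 > 1.0 -> True
```

Normalizing the weights over the cell recovers a conditional-probability reading, which is the sense in which this embeds into Bayesian and plausibility logics.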

2. NCB as a Measure in LLMs

In LLMs, Neighbor-Consistency Belief quantifies the robustness of model predictions to contextual perturbations at the level of factual knowledge (Xu et al., 9 Jan 2026). Formally, for a target prompt $q^*$ with gold entity $\mathcal{E}^*$, a “conceptual neighborhood” is constructed as $m$ related prompts $\{(q_i, a_i)\}$, each probing attributes, logical implications, or thematic associations involving $\mathcal{E}^*$. For each prompt $q$, the empirical frequency of correct responses $\hat{p}(\hat{a} = a \mid q)$ is estimated across samples.

The NCB score is defined as:

$$\mathcal{S}_{\mathrm{NCB}} = \hat{p}(\hat{\mathcal{E}}^* = \mathcal{E}^* \mid q^*) \times \prod_{i=1}^m \Bigl[ \hat{p}(\hat{a}_i = a_i \mid q_i) \Bigr]^{1/m}$$

where the geometric mean normalizes across neighborhood size. This compositional, neighborhood-based metric reflects both the model’s accuracy on the target fact and its conformity on semantically related facts, distinguishing robust, structured beliefs ($S_{\mathrm{struct}}$) from brittle, unstructured ones ($S_{\mathrm{unstruct}}$). The metric is motivated by Bayes-optimal odds favoring structured belief when neighbor correctness rates are high.
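Given the estimated correctness frequencies, the score is a one-liner. A sketch with hypothetical frequencies (the numbers are illustrative, not taken from Xu et al.):

```python
import math

def ncb_score(p_target, neighbor_ps):
    """S_NCB = p(target correct) * geometric mean of neighbor correctness.

    Assumes all frequencies are in (0, 1]; a zero neighbor frequency
    drives the score to zero, reflecting a broken neighborhood.
    """
    m = len(neighbor_ps)
    geo_mean = math.prod(neighbor_ps) ** (1.0 / m)
    return p_target * geo_mean

# Hypothetical sampling estimates: target fact answered correctly 90% of
# the time; three neighbor prompts at 80%, 90%, and 70%.
print(ncb_score(0.9, [0.8, 0.9, 0.7]))
```

Using a geometric rather than arithmetic mean means a single unreliable neighbor drags the score down sharply, which matches the intent of penalizing unstructured belief.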

3. Neighbor-Consistency Regularization in Machine Learning

In robust supervised learning, especially under noisy labels, Neighbor-Consistency Belief (termed Neighbor Consistency Regularization, NCR) is operationalized as an additional regularization loss enforcing prediction agreement among feature-space neighbors (Iscen et al., 2022). Given network parameters $(\theta, W)$, each input $x_i$ is mapped to a feature $v_i = g_\theta(x_i)$ and logits $z_i = h_W(v_i)$. Cosine similarity $s_{i,j}$ defines the $k$-nearest neighbors $N_k(i)$. The neighbor-consistency loss is

$$\mathcal{L}_{\mathrm{NCB}}(X; \theta, W) = \frac{1}{m} \sum_{i=1}^m D_{\mathrm{KL}}\Bigl( \sigma(z_i / T) \;\Big\|\; \sum_{j \in N_k(i)} w_{i,j} \, \sigma(z_j / T) \Bigr)$$

Here, $w_{i,j}$ are normalized cosine similarities and $T$ is a softmax temperature. The total training objective is a weighted sum of the supervised cross-entropy $\mathcal{L}_S$ and the neighbor-consistency loss:

$$\mathcal{L} = (1 - \alpha)\, \mathcal{L}_S + \alpha\, \mathcal{L}_{\mathrm{NCB}}$$

Hyperparameters $\alpha$, $k$, $T$, and the number of warm-up epochs $e$ control the balance and stability. This framework can be interpreted as the inductive analog of classical label propagation, generalized to deep feature spaces and applied per mini-batch within SGD (Iscen et al., 2022).
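The loss above can be sketched batch-wise in NumPy. This is an illustrative reimplementation under stated assumptions (arbitrary hyperparameters, self excluded from neighborhoods, negative similarities clipped to zero), not the authors’ published code, which runs inside a differentiable deep-learning framework:

```python
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - np.max(z / T, axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ncr_loss(feats, logits, k=2, T=1.0, eps=1e-12):
    """Batch-wise neighbor-consistency loss L_NCB (NumPy sketch)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T                       # cosine similarities s_ij
    np.fill_diagonal(sim, -np.inf)      # exclude self from neighborhoods
    nn = np.argsort(-sim, axis=1)[:, :k]  # indices of k nearest neighbors
    p = softmax(logits, T)              # sigma(z_i / T)
    loss = 0.0
    for i in range(len(feats)):
        w = np.clip(sim[i, nn[i]], 0.0, None)
        w = w / (w.sum() + eps)                   # normalized weights w_ij
        q = (w[:, None] * p[nn[i]]).sum(axis=0)   # neighbor target distribution
        loss += np.sum(p[i] * (np.log(p[i] + eps) - np.log(q + eps)))  # KL
    return loss / len(feats)

def total_loss(ce_loss, ncb_loss, alpha=0.5):
    """L = (1 - alpha) * L_S + alpha * L_NCB."""
    return (1.0 - alpha) * ce_loss + alpha * ncb_loss
```

If every example’s prediction already matches its neighbors’ weighted average, the KL term vanishes; the regularizer only pushes on examples that disagree with their feature-space neighborhood, which is how it suppresses fitting to isolated noisy labels.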

4. Construction and Implementation Methods

Methods for quantifying or enforcing NCB are domain-dependent:

  • Logical Models: The formal calculation of belief entails constructing conditional neighborhood models and checking the neighbor-consistency semantic conditions, often via canonical models and reduction axioms (Eijck et al., 2017).
  • LLM Evaluation: The conceptual neighborhood for a fact is generated through prompt-based LLM templating and automated filters, covering “Entity Prerequisite,” “Logical Implication,” and “Thematic Association” classes. Filters include syntactic, blind-solver, and web retrieval checks, with final expert majority-vote verification (Xu et al., 9 Jan 2026).
  • Supervised Learning: In practice, the feature-wise similarity matrix is computed within each mini-batch. Nearest neighbors are selected by sorting similarities, targets are formed from the neighbors’ predicted distributions, and the KL divergence is backpropagated; beyond standard training, only an extra similarity-matrix operation and regularization term are added. Warm-up epochs without neighbor-consistency stabilize feature extraction before the regularizer is applied (Iscen et al., 2022).

5. Empirical Evaluation and Observed Impact

Empirical results corroborate the effectiveness of NCB across domains:

  • LLMs: Experiments under adversarial “cognitive stress-testing” show that facts with high NCB scores are significantly more robust to both social (peer answer) and authority (source credibility) interference. For instance, under peer quantity stress, Qwen2.5 exhibits a 25.7% accuracy drop for low-NCB versus 16.0% for high-NCB; similar trends are observed across models and stress regimes. The Structure-Aware Training (SAT) objective, which distills context-invariant knowledge from teacher to student, reduces robustness gaps by ≈30% on long-tail facts (Xu et al., 9 Jan 2026).
  • Supervised Classification: In datasets such as CIFAR-10/100 with synthetic label noise up to 80%, Clothing1M (real noisy labels), and web-noise scenarios, NCR consistently improves accuracy and robustness. For example, on CIFAR-10 at 40% label noise, baseline accuracy is ≃68.3%, rising above 94% with NCB plus mixup and ELR. On Clothing1M (1M real noisy labels), NCB increases accuracy from ≃71.7% to ≃74.4%. The method is computationally efficient, requiring only an extra KL-divergence loss and similarity-matrix multiplication per batch (Iscen et al., 2022).

A summary of empirical outcomes:

Setting                Without NCB             With NCB (+modules)      Domain
CIFAR-10 @ 40% noise   ~68% acc.               >94% acc.                Vision
Clothing1M (real)      ~72% acc.               ~74–75% acc.             Vision
Qwen2.5, peer stress   25.7% drop (low-NCB)    16.0% drop (high-NCB)    LLM

6. Theoretical Insights and Connections

Neighbor-Consistency Belief is closely related to the classical label propagation smoothness term, and its formalizations are consistent with the structure of conditional beliefs in modal logic. In machine learning, NCB’s inductive regularization offers robustness without requiring global graphs or test-time neighborhood propagation. In LLMs, NCB is justified by a Bayesian odds-ratio argument distinguishing structured from unstructured latent beliefs. In modal logic, NCB’s semantic conditions extend earlier neighborhood logics by ensuring closure under public announcement update and supporting compatibility with standard probabilistic models, up to completeness gaps addressed by additional axioms (e.g., Savage’s Sure-Thing Principle) (Eijck et al., 2017).

7. Strengths, Limitations, and Applications

NCB provides structural, not merely pointwise, evaluation of belief or model confidence:

  • Strengths: Predicts robustness under context perturbations, generalizes across tasks (classification, factuality), and is compatible with off-the-shelf models.
  • Limitations: Construction of neighborhoods can incur human and computational overhead; current approaches focus on time-invariant factual knowledge, and the assumption of conditional independence may be violated in some scenarios (Xu et al., 9 Jan 2026).
  • Applications: Automated QA evaluation, risk management in retrieval-augmented generation, quality assurance in label-noise robust learning, and continual-learning curricula prioritizing tightly integrated knowledge.

A plausible implication is that high NCB can serve as a reliability filter in pipeline systems where robustness to context shifts and adversarial interventions is critical.

