
Comparative Knowledge Assertions

Updated 13 December 2025
  • Comparative Knowledge Assertions are formal frameworks that compare the epistemic capacities of agents, groups, or datasets using advanced modal operators.
  • They integrate modal logic, natural language semantics, and knowledge graph querying to model and infer relative information access and consistency.
  • They underpin scalable, dynamic algorithms that assess knowledge transfer and infer epistemic hierarchy in multi-agent systems.

Comparative knowledge assertions rigorously formalize the act of relating knowledge states—of individuals, groups, or datasets—along a spectrum of epistemic or conceptual power. These assertions appear in epistemic logic, natural language semantics, knowledge graph querying, and machine inference systems. Central themes include the comparison of knowledge potential among agent groups, logical structures supporting comparative statements, inference systems for comparative expressions in language, and algorithmic mechanisms for comparative reasoning at scale.

1. Logical Foundations of Comparative Knowledge Assertions

The comparative-knowledge framework extends standard modal logic with operators that enable the direct comparison of epistemic capacities. In the language $\mathcal{L}_{DC\preceq}$, the operator $A\preceq B$ expresses "group $A$ knows at least what group $B$ knows"—a global modal assertion about the structure of agents' accessibility relations, not merely about the truth of specific formulas (Alexandru et al., 6 Dec 2025).

A $KT$-model $(S,\{R_a\}_{a\in\mathcal{A}},V)$ provides the semantics, where the $R_a$ are agent accessibility relations (reflexive for $KT$, preorders for $S4$, equivalence relations for $S5$). For any $A \subseteq \mathcal{A}$:

  • $R_A := \bigcap_{a\in A} R_a$ (distributed knowledge)
  • $R^A := \left(\bigcup_{a\in A} R_a\right)^*$ (common knowledge via reflexive-transitive closure)

The truth condition for comparative assertions:

$$M,w\models A\preceq B \iff \forall s\in S:\; w R_A s \Longrightarrow w R_B s$$

This encodes a global inclusion between the "knowledge powers" of groups, strictly beyond the scope of standard $K_a$-only modal logic (Alexandru et al., 6 Dec 2025).
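The truth condition is directly checkable on finite models. The following is a minimal sketch (the model layout and names are illustrative, not from the cited paper) that represents a Kripke model as per-agent accessibility relations and evaluates $A\preceq B$ at a world by testing inclusion of the groups' distributed-knowledge successors.

```python
# Minimal sketch: checking A ⪯ B ("A knows at least what B knows") at a world w.
# A model is a dict mapping each agent to a set of (world, world) pairs.

def distributed_relation(model, group):
    """R_group: intersection of the accessibility relations of the agents in `group`."""
    rels = [model[a] for a in group]
    return set.intersection(*rels)

def holds_preceq(model, w, group_a, group_b):
    """M, w |= A ⪯ B  iff  every s with w R_A s also satisfies w R_B s."""
    r_a = distributed_relation(model, group_a)
    r_b = distributed_relation(model, group_b)
    successors_a = {s for (u, s) in r_a if u == w}
    successors_b = {s for (u, s) in r_b if u == w}
    return successors_a <= successors_b

# Toy S5-style model with worlds {0, 1, 2}: agent "a" distinguishes all worlds,
# agent "b" cannot tell worlds 1 and 2 apart.
model = {
    "a": {(0, 0), (1, 1), (2, 2)},
    "b": {(0, 0), (1, 1), (2, 2), (1, 2), (2, 1)},
}

print(holds_preceq(model, 1, ["a"], ["b"]))        # True: a knows at least what b knows
print(holds_preceq(model, 1, ["b"], ["a"]))        # False
print(holds_preceq(model, 1, ["a", "b"], ["b"]))   # True: instance of the Inclusion axiom
```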

Key modal axioms for \preceq include:

  • Inclusion: If $B\subseteq A$, then $A\preceq B$
  • Additivity: $(A\preceq B \wedge A\preceq C) \to A\preceq (B\cup C)$
  • Transitivity: $(A\preceq B \wedge B\preceq C)\to A\preceq C$
  • Knowledge Transfer: $A\preceq B \to (K_B\varphi \to K_A\varphi)$

This apparatus supports internal reasoning about comparative epistemic status, such as known superiority ($A \prec B \to K_A(A \prec B)$ in $S5$) and equivalence ($A \equiv B$ iff $A\preceq B \wedge B\preceq A$) (Alexandru et al., 6 Dec 2025).
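As a sanity check, the Knowledge Transfer schema follows in one step from the inclusion semantics above; the short derivation below is a sketch in the section's notation, not quoted from the cited paper.

```latex
\begin{align*}
&\text{Assume } M,w \models A \preceq B \text{ and } M,w \models K_B\varphi.\\
&\text{Take any } s \text{ with } w R_A s.\ \text{By } A \preceq B,\ w R_B s.\\
&\text{Since } M,w \models K_B\varphi,\ M,s \models \varphi.\\
&\text{As } s \text{ was arbitrary, } M,w \models K_A\varphi,\ \text{so } K_B\varphi \to K_A\varphi.
\end{align*}
```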

2. Comparative Reasoning in Natural Language Semantics

Comparative assertions are central to compositional semantic frameworks capable of supporting inference in natural language. CCG-based systems encode gradable adjectives, comparatives, and quantifiers with explicit degree arguments in their logical forms (Haruta et al., 2019, Haruta et al., 2020).

A comparative such as "Mary is taller than Harry" is encoded as:

$$\exists\delta\, [\mathrm{tall}(m,\delta)\wedge \neg \mathrm{tall}(h,\delta)]$$

Chaining rules enable transitive inference ("if Mary > Harry and Harry > Tom, then Mary > Tom"), while monotonicity, antonym reversal (tall/short), and quantifier-scope mechanisms generalize the framework to complex constructions with numerals and logical quantifiers (Haruta et al., 2019, Haruta et al., 2020).
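The degree-based ("A-not-A") analysis and its chaining behavior can be made concrete in a small sketch. The heights, predicate names, and finite degree grid below are illustrative assumptions, not taken from the cited systems, which derive these logical forms from CCG parses.

```python
# Sketch of the A-not-A degree analysis: "x is taller than y" holds iff
# there is a degree d such that tall(x, d) and not tall(y, d).

heights = {"mary": 170, "harry": 165, "tom": 160}   # hypothetical degree assignments

def tall(x, d):
    """tall(x, d): x's height reaches degree d."""
    return heights[x] >= d

def taller_than(x, y, degrees=range(0, 300)):
    """∃δ [tall(x, δ) ∧ ¬tall(y, δ)], checked over a finite grid of degrees."""
    return any(tall(x, d) and not tall(y, d) for d in degrees)

assert taller_than("mary", "harry")
assert taller_than("harry", "tom")
assert taller_than("mary", "tom")   # transitive chaining: Mary > Harry > Tom ⟹ Mary > Tom
```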

High empirical accuracy is achieved on established benchmarks (e.g., 94% on FraCaS comparatives, exceeding prior logic-based and neural baselines) (Haruta et al., 2019). Such systems pursue full transparency in comparative inference, formalizing patterns like equatives ("as...as") and differential comparatives ("n times as tall as") within degree-theoretic semantics.

3. Machine Knowledge Graphs and Algorithmic Comparative Reasoning

In large-scale knowledge graphs, comparative knowledge and reasoning support both entity-level and group-level queries. Frameworks such as KompaRe introduce "comparative reasoning," defined as identifying and quantifying commonalities and inconsistencies between knowledge segments (KSs), each representing a tightly connected subgraph summarizing one clue (Liu et al., 2020).

For two KSs, overlap and consistency are computed using random-walk graph kernels and influence functions, while inconsistency detection relies on structural features (such as lacking "isTypeOf" paths). The comparative reasoning process, designed for efficiency and scalability, supports queries ranging from pairwise comparisons to collective consistencies across subgraph query sets, maintaining strict polynomial time scaling in subgraph size (Liu et al., 2020).
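A simplified illustration of the kernel-based comparison (a sketch, not the KompaRe implementation or its API): two small knowledge segments, each a set of labeled triples, are compared by counting decayed common walks on their direct-product graph, which is what a random-walk graph kernel measures.

```python
# Simplified random-walk kernel between two knowledge segments (KSs),
# each given as a set of labeled triples (subject, predicate, object).

def product_edges(ks1, ks2):
    """Edges of the direct-product graph: pairs of triples sharing a predicate."""
    edges = []
    for (s1, p1, o1) in ks1:
        for (s2, p2, o2) in ks2:
            if p1 == p2:
                edges.append(((s1, s2), (o1, o2)))
    return edges

def random_walk_kernel(ks1, ks2, max_len=3, decay=0.5):
    """Decayed count of common walks up to length `max_len` in the product graph."""
    adj = {}
    for u, v in product_edges(ks1, ks2):
        adj.setdefault(u, []).append(v)
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    counts = {n: 1.0 for n in nodes}          # walks of length 0 ending at each node
    kernel = 0.0
    for step in range(1, max_len + 1):
        new_counts = {n: 0.0 for n in nodes}
        for u, vs in adj.items():
            for v in vs:
                new_counts[v] += counts[u]     # extend each walk ending at u by edge u -> v
        counts = new_counts
        kernel += (decay ** step) * sum(counts.values())
    return kernel

# Hypothetical knowledge segments: higher kernel values indicate more structural overlap.
ks1 = {("alice", "worksAt", "acme"), ("acme", "isTypeOf", "company")}
ks2 = {("alice", "worksAt", "acme"), ("acme", "isTypeOf", "charity")}
print(random_walk_kernel(ks1, ks2))
```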

Similarly, eSPARQL extends standard SPARQL querying of RDF-star graphs with a four-valued epistemic annotation ($\{\top,\bot,u,c\}$: true, false, unknown, conflicted) and specialized operators to compare, aggregate, and nest beliefs across multiple agents or sources (Pan et al., 31 Jul 2024). The BELIEF FROM clause enables isolating and comparing agent subcontexts, supporting queries such as "which agents' beliefs conflict on a given statement" or "aggregate all group beliefs on a claim," operationalized over the information and truth bilattices.
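The four-valued annotation can be captured by a small join over the information ordering of the bilattice; the sketch below is illustrative Python, not eSPARQL syntax, and the agent names are hypothetical. Agreement preserves a value, missing information contributes $u$, and disagreement between $\top$ and $\bot$ yields $c$.

```python
# Illustrative four-valued aggregation over the information ordering
# (u below ⊤ and ⊥, both below c). Not eSPARQL syntax.

TRUE, FALSE, UNKNOWN, CONFLICT = "T", "F", "u", "c"

def info_join(x, y):
    """Least upper bound in the information ordering of the bilattice."""
    if x == y:
        return x
    if x == UNKNOWN:
        return y
    if y == UNKNOWN:
        return x
    return CONFLICT   # ⊤ combined with ⊥ (or anything with c) is conflicted

def aggregate(beliefs):
    """Combine one statement's annotations from several agents or sources."""
    result = UNKNOWN
    for value in beliefs.values():
        result = info_join(result, value)
    return result

# Hypothetical agents disagreeing on one statement: the group annotation is conflicted.
beliefs = {"agent1": TRUE, "agent2": FALSE, "agent3": UNKNOWN}
print(aggregate(beliefs))   # 'c'
```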

4. Computation and Inference in Large-Scale Comparative KBs

NeuroComparatives exemplifies neuro-symbolic large-scale comparative knowledge extraction by combining LLM overgeneration (GPT-2 XL, LLaMA 2-7b, InstructGPT, GPT-4) with stringent statistical and symbolic filtering steps (Howard et al., 2023). Each assertion is templated as a quadruple:

(entity₁, entity₂, relation, template)

The pipeline—"overgenerate, filter, distill"—uses neuro-symbolic constraints, deduplication via Sentence-T5 embeddings, contradiction filtering with NLI models, and supervised discrimination. The resulting corpus (up to 8.8M unique comparatives) shows substantial gains in human-judged validity (up to +32% over WebChild), diversity (comparative adjective entropy: 7.9 bits vs. 6.1 for WebChild), and downstream QA performance (~91–93% vs. 61%) (Howard et al., 2023).

This demonstrates the feasibility of acquiring and validating massive comparative KBs for AI reasoning and NLI with carefully engineered neuro-symbolic architectures.
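The shape of the overgenerate-filter-distill loop can be summarized in a short skeleton. The names `generate_candidates`, `embed`, `contradicts`, and `sim_threshold` below are hypothetical placeholders for an LLM sampler, a sentence encoder, and an NLI contradiction check; this is a sketch of the general pattern, not the paper's code.

```python
# Sketch of an overgenerate-filter-distill loop for comparative assertions.
# The callables passed in are hypothetical stand-ins for the neural components.

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def distill(entity_pairs, generate_candidates, embed, contradicts, sim_threshold=0.9):
    kept = []          # accepted (entity1, entity2, relation, template) quadruples
    kept_vectors = []
    for e1, e2 in entity_pairs:
        for assertion in generate_candidates(e1, e2):      # overgenerate candidates
            vec = embed(assertion)
            if any(cosine(vec, v) >= sim_threshold for v in kept_vectors):
                continue                                    # near-duplicate: drop
            if any(contradicts(assertion, prior) for prior in kept):
                continue                                    # NLI contradiction: drop
            kept.append(assertion)
            kept_vectors.append(vec)
    return kept
```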

5. Dynamic and Comparative Epistemic Logics

Dynamic epistemic frameworks generalize comparative knowledge assertions to accommodate change—information sharing, database access, or even adversarial actions ("hacking") (Baltag et al., 2021). The static logic is augmented with dynamic modalities $[!\alpha]\varphi$, where $\alpha$ determines which agents read whose information, updating accessibility relations accordingly.

Reduction axioms ensure logical closure under dynamic updates, with completeness and decidability established via reduction to the static fragment. This enables reasoning about how comparative epistemic positions change after events, e.g., "after sharing, group BB overtakes CC in knowledge" or "newly acquired superiority is known" (Baltag et al., 2021).
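A minimal sketch of a reading update, under the simplifying assumption that reading another agent's information amounts to intersecting accessibility relations (the names and the exact update rule are illustrative, not the cited reduction axioms): after the update, comparative positions can be re-evaluated with a checker like the one in Section 1.

```python
# Illustrative update for a reading event !α in which every agent in `readers`
# reads the information of agent `source`: each reader's accessibility relation
# is refined to its intersection with the source's relation.

def read_update(model, readers, source):
    updated = dict(model)
    for agent in readers:
        updated[agent] = model[agent] & model[source]
    return updated

# Before the update, b is strictly less informed than a; after reading a, b catches up.
model = {
    "a": {(0, 0), (1, 1), (2, 2)},
    "b": {(0, 0), (1, 1), (2, 2), (1, 2), (2, 1)},
}
after = read_update(model, ["b"], "a")
print(after["b"])   # {(0, 0), (1, 1), (2, 2)}: b now distinguishes worlds 1 and 2
```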

Theoretical results show internalization properties—under $S5$, superior groups know their superiority; negative introspection is required for groups to know their inferiority. These comparative modalities enable precise formalization of phenomena like free-riding and joint epistemic superiority, which are not expressible in canonical $K_a$-modal logics (Alexandru et al., 6 Dec 2025, Baltag et al., 2021).

6. Extensions to Similarity and Awareness-Based Comparisons

Comparative assertions are not exclusive to knowledge; they structurally model similarity, as in Comparative Concept Similarity Logic (CSL) (0902.0899). Here, $A\prec B$ states "the closest $A$-models are closer than any $B$-models," formalized over minspaces (distance spaces with minimum guarantees), with a complete axiomatization and EXPTIME-complete tableau consistency checking.
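The minspace reading of $A\prec B$ can be illustrated over a finite distance assignment, where the required minima trivially exist; the world names and distances below are hypothetical.

```python
# Sketch of CSL's comparative over a finite minspace: distances from a reference
# point to each world, plus the sets of worlds satisfying each concept.

def strictly_closer(distance, a_worlds, b_worlds):
    """A ≺ B: the closest A-world is strictly closer than every B-world."""
    if not a_worlds:
        return False
    min_a = min(distance[w] for w in a_worlds)
    min_b = min(distance[w] for w in b_worlds) if b_worlds else float("inf")
    return min_a < min_b

distance = {"w1": 1.0, "w2": 2.5, "w3": 4.0}
print(strictly_closer(distance, a_worlds={"w1"}, b_worlds={"w2", "w3"}))  # True
```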

Awareness-based epistemic logics further nuance the notion of knowledge: implicit ($\Box_i$), explicit ($E_i$), and speculative ($S_i$) knowledge modalities model agents' awareness of propositions and the granularity of their epistemic reach (Ditmarsch et al., 2013). Comparative assertions in these logics are bound to bisimulation invariants for awareness sets, and dynamize via action models, with a proven expressivity collapse in dynamic settings.

7. Synthesis: Scope and Impact of Comparative Knowledge Frameworks

Comparative knowledge assertions unify diverse research avenues: modal logic, formal semantics, large-scale knowledge representation, and belief aggregation. The comparative operator ($\preceq$ or variants) is semantically robust, axiomatically grounded, and algorithmically operational—a single primitive encompassing relative epistemic power across distributed, common, and dynamic knowledge scenarios.

This framework unlocks precise modeling of joint information capacity, belief conflict detection, subjectivity quantification, annotation of knowledge graph assertions, inference across scales (from propositional atoms to ontologies), and the internal epistemic landscape of multi-agent systems. The comparative approach is strictly more expressive than existing modal schemes in its ability to internalize statements about the structure (not just the content) of informational access, and it is fundamental to next-generation knowledge representation, reasoning, and explainable AI (Alexandru et al., 6 Dec 2025, Baltag et al., 2021, Pan et al., 31 Jul 2024, Howard et al., 2023).
