Incompleteness of AI Safety Verification via Kolmogorov Complexity

Published 6 Apr 2026 in cs.AI | (2604.04876v1)

Abstract: Ensuring that AI systems satisfy formal safety and policy constraints is a central challenge in safety-critical domains. While limitations of verification are often attributed to combinatorial complexity and model expressiveness, we show that they arise from intrinsic information-theoretic limits. We formalize policy compliance as a verification problem over encoded system behaviors and analyze it using Kolmogorov complexity. We prove an incompleteness result: for any fixed sound computably enumerable verifier, there exists a threshold beyond which true policy-compliant instances cannot be certified once their complexity exceeds that threshold. Consequently, no finite formal verifier can certify all policy-compliant instances of arbitrarily high complexity. This reveals a fundamental limitation of AI safety verification independent of computational resources, and motivates proof-carrying approaches that provide instance-level correctness guarantees.

Authors (1)

Summary

  • The paper demonstrates that for any fixed formal verifier, high-complexity AI instances remain unprovable due to intrinsic limits set by Kolmogorov complexity.
  • It employs a counting argument and Chaitin’s incompleteness principle to establish that expanding computational resources cannot overcome the verification gap.
  • The findings motivate a shift towards instance-level, proof-carrying verification methods, including zero-knowledge proofs, to enhance AI safety assurance.

Summary and Context

The paper "Incompleteness of AI Safety Verification via Kolmogorov Complexity" (2604.04876) delivers a precise formalization of fundamental information-theoretic constraints in AI safety verification. The work pivots from the mainstream narrative that verification limitations are predominantly computational or practical by establishing that the inability to universally certify AI policy compliance arises from the core properties of Kolmogorov complexity and the finitude of formal verification systems. The implications extend beyond resource constraints to the essence of policy verifiability in high-complexity systems.

Problem Formalization

The authors model AI behavior as a tuple x = ⟨z, y, Π⟩, where z encodes the input, y the system's output, and Π the formal policy specification. The binary encoding enables a predicate P(x) to capture policy compliance of a particular input-output-policy triple. This shifts the focus from classical property checking on model classes to instance-centric certification.

Formal verification is defined through computably enumerable proof systems: for each x, verification entails proving that P(x) holds within a fixed formal theory T. Kolmogorov complexity K(x) is used as the canonical measure of instance information content, following standard definitions over universal prefix-free Turing machines.
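This proof-search view of a computably enumerable verifier can be sketched in a few lines. The encoding, the stand-in proof checker, and the length bound below are all illustrative assumptions, not the paper's constructions; a real c.e. verifier would enumerate proofs of T without any bound.

```python
from itertools import product

def encode(z: str, y: str, policy: str) -> str:
    """Encode an (input, output, policy) triple x = <z, y, Pi> as one string."""
    return f"{z}|{y}|{policy}"

def is_valid_proof(proof: str, x: str) -> bool:
    """Stand-in proof checker: this toy theory accepts a 'proof' iff it
    literally restates x (trivially sound, and deliberately very weak)."""
    return proof == x

def certify(x: str, max_proof_len: int) -> bool:
    """Enumerate candidate proofs up to a length bound and check each one.
    A real c.e. verifier enumerates forever; the bound keeps this a sketch."""
    alphabet = sorted(set(x))
    for n in range(1, max_proof_len + 1):
        for cand in product(alphabet, repeat=n):
            if is_valid_proof("".join(cand), x):
                return True
    return False

x = encode("z1", "y1", "p")         # a 7-character instance
print(certify(x, max_proof_len=7))  # True: the trivial proof of x is found
```

The point of the sketch is structural: whatever the proof system, `certify` is a fixed program, and the theorem says no such fixed program can succeed on all compliant instances of arbitrarily high complexity.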

Main Result: Incompleteness via Kolmogorov Complexity

The principal theorem proves that for any sound, computably enumerable, sufficiently expressive formal verifier T, there exists a complexity threshold, depending on T, beyond which there exist true policy-compliant instances whose compliance cannot be certified by T. This is not a computational hardness result but rather an information-theoretic incompleteness: the finite descriptive capacity of any formal system makes complete certification structurally impossible for sufficiently complex instances.
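Stated symbolically (this paraphrase, and the constant name c_T, are reconstructed here from the abstract rather than quoted from the paper):

```latex
\textbf{Theorem (incompleteness of safety verification, paraphrased).}
Let $T$ be a sound, computably enumerable, sufficiently expressive formal
theory, and let $P(x)$ denote policy compliance of an encoded instance
$x = \langle z, y, \Pi \rangle$. Then there exists a constant $c_T$ such
that for some instances $x$ with $K(x) > c_T$, $P(x)$ is true but
$T \nvdash P(x)$; such unprovable compliant instances occur at
arbitrarily high complexity.
```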

This argument critically depends on two technical pillars:

  1. Richness Assumption: a linear-sized set of high-complexity strings is policy-compliant, which holds in realistic settings with diverse permissible behaviors.
  2. Chaitin’s Incompleteness Principle: in any sound, computably enumerable formal system, there is a fixed bound beyond which no specific string can be proven to have Kolmogorov complexity exceeding that bound.

By leveraging a counting argument, the paper establishes that many policy-compliant encodings are incompressible, making it impossible for T to generate proofs for all of them without contradiction (since the descriptive capacity of T's proof language remains bounded). This extends Chaitin’s result from abstract mathematics to concrete policy verification of AI systems.
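The counting argument can be reproduced in miniature: there are 2^n binary strings of length n but only 2^n − 1 programs shorter than n bits, so incompressible strings must exist at every length. The small calculation below is an illustration of this standard argument, not code from the paper:

```python
def num_strings(n: int) -> int:
    """Number of binary strings of length exactly n."""
    return 2 ** n

def num_programs_shorter_than(n: int) -> int:
    """Number of binary programs of length 0, 1, ..., n-1 (equals 2^n - 1)."""
    return sum(2 ** i for i in range(n))

def min_incompressible(n: int, k: int) -> int:
    """Lower bound on the count of length-n strings with K(x) >= n - k:
    each short program describes at most one string, so the remaining
    strings cannot be compressed by k or more bits."""
    return num_strings(n) - num_programs_shorter_than(n - k)

print(min_incompressible(20, 0))   # 1: some length-20 string has K(x) >= 20
print(min_incompressible(20, 10))  # 1047553 of 1048576 resist 10-bit compression
```

Since compliant encodings are assumed to occupy a linear-sized slice of each length (the Richness Assumption), incompressible compliant instances are plentiful, and Chaitin's bound then blocks T from certifying their complexity.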

Theoretical and Practical Implications

A central implication is the irreducible gap between policy-compliance (true safety) and formal certification (provable compliance) in complex AI systems. This cannot be circumvented by expanding verification resources, increasing computational power, or scaling up empirical testing. Any universal, fixed formal verifier will—by necessity—be incomplete. The inability to globally characterize all valid behavior undermines safety assurance through classical verification.

This incompleteness result challenges dominant approaches in certified AI, which target scalability or expressiveness without addressing foundational undecidability. It undermines the premise that increasing model transparency or verification capability can alone achieve comprehensive coverage.

The key constructive insight is that proof-carrying or certificate-based paradigms provide a viable response. By establishing per-instance correctness via explicit, instance-specific proofs, it becomes feasible to achieve meaningful safety guarantees in critical deployments. These approaches shift the burden from global characterization to efficiently verifiable, cryptographically rooted instance certification, as in the Hermes Seal framework (Hasan et al., 27 Mar 2026).
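As a concrete, entirely hypothetical illustration of this instance-level shift, the sketch below uses an HMAC as a stand-in for a real proof or signature system such as a zk-SNARK; it is not the Hermes Seal design. An issuer checks one instance and attaches a certificate; any holder of the key can then verify that specific instance cheaply, with no claim made about any other behavior.

```python
import hashlib
import hmac
from typing import Optional

SECRET = b"issuer-signing-key"  # hypothetical key, standing in for a real scheme

def issue_certificate(instance: bytes, compliant: bool) -> Optional[bytes]:
    """Issuer-side: check this one instance and, if compliant, certify it."""
    if not compliant:
        return None
    return hmac.new(SECRET, instance, hashlib.sha256).digest()

def verify_certificate(instance: bytes, cert: bytes) -> bool:
    """Consumer-side: a cheap check that binds the certificate to exactly
    this instance; it says nothing about the space of all behaviors."""
    expected = hmac.new(SECRET, instance, hashlib.sha256).digest()
    return hmac.compare_digest(expected, cert)

x = b"<z=frame-17, y=brake, Pi=policy-v2>"
cert = issue_certificate(x, compliant=True)
print(verify_certificate(x, cert))                       # True
print(verify_certificate(b"<tampered instance>", cert))  # False
```

An HMAC makes the verifier trust the issuer's key; signature schemes and zk-SNARKs remove that shared-secret assumption, which is one reason the cited literature focuses on them.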

The result validates a shift in the verifiable AI literature towards paradigms inspired by zero-knowledge proofs, zk-SNARKs, and verifiable computation [gennaro2010verifiable, groth2016snark, ben2019scalable]. This trajectory is technologically aligned with scalable, interoperable safety artifacts for complex, distributed AI ecosystems.

Relevance for Autonomous and Perceptual Systems

Modern autonomous and multi-modal perception systems operate over complex, high-dimensional input spaces and produce neural outputs that must satisfy dynamic safety policies. The result proves that there will always be system configurations, no matter how semantically safe or compliant, that elude certification by any fixed, global policy checker. This applies to reachability-based checking, constraint encodings, and even hybrid approaches prevalent in neural verification (Katz et al., 2017, Huang et al., 2016, Cohen et al., 2019), indicating the universal nature of the revealed limitation.

By extension, safety assurance for autonomous vehicles, human-in-the-loop decision systems, and high-assurance critical autonomy must integrate instance-level certification schemes. The incompleteness theorem thus sets a theoretical ceiling for conventional verification research and mandates architectural change.

Directions for Future Research

The main theorem motivates several lines for future investigation:

  • Design of Universal Proof-Carrying Verification Protocols: Investigation into efficient, scalable, and robust certificate-based frameworks for policy compliance, including cryptographic and zero-knowledge systems for explainable AI.
  • Instance-Aware Assurance Metrics: Development of safety metrics and trust models that rely on the aggregation of per-instance certificates, rather than coverage of behavioral or input spaces.
  • Impact on Policy Specification Languages: Formalism and analysis of policy languages that facilitate proof generation in highly dynamic settings, potentially tightening the gap between policy expressiveness and certifiability.

Conclusion

The paper establishes a formal, information-theoretic incompleteness barrier for AI policy verification, demonstrating that universal safety certification for all compliant behaviors in high-complexity domains is unattainable via any fixed formal system. This prompts a methodological transition from universal, detection-based verification to instance-level, proof-driven assurance. The result is highly consequential for the future of verifiable and trustworthy AI system design and opens new avenues in the theory and systems of AI safety verification.
