Public Announcement Logic in HOL (2010.00810v1)

Published 2 Oct 2020 in cs.AI, cs.LO, cs.MA, and math.LO

Abstract: A shallow semantical embedding for public announcement logic with relativized common knowledge is presented. This embedding enables, for the first time, the automation of this logic with off-the-shelf theorem provers for classical higher-order logic. It is demonstrated (i) how meta-theoretical studies can be automated this way, and (ii) how non-trivial reasoning in the target logic (public announcement logic), required e.g. to obtain a convincing encoding and automation of the wise men puzzle, can be realized. Key to the presented semantical embedding, in contrast e.g. to related work on the semantical embedding of normal modal logics, is that evaluation domains are modeled explicitly and treated as an additional parameter in the encodings of the constituents of the embedded target logic, whereas previously they were implicitly shared between the meta logic and the target logic.
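To illustrate the core idea of the abstract, the following Lean sketch models embedded formulas as predicates that take the current evaluation domain (the set of worlds surviving earlier announcements) as an explicit parameter, so that a public announcement can restrict that domain. This is a minimal reconstruction for illustration only, not the paper's Isabelle/HOL encoding; the names Domain, Form, know, announce, and valid are assumptions introduced here.

```lean
-- Sketch: PAL formulas as higher-order predicates with an explicit evaluation domain.

abbrev Domain (W : Type) := W → Prop              -- the set of not-yet-eliminated worlds
abbrev Form (W : Type) := Domain W → W → Prop     -- truth indexed by domain and world

-- lifted propositional connectives
def pnot {W : Type} (φ : Form W) : Form W :=
  fun D w => ¬ φ D w

def pand {W : Type} (φ ψ : Form W) : Form W :=
  fun D w => φ D w ∧ ψ D w

-- knowledge of an agent with accessibility relation R, quantifying only over domain worlds
def know {W : Type} (R : W → W → Prop) (φ : Form W) : Form W :=
  fun D w => ∀ v, D v → R w v → φ D v

-- public announcement [!ψ]φ: if ψ holds, evaluate φ in the domain restricted by ψ
def announce {W : Type} (ψ φ : Form W) : Form W :=
  fun D w => ψ D w → φ (fun v => D v ∧ ψ D v) w

-- validity: truth at every world, starting from the full evaluation domain
def valid {W : Type} (φ : Form W) : Prop :=
  ∀ w, φ (fun _ => True) w
```

The design point mirrors the abstract's contrast with earlier embeddings of normal modal logics: there a formula is simply a predicate on worlds, so the evaluation domain coincides implicitly with that of the meta logic, whereas threading the domain explicitly is what makes the announcement operator expressible.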

Citations (2)
