
Consistent Update Synthesis via Privatized Beliefs (2406.10010v1)

Published 14 Jun 2024 in cs.LO and math.LO

Abstract: Kripke models are an effective and widely used tool for representing epistemic attitudes of agents in multi-agent systems, including distributed systems. Dynamic Epistemic Logic (DEL) adds communication in the form of model-transforming updates. Private communication is key in distributed systems, as processes exchanging (potentially corrupted) information about their private local states should not be detectable by any other process. This focus on privacy clashes with the standard DEL assumption that updates are applied to the whole Kripke model, which is usually commonly known by all agents, potentially leading to information leakage. In addition, a commonly known model cannot minimize the corruption of agents' local states caused by faulty information dissemination. The contribution of this paper is twofold: (I) To represent leak-free agent-to-agent communication, we introduce a way to synthesize an action model which stratifies a pointed Kripke model into private agent-clusters, each representing the local knowledge of the processes: given a goal formula $\varphi$ representing the effect of private communication, we provide a procedure to construct an action model that (a) makes the goal formula true and (b) maintains the consistency of agents' beliefs, if possible, without causing "unrelated" beliefs (minimal change), thus minimizing the corruption of local states in case of inconsistent information. (II) We introduce a new operation between pointed Kripke models and pointed action models, called pointed updates, which, unlike the product update operation of DEL, maintains only the subset of the world-event pairs that are reachable from the point, without unnecessarily blowing up the model size.
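
The pointed update of contribution (II) can be pictured as a product update restricted to the part of the product model that is reachable from the designated world-event pair. The Python sketch below illustrates that idea under simplifying assumptions not taken from the paper (finite models, propositional preconditions given as callables on valuations, no postconditions); the function name `pointed_update` and the data layout are hypothetical, so this is an illustrative sketch rather than the paper's exact definition.

```python
from collections import deque

def pointed_update(worlds, rel, val, point, events, erel, pre, epoint):
    """Sketch of a product update kept only on world-event pairs reachable
    from the designated pair (point, epoint).

    worlds : set of world names
    rel    : dict agent -> set of (w, v) accessibility pairs on worlds
    val    : dict world -> set of atoms true at that world
    point  : designated world
    events : set of event names
    erel   : dict agent -> set of (e, f) accessibility pairs on events
    pre    : dict event -> callable taking a valuation (set of atoms) -> bool
    epoint : designated event
    """
    # The designated pair must satisfy the actual event's precondition.
    assert pre[epoint](val[point]), "precondition of the designated event fails"

    agents = set(rel) | set(erel)
    start = (point, epoint)
    kept = {start}
    new_rel = {a: set() for a in agents}

    # Breadth-first search over the product: generate only pairs that are
    # reachable from the designated pair and satisfy their preconditions,
    # instead of building the full product model first.
    queue = deque([start])
    while queue:
        w, e = queue.popleft()
        for a in agents:
            for (w1, v) in rel.get(a, ()):
                if w1 != w:
                    continue
                for (e1, f) in erel.get(a, ()):
                    if e1 != e or not pre[f](val[v]):
                        continue
                    pair = (v, f)
                    new_rel[a].add(((w, e), pair))
                    if pair not in kept:
                        kept.add(pair)
                        queue.append(pair)

    # Without postconditions, the valuation of (w, e) is inherited from w.
    new_val = {(w, e): val[w] for (w, e) in kept}
    return kept, new_rel, new_val, start
```

Because only reachable pairs are ever generated, the resulting model can be much smaller than the full DEL product update, which matches the abstract's motivation of avoiding an unnecessary blow-up of the model size.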
