
Message Passing Complexity (MPC)

Updated 6 September 2025
  • Message Passing Complexity (MPC) is a quantitative metric for the operational and communication overhead of computing via message exchanges in distributed and networked systems.
  • It unifies ideas from communication complexity, Graph Neural Network analysis, cryptography, and statistical physics to define lower bounds and tradeoffs in resource usage.
  • MPC informs the design of scalable algorithms and secure protocols by quantifying factors such as message loss, state complexity, and energy or time trade-offs in practical implementations.

Message Passing Complexity (MPC) is a rigorous measure of the operational or communication overhead required for computing, learning, or inference in distributed, networked, or algorithmic systems that exchange information via message passing. Unlike binary expressivity characterizations—which focus on what is or is not computationally possible given a model—MPC provides a quantitative, often task-specific, metric that captures the actual or intrinsic difficulty of a computation when performed by local message exchange along a graph, network, or peer-to-peer system. MPC unifies and extends classical ideas from communication complexity, factor-graph inference, statistical physics, cryptography, and contemporary machine learning (notably, the analysis of Graph Neural Networks), supplying new perspectives on scaling, lower bounds, and efficiency in networked algorithms.

1. Formal and Conceptual Foundations

The concept of MPC is instantiated in a variety of domains; its unifying thread is the explicit quantification of the resources (information, messages, energy, time) required for correct and efficient computation via message exchanges over a network or system graph. In distributed computation and network coding, MPC is often connected to lower bounds on the total communication bits exchanged to solve a specified instance or class of problems under various stochastic or adversarial models. In the analysis of GNNs, as formalized recently (Kemper et al., 1 Sep 2025), MPC is defined for a task $f_v$ on a graph $G$ as

\mathrm{MPC}(f_v, G) = -\log \mathbb{P}\left\{ \mathrm{lossyWL}^{\ell}_v(G) \models f_v(G) \right\}

where:

  • $\ell$ is the number of message-passing layers,
  • $\mathrm{lossyWL}^{\ell}$ is a probabilistic, lossy variant of Weisfeiler–Lehman (WL) color refinement that mimics message drops due to over-squashing or capacity constraints,
  • $\models f_v(G)$ denotes that the information remaining after $\ell$ rounds suffices to infer $f_v(G)$.

In the context of distributed protocols, such as secure multi-party computation or population protocols, MPC may refer either to the minimum number of bits exchanged for correctness under a given error and privacy regime, or (in resource-constrained regimes) to the number of rounds, message bits, or energy required, sometimes under locality constraints or adversarial conditions (Data et al., 2013, Kerenidis et al., 2016, Bartusek et al., 11 Jun 2024).

2. MPC in Distributed and Secure Computation

In multi-party secure computation, MPC often denotes the minimal communication necessary for all parties to jointly compute a function without violating security, privacy, or correctness, possibly in the presence of aborts or adversarial behavior.

  • Information-theoretic lower bounds (e.g., using residual information and entropy inequalities) establish limits on link-wise transcript sizes,

H(M_{23}) \geq \max\{\mathrm{RI}(X;Z), \mathrm{RI}(X;Y)\} + H(Y, Z \mid X)

where $H(\cdot)$ is entropy, $Z$ is the output, $X, Y$ are inputs, and $\mathrm{RI}$ denotes residual information (Data et al., 2013).

  • In population protocols, MPC refers directly to message alphabet size, distinguishing between internal agent state and externally visible messages, and showing that with $O(1)$-bit messages and sufficiently large internal state, any computable predicate can be stably computed (with trade-offs between state and message complexity) (Amir et al., 2020).
  • Recent advances for MPC with abort (no agreement required) show protocol designs that achieve total communication

O\left( n^2/h \cdot \mathrm{polylog}(n, \lambda, D) \right)

where $n$ is the total number of parties, $h$ is the number of guaranteed honest parties, $\lambda$ is a security parameter, and $D$ is the circuit depth (Bartusek et al., 11 Jun 2024). This matches the tight lower bound $\Omega(n^2/h)$, with further tradeoffs between overall communication and per-node locality.
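To make the entropy terms concrete, here is a minimal numerical sketch (an illustrative toy, not the cited construction; `cond_entropy_bits` is a name introduced here): for a three-party XOR with uniform one-bit inputs, the additive term $H(Y, Z \mid X)$ in the bound above evaluates to exactly 1 bit.

```python
import math
from collections import Counter

def cond_entropy_bits(samples):
    """H(B | A) in bits, estimated from equally likely (a, b) pairs."""
    n = len(samples)
    p_ab = Counter(samples)               # joint counts over (a, b)
    p_a = Counter(a for a, _ in samples)  # marginal counts over a
    h = 0.0
    for (a, b), c in p_ab.items():
        h -= (c / n) * math.log2(c / p_a[a])
    return h

# Toy 3-party computation: inputs X, Y are uniform bits, output Z = X XOR Y.
# Given X, the pair (Y, Z) is determined by the uniform bit Y, so
# H(Y, Z | X) = 1 bit.
samples = [(x, (y, x ^ y)) for x in (0, 1) for y in (0, 1)]
print(cond_entropy_bits(samples))  # 1.0
```

Any secure protocol on the link between parties 2 and 3 must therefore carry at least this much transcript entropy in the toy setting, before the residual-information terms are even added.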

3. MPC as a Continuous Task-Specific Complexity in GNNs

Traditional GNN expressivity theory uses binary tests (typically variants of the Weisfeiler–Lehman test) to characterize what a model can or cannot distinguish. However, expressivity is not predictive of practical model successes or failures and ignores information loss due to over-squashing, depth, or noise.

The MPC framework (Kemper et al., 1 Sep 2025) introduces a continuous, task-and-architecture-sensitive metric:

  • For a given function $f_v$ on a node $v$ in a graph $G$, MPC quantifies the negative logarithm of the probability that, after $\ell$ lossy message-passing rounds, enough information survives to uniquely determine $f_v(G)$. Message loss is modeled as independent Bernoulli trials (with probabilities reflecting random-walk reachabilities), and information aggregation follows a (non-invertible) hash-based process, formally:

m^{(\ell)}_{u\rightarrow v} := Z^{\ell}_{uv} \cdot \mathrm{lossyWL}^{(\ell-1)}_u, \qquad \mathrm{lossyWL}^{(\ell)}_v := \mathrm{Hash}\left( m^{(\ell)}_{v\rightarrow v}, \; \{ m^{(\ell)}_{u\rightarrow v} \mid u \in N(v) \} \right)

\mathrm{MPC}(f_v, G) = -\log \mathbb{P}\left\{ \mathrm{lossyWL}^{\ell}_v(G) \models f_v(G) \right\}

  • The key properties include:
    • MPC predicts infinite complexity when expressivity theory would find the task strictly impossible with bounded message passing (i.e., required information outside the receptive field).
    • For reachable tasks, MPC quantifies the practical "sample complexity" or difficulty, reflecting over-squashing, depth, and graph topology, e.g., the $-\log$ of random-walk probabilities between source and target.
    • The formalism admits refined theorems: e.g., if $f$ is more fine-grained than $g$ ($f \models g$), then $\mathrm{MPC}(f_v, G) \geq \mathrm{MPC}(g_v, G)$. Any solution for both $f$ and $g$ can be composed with complexity at most the sum of the two.
    • Empirical validation shows that MPC correlates with observed performance, failures, and architectural improvements (virtual nodes, cycle-aware GNNs, etc.), exceeding the predictive power of prior binary tests.
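As a stylized illustration of these properties (a toy model, not the paper's full construction), the sketch below Monte Carlo estimates MPC for the simplest case of relaying a single value along a path: each hop independently drops the message with a fixed probability, so the survival probability decays geometrically and MPC grows linearly with distance, which is the over-squashing effect described above. The function name and the fixed per-hop keep probability are assumptions of this toy model.

```python
import math
import random

def estimate_mpc(num_hops, p_keep, trials=200_000, seed=0):
    """Monte Carlo estimate of MPC = -log P(success) for relaying one
    value across `num_hops` lossy edges, where each hop independently
    keeps the message with probability `p_keep` (a Bernoulli trial)."""
    rng = random.Random(seed)
    successes = sum(
        all(rng.random() < p_keep for _ in range(num_hops))
        for _ in range(trials)
    )
    if successes == 0:
        return math.inf  # task effectively out of reach at this depth
    return -math.log(successes / trials)

# MPC grows roughly linearly with distance, since
# -log(p_keep ** L) = L * (-log p_keep).
for hops in (1, 2, 4, 8):
    print(hops, round(estimate_mpc(hops, p_keep=0.9), 3))
```

The infinite-MPC case corresponds to tasks whose required information never reaches the target within the receptive field; the finite values track how much harder learning becomes as relevant information sits further away.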

4. Algorithmic and Complexity-Theoretic Implications

Throughout network codes, inference, and distributed computation, MPC is closely connected to the reduction of global functional complexity to local message exchanges. Major algorithmic and analytic themes include:

  • Factor graph and sum-product algorithms: Decoding of linear network codes can be rewritten as local message passing (sum-product) on factor graphs mirroring the network topology (0902.0417). Large linear systems (traditionally $O(K^3)$ work) reduce to $O(K)$ when decoded via localized message passing, as each subproblem exposes low-bandwidth, locally invertible substructures.
  • Reductions, lower bounds, and tradeoffs: Symmetrization and information-complexity tools enable tight lower bounds for multiparty communication complexity in both the blackboard and message-passing models. E.g., for coordinate-wise XOR or AND, the minimum total communication is $\Omega(nk)$ bits in the message-passing model (with separations, $\Omega(n \log k)$, in the broadcast model for AND) (Phillips et al., 2011, Braverman et al., 2013). Approximate optimization problems (e.g., graph matching) inherit tradeoffs: to obtain an $\alpha$-approximate solution, $\Omega(\alpha^2 n k)$ bits are necessary (Huang et al., 2017).
  • Scalable and adaptive computation: Advanced MPC models, including Adaptive Massively Parallel Computation (AMPC), exploit random access to distributed memory for substantial reductions in round and total message complexity (e.g., $O(1)$ rounds for maximal independent set versus polylogarithmic in MPC) (Behnezhad et al., 2019).
  • Randomness complexity: Communication lower bounds in secure MPC translate directly to lower bounds on the amount of randomness that must be consumed by any secure protocol. For example, in 3-party secure computations, the required randomness is at least the entropy of any transcript conditioned on both inputs (Data et al., 2013, Kerenidis et al., 2016).
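The factor-graph reduction above can be caricatured with a minimal sketch (assumed names, not the network-coding construction itself): when a linear system's constraints form a chain, one forward sweep of local messages solves it in $O(K)$, whereas generic dense elimination on the same $K \times K$ system costs $O(K^3)$.

```python
def solve_chain(x0, constraints):
    """Solve the chain-structured system x[i+1] = a[i]*x[i] + b[i]
    by one forward pass of local messages: O(K) work, versus O(K^3)
    for generic dense elimination on the equivalent K x K system."""
    x = [x0]
    for a, b in constraints:
        # each "message" to the next variable uses only local information
        x.append(a * x[-1] + b)
    return x

# K = 4 variables, 3 local constraints
print(solve_chain(1.0, [(2.0, 1.0), (0.5, 0.0), (1.0, -1.5)]))
# -> [1.0, 3.0, 1.5, 0.0]
```

The speedup comes entirely from locality: each constraint couples only adjacent variables, so no global matrix ever needs to be inverted.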

5. Theoretical and Practical Significance

The MPC perspective unifies necessary impossibility and resource bounds with quantitative guidance for practical system and algorithm design.

  • Practical limitations and remedies: MPC captures phenomena such as over-squashing in GNNs (where long-range information is exponentially attenuated or bottlenecked), revealing why adding global shortcut connections (e.g., virtual nodes) or cycle-sensitive operations can increase effective information flow by dramatically reducing MPC values (Kemper et al., 1 Sep 2025).
  • Optimality and separation: Communication-ideal protocols (minimum on all links), and functions with provable superlinear communication cost relative to input length, are identified in secure MPC (Data et al., 2013).
  • Algorithm design: In large-scale, asynchronous, or resource-constrained networks, MPC provides the theoretical foundation to justify architectural choices (local subgraph clustering, committee election, sparse communication graphs) shown in both cryptographic and ML domains (Dani et al., 2013, Bartusek et al., 11 Jun 2024, Wang et al., 2023).
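As a stylized back-of-the-envelope model (a fixed per-hop survival probability $p$ is an assumption here, not the paper's exact process), relaying a value over $L$ lossy hops succeeds with probability $p^L$, giving $\mathrm{MPC} = -L \log p$. This closed form makes the virtual-node remedy quantitative: a global shortcut replaces an $L$-hop relay with a 2-hop one.

```python
import math

def relay_mpc(num_hops, p_keep):
    """Closed-form MPC of relaying one value over `num_hops` edges that
    each keep the message with probability `p_keep`:
    -log(p_keep ** num_hops) = num_hops * (-log p_keep)."""
    return -num_hops * math.log(p_keep)

direct = relay_mpc(10, 0.8)    # distant node pair on the original graph
shortcut = relay_mpc(2, 0.8)   # same pair routed through a virtual node
print(direct, shortcut)        # the shortcut cuts MPC by a factor of 5
```

In this toy model the MPC reduction is exactly the hop-count ratio, which is the sense in which shortcut connections "increase effective information flow".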

6. Extensions and Ongoing Research Directions

  • Hybrid and heterogeneous methods: New architectures enable locally adaptive message-passing rules (e.g., node-dependent radius of approximation) that target computational effort to dense or critical regions—proven to improve accuracy and speed in practical graphs (Cantwell et al., 2023).
  • Information-theoretic compression and direct sum results: Protocols may be compressed to their public information complexity (PIC), with direct sum theorems linking the cost of many instances to single-instance complexity (Kerenidis et al., 2016).
  • Physical resource-aware complexity: In quantum or analog settings, message passing complexity must account for fundamental resource tradeoffs such as energy versus time (e.g., number of modes and mean photon number in optical quantum protocols), yielding lower bounds of the form

\min\left\{ \mu \log m, \; m \log(1 + \mu/\delta) \right\} \geq \Omega(\log D(f))

for physical resource growth as a function of problem size (Marwah et al., 2020).

  • Integration with learning theory and empirical performance: MPC serves as a bridge between theoretical understanding, empirical validation, and the design of new models, enabling metrics that not only satisfy worst-case lower bounds but also explain and predict empirical generalization and failure in realistic settings (Kemper et al., 1 Sep 2025).

Summary Table: Examples of MPC Formalizations

| Domain | Definition / Bound | Main Resource Quantified |
| --- | --- | --- |
| Network code decoding | Subspace/coset intersections in factor graph | Support size, local operations |
| Secure MPC (3-party) | Entropy and residual information per transcript | Bits exchanged, randomness |
| Population protocols | Internal state vs. message bit complexity | Message alphabet, state size |
| Distributed (MPC, AMPC) | Total/message bits or rounds vs. problem size | Bits, rounds, locality |
| GNN analysis (lossyWL MPC) | $-\log$ success probability of task inference | Sample/gradient complexity |
| Optical quantum protocols | $\min\{\mu \log m,\; m \log(1+\mu/\delta)\}$ | Energy/time tradeoff |

The message passing complexity framework thus transcends traditional dichotomies, providing a fine-grained, resource-sensitive, and empirically meaningful foundation for analyzing and improving message-based distributed computation and learning across diverse domains.
