A-MPLang: Linear GNN Computation
- A-MPLang is a sublanguage of MPLang defined by pure linear message passing and aggregation without nonlinear activations.
- It expresses node features as linear combinations of walk counts and walk sums up to a given depth, providing an algebraic framework for GNNs.
- Two graphs are indistinguishable under A-MPLang if their local walk-based features match, highlighting both its power and limitation in capturing graph properties.
A-MPLang is the fragment of MPLang, a formal language for expressing graph neural network (GNN)-style computation, consisting of pure linear message passing and aggregation without any nonlinear activation functions. This fragment provides an algebraic and logical foundation for the analysis of GNNs operating in the absence of nonlinearities, isolating the expressive capacity attributable to linear operators and message aggregation alone. The term A-MPLang and its characterization were introduced and studied in the context of the expressive power of logical calculi for GNNs, in particular in "A Logical View of GNN-Style Computation and the Role of Activation Functions" (Barceló et al., 22 Dec 2025).
1. Syntax and Semantics of A-MPLang
A-MPLang is defined as a sublanguage of MPLang, parameterized by the input embedding dimension $d$ and the absence of activation functions. The grammar for A-MPLang expressions is:

$$\varphi ::= 1 \mid x_i \mid c \cdot \varphi \mid \varphi + \varphi \mid \langle \varphi \rangle$$

where:
- $1$ denotes the constant expression mapping every node to $1$,
- $x_i$ denotes the $i$-th coordinate of the input embedding at node $v$, for $1 \le i \le d$,
- $c \in \mathbb{R}$ is a scalar,
- $+$ and $c \cdot$ denote pointwise addition and scalar multiplication on expressions,
- $\langle \varphi \rangle$ denotes the sum of $\varphi$ over the neighbors: $\langle \varphi \rangle(v) = \sum_{(v,u) \in E} \varphi(u)$, with $E$ the edge set of the graph.
A-MPLang expressions are interpreted on a $d$-embedded graph (a graph whose nodes carry embeddings in $\mathbb{R}^d$) as real-valued node functions.
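The semantics above can be sketched as a small evaluator; this is an illustrative implementation under the notation in this article, not code from the paper. Expressions are built from the five grammar constructs and evaluated against an adjacency matrix `A` and an embedding matrix `X` (one row per node), yielding one real value per node.

```python
import numpy as np

# A minimal sketch of an A-MPLang evaluator. An expression is a function
# (A, X) -> vector of node values. The five constructors mirror the grammar:
# 1, x_i, c * phi, phi + psi, and <phi> (sum over neighbors).

def one(A, X):
    return np.ones(A.shape[0])                 # constant expression 1

def coord(i):
    return lambda A, X: X[:, i]               # x_i: i-th embedding coordinate

def scale(c, phi):
    return lambda A, X: c * phi(A, X)         # scalar multiplication

def add(phi, psi):
    return lambda A, X: phi(A, X) + psi(A, X) # pointwise addition

def agg(phi):
    # <phi>(v) = sum of phi over the neighbors of v, i.e. A @ phi
    return lambda A, X: A @ phi(A, X)

# Example: a path graph 0-1-2 with 2-dimensional embeddings.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

expr = add(agg(coord(0)), scale(2.0, one))    # <x_1> + 2 * 1
print(expr(A, X))                             # [5. 8. 5.]
```

Node 1 has neighbors 0 and 2, so $\langle x_1 \rangle$ evaluates to $1 + 5 = 6$ there, and adding the constant $2$ gives $8$.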
2. Expressive Power: Algebraic Characterization
The expressive scope of A-MPLang is succinctly captured by Theorem 4.1 in (Barceló et al., 22 Dec 2025) (Normal Form Theorem): any A-MPLang expression of $\langle\cdot\rangle$-depth $L$ is numerically equivalent to a linear combination of walk-count and walk-sum features up to length $L$:

$$\varphi(v) = \sum_{\ell=0}^{L} a_\ell \, w_\ell(v) + \sum_{\ell=0}^{L} \sum_{i=1}^{d} b_{\ell,i} \, s_{\ell,i}(v),$$

where:
- $w_\ell(v)$ is the count of walks of length $\ell$ starting at $v$,
- $s_{\ell,i}(v)$ is the sum of the $i$-th embedding coordinate over the endpoints of length-$\ell$ walks from $v$ (that is, the sum over all length-$\ell$ walks $v = u_0, u_1, \ldots, u_\ell$ of $x_i(u_\ell)$).
This establishes that A-MPLang captures exactly the features expressible as linear combinations of local walk-counts and walk-sums, up to a maximum walk length equal to the $\langle\cdot\rangle$-depth. These features are not closed under arbitrary Boolean combinations or compositional nonlinearities.
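The normal form can be checked numerically on a small graph: since neighbor aggregation is multiplication by the adjacency matrix, the depth-2 expression $\langle\langle 1 \rangle\rangle$ must equal the length-2 walk count $w_2(v) = (A^2 \mathbf{1})(v)$, and $\langle\langle x_1 \rangle\rangle$ the walk-sum $s_{2,1}(v) = (A^2 X)_{v,1}$. A brief sanity check under these definitions:

```python
import numpy as np

# Verify the normal-form claim on a star graph: node 0 joined to nodes 1 and 2.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.array([[2.0], [3.0], [5.0]])   # 1-dimensional embeddings

ones = np.ones(3)
walk2 = A @ (A @ ones)                # <<1>>: two nested neighbor sums
assert np.allclose(walk2, (A @ A) @ ones)       # = A^2 1 = w_2

walksum2 = A @ (A @ X[:, 0])          # <<x_1>>
assert np.allclose(walksum2, (A @ A) @ X[:, 0]) # = A^2 x_1 = s_{2,1}

print(walk2)      # [2. 2. 2.]
print(walksum2)   # [4. 8. 8.]
```

From node 0 there are two length-2 walks (out to a leaf and back), and from each leaf there are two (through the center to either leaf, including returning), so all walk counts are 2; the walk-sums aggregate the corresponding endpoint features.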
3. Equivalence and Distinguishability of Graphs
A critical consequence (Corollary 4.2 in (Barceló et al., 22 Dec 2025)) is that two pointed embedded graphs $(G, v)$ and $(H, u)$ are indistinguishable by all A-MPLang expressions of $\langle\cdot\rangle$-depth at most $L$ if and only if their sequences of walk-counts and walk-sums up to length $L$ match. This yields a complete invariant (up to depth $L$) for the linear aggregation expressivity of the model.
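This invariant is directly computable. The sketch below (an illustration of the corollary, not the paper's code) collects the walk-count and walk-sum sequences at a chosen node via powers of the adjacency matrix, and compares two classic indistinguishable pointed graphs: a 4-cycle and a 6-cycle with constant unit embeddings, where every node sees $2^\ell$ walks of each length $\ell$.

```python
import numpy as np

def walk_invariants(A, X, v, L):
    # Sequence (w_0(v), s_{0,i}(v), ..., w_L(v), s_{L,i}(v)) up to depth L.
    n = A.shape[0]
    ones = np.ones(n)
    feats = []
    M = np.eye(n)                     # M = A^l, starting from l = 0
    for _ in range(L + 1):
        feats.append((M @ ones)[v])   # walk count w_l(v)
        feats.extend((M @ X)[v])      # walk sums s_{l,i}(v), one per coordinate
        M = M @ A
    return feats

def cycle(n):
    A = np.zeros((n, n))
    for k in range(n):
        A[k, (k + 1) % n] = A[(k + 1) % n, k] = 1.0
    return A

inv4 = walk_invariants(cycle(4), np.ones((4, 1)), v=0, L=3)
inv6 = walk_invariants(cycle(6), np.ones((6, 1)), v=0, L=3)
print(inv4 == inv6)   # True: no A-MPLang expression of depth <= 3 separates them
```

Since both invariant sequences agree (every entry is $2^\ell$), the corollary implies no A-MPLang expression of that depth distinguishes the two pointed graphs, even though the graphs themselves are non-isomorphic.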
4. Relation to Other Logics and GNN Fragments
A-MPLang lies at the base of a hierarchy of logics for GNN computation:
- In the absence of activation functions, A-MPLang encompasses only linear computations.
- The addition of bounded, eventually constant activations to MPLang (such as truncated ReLU or Boolean step functions) strictly increases expressive power, subsuming earlier logics capturing Presburger-definable, neighbourhood-counting queries.
- Notably, the Boolean closure of A-MPLang (i.e., closure under arbitrary Boolean combinations of its expressions) is not available in the pure linear fragment; it is obtained only by introducing eventually constant activations.
- The presence of unbounded activations (e.g., ReLU) in MPLang unlocks strictly greater numerical expressivity, for instance enabling the counting of unbounded imbalances between different node colors ((Barceló et al., 22 Dec 2025), Thm 6.1).
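The imbalance-counting point in the last item can be illustrated concretely (this is an assumption-labeled sketch, not the paper's construction): the red- and blue-neighbor counts are themselves A-MPLang-expressible as $\langle x_{\text{red}} \rangle$ and $\langle x_{\text{blue}} \rangle$, but their truncated difference $\max(0, r - b)$ requires ReLU, since it is not a linear function of the inputs.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Star graph: node 0 joined to nodes 1, 2, 3.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)

# Colors as one-hot embeddings: column 0 = red, column 1 = blue.
# Node 0 is uncolored; nodes 1 and 2 are red, node 3 is blue.
X = np.array([[0, 0],
              [1, 0],
              [1, 0],
              [0, 1]], dtype=float)

r = A @ X[:, 0]            # <x_red>:  red-neighbor counts  -> [2, 0, 0, 0]
b = A @ X[:, 1]            # <x_blue>: blue-neighbor counts -> [1, 0, 0, 0]
print(relu(r - b))         # node 0: max(0, 2 - 1) = 1
```

The counts `r` and `b` grow without bound as the neighborhood grows, which is why no bounded, eventually constant activation can recover this truncated difference in general.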
5. Illustrative Examples and Limitations
Concrete instances of A-MPLang expressions include:
- $\langle x_1 \rangle$ (sum of neighbors’ first feature coordinate),
- $\langle \langle x_1 \rangle \rangle$ (sum over two-hop neighborhoods),
- linear averages of input coordinates (e.g., $\tfrac{1}{2}(x_1 + x_2)$ at each node).
Given the lack of nonlinearity, A-MPLang cannot represent certain queries, such as maximum, parity, or threshold distinctions among features, nor can it simulate unbounded counting tasks that require compositional nonlinearity (e.g., distinguishing between high and low degree in a way not reducible to a linear sum).
6. Significance in the Theory of GNN Expressiveness
A-MPLang serves as a rigorous baseline for characterizing the inherent computational strengths and limitations of GNNs implementing only linear aggregation and message passing. It formally delineates which properties of a node and its graph-theoretic context can be encoded and distinguished in the absence of activation functions, and where this is fundamentally insufficient. The transition from A-MPLang to richer fragments underscores the crucial role of nonlinearity—both bounded and unbounded—in augmenting the expressive power of GNN architectures beyond linear aggregation (Barceló et al., 22 Dec 2025).
7. Connections and Applications
The formal framework of A-MPLang connects with classical graph-theoretic notions of walk enumeration, spectral graph theory, and the algebraic view of message passing. In the context of deep learning on graphs, this fragment precisely characterizes the feature computations realized by GNN architectures that forgo nonlinearities, such as those in some early or analytically tractable GNN models.
The study of A-MPLang has further implications for complexity analysis, invariant construction, and the design of graph algorithms exploiting only local linear aggregation schemes. Its embedding as a fragment of the broader language MPLang enables principled classification of GNN models by their activation regimes and suggests systematic routes to augment expressivity via carefully selected nonlinearity classes (Barceló et al., 22 Dec 2025).