Information-Theoretic Decompositions

Updated 20 April 2026
  • Information-Theoretic Decompositions is a framework that partitions mutual information into redundant, unique, and synergistic components using lattice structures and logarithmic measures.
  • The logarithmic decomposition method employs signed measures and Möbius inversion to yield a detailed and geometrically interpretable breakdown of information content.
  • This refined approach offers practical insights for tasks like causal inference, network analysis, and feature selection by quantifying higher-order interactions.

Information-theoretic decompositions, or partial information decompositions (PIDs), aim to partition the mutual information that a set of variables contains about a target into distinct, interpretable components: redundant (shared) information, unique information, and synergistic (complementary or higher-order) information. Over the past decade, this objective has motivated extensive research, yielding several frameworks, axiomatizations, and new geometrical and measure-theoretic tools that can represent, quantify, and refine these decompositions.

1. Structural Foundations: Lattices, Atoms, and Signed Measures

The foundation of information-theoretic decomposition is the attempt to describe how total information—the joint mutual information between a set of variables and a target—can be uniquely and canonically partitioned into nonnegative "atoms". For $n$ random variables $X_1, \ldots, X_n$ and a target $S$, Williams and Beer formulated the problem by postulating axioms for a redundancy function $I_{(\wedge)}(S : A_1; \ldots; A_k)$ (with each $A_i \subseteq \{X_1, \ldots, X_n\}$). These axioms—global positivity, weak symmetry, self-redundancy, and monotonicity—define a redundancy lattice: the partial-information (PI) lattice, whose elements are antichains of subsets of sources (Bertschinger et al., 2012).
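The lattice structure can be made concrete by brute force. The following sketch (an illustrative enumeration, not code from the cited papers) builds the PI-lattice nodes as antichains of non-empty source subsets and recovers the known node counts: 4 for two sources and 18 for three.

```python
# Sketch: enumerate the nodes of the Williams-Beer partial-information
# lattice for n sources. Nodes are non-empty antichains of non-empty
# subsets of {1, ..., n} (no member of the antichain contains another).
from itertools import combinations

def nonempty_subsets(n):
    items = range(1, n + 1)
    return [frozenset(c) for r in range(1, n + 1)
            for c in combinations(items, r)]

def pi_lattice_nodes(n):
    subsets = nonempty_subsets(n)
    nodes = []
    # Iterate over every non-empty collection of source subsets...
    for r in range(1, len(subsets) + 1):
        for coll in combinations(subsets, r):
            # ...and keep it only if it is an antichain (pairwise
            # incomparable under strict set inclusion).
            if all(not (a < b or b < a)
                   for a in coll for b in coll if a is not b):
                nodes.append(coll)
    return nodes

print(len(pi_lattice_nodes(2)))  # 4: {1}{2}, {1}, {2}, {12}
print(len(pi_lattice_nodes(3)))  # 18 nodes, as in Williams & Beer
```

The exponential growth of this lattice (18 nodes already at $n=3$) is one reason fine-grained alternatives such as the logarithmic decomposition are attractive.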

Through Möbius inversion on this lattice, each redundancy function $I_{(\wedge)}$ uniquely specifies a decomposition of the total mutual information into non-negative "atoms": redundancy (shared information), unique information (accessible only via a single source), and synergy (information not present in any marginal, only in the joint). This same lattice formalism underlies most contemporary PID proposals, though no canonical redundancy function emerges from the axioms alone.
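For two sources the lattice has only four atoms, and Möbius inversion reduces to simple subtraction. A minimal sketch, using Williams and Beer's original $I_{\min}$ redundancy measure on a toy XOR system (the distribution is assumed for illustration):

```python
# Sketch: n=2 partial information decomposition via Williams & Beer's
# I_min and Mobius inversion on the lattice {1}{2} <= {1},{2} <= {12}.
# Toy system: S = X1 XOR X2 with independent uniform bits.
from collections import defaultdict
from math import log2

# joint pmf over (x1, x2, s)
p = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}

def marginal(idx):
    m = defaultdict(float)
    for outcome, pr in p.items():
        m[tuple(outcome[i] for i in idx)] += pr
    return m

def mutual_info(src_idx):
    # I(S; X_src) from joint and marginal distributions.
    ps, psrc = marginal((2,)), marginal(src_idx)
    joint = marginal(src_idx + (2,))
    return sum(pr * log2(pr / (psrc[k[:-1]] * ps[k[-1:]]))
               for k, pr in joint.items() if pr > 0)

def i_min():
    # I_min(S; {1};{2}) = sum_s p(s) min_i I(S=s; X_i), where the
    # specific information is I(S=s; A) = sum_a p(a|s) log p(s|a)/p(s).
    ps, total = marginal((2,)), 0.0
    for s, p_s in ps.items():
        specifics = []
        for idx in ((0,), (1,)):
            pa, joint = marginal(idx), marginal(idx + (2,))
            specifics.append(sum(
                (joint[a + s] / p_s) * log2(joint[a + s] / (pa[a] * p_s))
                for a in pa if joint.get(a + s, 0) > 0))
        total += p_s * min(specifics)
    return total

red = i_min()                                    # shared information
uniq1 = mutual_info((0,)) - red                  # unique to X1
uniq2 = mutual_info((1,)) - red                  # unique to X2
syn = mutual_info((0, 1)) - red - uniq1 - uniq2  # synergy
print(red, uniq1, uniq2, syn)  # XOR: all atoms 0 except synergy = 1
```

For XOR, neither source alone carries any information about $S$, so redundancy and the unique atoms vanish and the entire bit is assigned to the synergy atom.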

The signed-measure approach refines and generalizes this picture. In particular, the logarithmic decomposition (LD), as developed in (Down et al., 2023, Down et al., 2024), constructs a concrete signed measure on an outcome space using "logarithmic atoms": sets indexed by all possible outcome patterns of the sources. This yields a geometrically interpretable, uniquely defined, and extremely fine-grained atomic decomposition of entropy and related quantities, strictly refining the earlier I-measure atoms of Yeung.

2. Logarithmic Decomposition: Definition and Properties

The logarithmic decomposition defines a signed measure $\mu$ on the power set $\Sigma$ of the outcome space $\Omega = \mathcal{X}_1 \times \cdots \times \mathcal{X}_n$. The atoms $a_{I, x_I}$ correspond to cylinder sets over all non-empty index sets $I \subseteq \{1, \ldots, n\}$ and their partial assignments $x_I$. The measure of each atom is built from the surprisals $\log(1/p(x_I))$ of the marginal probabilities $p(x_I)$, with a sign fixed by the parity of $|I|$. This construction ensures:

  • Additivity: $\mu$ extends uniquely from atoms to all measurable sets in $\Sigma$.
  • Sign Structure: Odd-order atoms contribute positively, even-order atoms negatively.
  • Total Entropy: The sum over all atoms recovers the joint entropy,

$$H(X_1, \ldots, X_n) = \sum_{\emptyset \neq I \subseteq \{1, \ldots, n\}} \sum_{x_I} \mu(a_{I, x_I}).$$

  • Refinement: Coarser regions in Yeung's I-measure are revealed as sums over these finer logarithmic atoms (Down et al., 2024).

This decomposition not only recovers standard quantities (entropy, mutual information, joint entropy, conditional entropy) via union, intersection, and set differences of sets of atoms, but also provides explicit localizations: each atom identifies which outcome-patterns contribute to redundancy, synergy, and so on.

3. Relation to Existing Measures: Coherence and Distinction

The LD framework is strictly finer than previous set-based and I-measure-based approaches:

  • I-measure Recapture: Each I-measure region of Yeung corresponds to the union of the logarithmic atoms that refine it; the total mass over those atoms exactly equals the I-measure of the region.
  • Geometric Clarity: The atomization enables direct geometric or diagrammatic visualization of interactions and synergies, facilitating interpretations in high-order systems, network diagrams, or pathway analyses.
  • Distinctiveness: In cases where existing diagrams or I-measures fail to distinguish higher-order interactions—for example, Dyadic and Triadic systems [James & Crutchfield, as cited in (Down et al., 2024)]—the LD framework separates distributions with identical marginals and pairwise mutual informations by their differing patterns of synergy atoms, thereby exposing high-order effects that are otherwise invisible.
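The signed character of such measures is easy to exhibit even at the I-measure level: the central atom of a three-variable diagram is the co-information, which turns negative for synergistic systems. A small illustrative computation (toy distributions assumed):

```python
# Sketch: Yeung's I-measure is a *signed* measure. The central atom of a
# three-variable diagram is the co-information
#   I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(XY) - H(XZ) - H(YZ) + H(XYZ),
# which is negative for synergistic systems such as XOR.
from itertools import product
from math import log2

def entropy(pmf):
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idx):
    out = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def co_information(pmf):
    H = lambda idx: entropy(marginal(pmf, idx))
    return (H((0,)) + H((1,)) + H((2,))
            - H((0, 1)) - H((0, 2)) - H((1, 2))
            + H((0, 1, 2)))

# XOR: z = x ^ y with independent uniform bits -> pure synergy.
xor = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
# Copy: three identical bits -> pure redundancy.
copy = {(b, b, b): 0.5 for b in (0, 1)}

print(co_information(xor))   # -1.0 (negative central atom)
print(co_information(copy))  # +1.0 (positive central atom)
```

The LD atoms refine exactly such regions, so the sign structure above reappears, at finer granularity, in the logarithmic picture.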

4. Applications: From Canonical Examples to General Information Quantities

Logarithmic decomposability extends to a broad class of information-theoretic functionals beyond entropy and mutual information. Any quantity built as an integer-coefficient linear combination of entropies over unions, intersections, and set-differences of source variables (e.g., total correlation, co-information, interaction information, dual total correlation, and O-information) admits a logarithmic decomposition, since the LD atoms linearly refine the I-measure (Down et al., 2023, Down et al., 2024).
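As an illustration of such integer-coefficient combinations, the following sketch computes the O-information as total correlation minus dual total correlation directly from a joint distribution (the toy distributions are assumed for illustration):

```python
# Sketch: O-information as an integer combination of entropies,
# O = TC - DTC, evaluated directly from a joint pmf.
from itertools import product
from math import log2

def entropy(pmf):
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idx):
    out = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def o_information(pmf, n):
    H = lambda idx: entropy(marginal(pmf, tuple(idx)))
    joint = H(range(n))
    tc = sum(H((i,)) for i in range(n)) - joint       # total correlation
    dtc = joint - sum(joint - H([j for j in range(n) if j != i])
                      for i in range(n))              # dual total correlation
    return tc - dtc

# Two toy systems: pure synergy (XOR) vs. pure redundancy (three copies).
xor = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
copy = {(b, b, b): 0.5 for b in (0, 1)}
print(o_information(xor, 3))   # -1.0: synergy-dominated
print(o_information(copy, 3))  # +1.0: redundancy-dominated
```

The sign of the O-information summarizes whether redundancy or synergy dominates; the LD refines this single number into per-pattern atoms.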

A notable example is the O-information $\Omega$, which decomposes as a signed sum of overlaps of LD content sets. Similarly, the Gács–Körner common information (obtained by maximizing the entropy $H(V)$ over discrete variables $V$ that are simultaneously determined by each of the sources) can be identified in the geometric LD picture as the largest subset of the content intersection that itself forms the content of some variable (Down et al., 2023). This approach is also structurally compatible with minimal sufficient statistics and related quantities.

The LD structure is not limited to discrete variables; it extends to continuous domains by integrating over densities, preserving the sign-alternation and additivity structure for differential entropies (Down et al., 2024).

5. Comparative Examples and Power

Logarithmic decomposition uncovers key distinctions even among systems that cannot be separated by classical I-measure analysis. In the Triadic versus Dyadic systems of James & Crutchfield, both distributions possess uniform marginals and identical pairwise mutual informations, but differ in their triple-variable dependencies:

  • Dyadic: All information is attributable to singletons and triples; pairwise atoms cancel.
  • Triadic: The signature pattern of triple-order atoms is distinct, corresponding to strictly positive synergy unique to the triadic configuration and absent in the dyadic case.

Thus, LD not only recognizes subtle high-order structures but also enables qualitative and quantitative separation of systems with identical lower-order statistics.
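The indistinguishability at the Shannon level can be checked directly: computing marginal entropies, pairwise mutual informations, and the joint entropy for both constructions (following the standard bit-level definitions of James & Crutchfield) yields identical values, which is precisely why finer atoms are needed to separate them. A sketch:

```python
# Sketch: the James-Crutchfield dyadic and triadic systems share all
# their low-order Shannon statistics. Each variable is a pair of bits.
from itertools import product
from math import log2

def entropy(pmf):
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idx):
    out = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def dyadic():
    # X = (a,b), Y = (b,c), Z = (c,a) with a, b, c i.i.d. uniform bits.
    return {((a, b), (b, c), (c, a)): 1 / 8
            for a, b, c in product((0, 1), repeat=3)}

def triadic():
    # X = (a,d), Y = (b,d), Z = (c,d) with a ^ b ^ c = 0 and a shared
    # uniform bit d.
    return {((a, d), (b, d), (a ^ b, d)): 1 / 8
            for a, b, d in product((0, 1), repeat=3)}

def shannon_stats(pmf):
    H = lambda idx: entropy(marginal(pmf, idx))
    marginals = [H((i,)) for i in range(3)]
    pair_mis = [H((i,)) + H((j,)) - H((i, j))
                for i, j in ((0, 1), (0, 2), (1, 2))]
    return marginals, pair_mis, H((0, 1, 2))

print(shannon_stats(dyadic()))   # identical to...
print(shannon_stats(triadic()))  # ...the triadic statistics
```

Both systems report marginal entropies of 2 bits, pairwise mutual informations of 1 bit, and a joint entropy of 3 bits; only finer (e.g., LD-atom) analysis tells them apart.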

6. Theoretical and Practical Implications

The granularity and explicitness of the logarithmic atomization provide several advantages:

  • Locality: The decomposition assigns information contributions to specific (partial) outcome patterns, supporting fine-grained, interpretable, and potentially operational analysis.
  • Systematic Decomposition of Synergy and Redundancy: By expressing these components as explicit sums of LD atoms subject to sign and positivity constraints, one gains precise measurement of redundant and synergistic contributions at any order.
  • Quality-Led Analysis: This capacity for explicit localization supports refined “quality-led” methodologies in information theory: feature selection, causal inference, network neuroscience, and multi-pathway analyses can all leverage fine-grained information flow patterns revealed by the LD framework (Down et al., 2024).
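A small illustration of why such localization matters for feature selection: for a synergistic target, univariate mutual-information scores are blind to the jointly informative features (the toy features and target below are hypothetical):

```python
# Sketch: univariate MI ranking vs. joint information for a synergistic
# target. Hypothetical features (x1, x2, noise) with target s = x1 ^ x2.
from itertools import product
from math import log2

def entropy(pmf):
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idx):
    out = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def mi(pmf, a_idx, b_idx):
    # I(A; B) = H(A) + H(B) - H(A, B)
    H = lambda idx: entropy(marginal(pmf, idx))
    return H(a_idx) + H(b_idx) - H(a_idx + b_idx)

pmf = {(x1, x2, r, x1 ^ x2): 1 / 8
       for x1, x2, r in product((0, 1), repeat=3)}

for feats in ((0,), (1,), (2,), (0, 1)):
    print(feats, mi(pmf, feats, (3,)))
# Each single feature scores 0 bits, including the two relevant ones;
# only the pair (x1, x2) scores the full bit, so a purely univariate
# ranking would discard both relevant features.
```

Synergy-aware decompositions make this failure mode visible by assigning the information explicitly to the pair-level atom rather than to either feature alone.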

Moreover, by providing a measure-theoretic and geometric refinement compatible with and extending Yeung's formalism, any future generalization—whether to Tsallis entropy, algorithmic information, or quantum information—has a pathway via the LD atomization.

7. Extensions and Future Directions

The LD framework admits several natural extensions:

  • Continuous Variables: Integration over LD densities enables application to problems in statistical physics, continuous-time processes, and differential entropy calculations.
  • Algorithmic and Topological Generalization: Since LD is canonical and atomic, it opens the way to combinatorial and algebraic topological methods in information theory.
  • Operational and Learning-theoretic Applications: The explicit atomization and its interpretation in terms of outcome- and interaction-patterns can guide learning algorithms, feature selection heuristics, and the empirical discovery of structure in data-rich domains.

Open directions include developing efficient computational methods for high-dimensional systems, further exploring the connections with minimal sufficient statistics and causal inference, and characterizing the operational significance of LD atoms in cryptography and distributed systems.


Logarithmic decomposition provides the finest known atomic refinement of Shannon-type information quantities, strictly refining all traditional set-theoretic or lattice-based diagrams, and offering principled, interpretable localization of redundant, unique, and synergistic information across all scales of multivariate systems (Down et al., 2023, Down et al., 2024).
