
Groenewold's Info Gain in Quantum Measurements

Updated 27 January 2026
  • Groenewold’s information gain is defined as the net entropy reduction from a quantum measurement, calculated as the difference between the initial state entropy and the average post-measurement entropy.
  • It operationally connects information extraction, thermodynamic work, and approximate reversibility by employing fidelity bounds and monotonicity properties.
  • The framework underpins applications in quantum measurements, resource theories, and error correction by serving as a universal monotone to evaluate measurement strategies.

Groenewold’s information gain, also known as “entropy reduction,” quantitatively characterizes the amount of information extracted from a quantum (or more generally, probabilistic) system by a measurement process. Originally introduced by H.J. Groenewold as the difference between the initial entropy and the average post-measurement entropy, this measure captures both operational and thermodynamic aspects of measurement. Modern developments have extended its interpretation, established monotonicity properties, and linked it to approximate reversibility, resource theories, and the second law of thermodynamics in both quantum and generalized probabilistic frameworks (Buscemi et al., 2016, Minagawa et al., 2022).

1. Definition and Mathematical Formulation

Let $\rho_A \in D(\mathcal{H}_A)$ be an input state and $\{\mathcal{N}^x\}$ a quantum instrument, where each $\mathcal{N}^x: \mathcal{L}(\mathcal{H}_A) \rightarrow \mathcal{L}(\mathcal{H}_{A'})$ is completely positive and trace non-increasing, and $\sum_x \mathcal{N}^x$ is trace-preserving. The probability of outcome $x$ is $p(x) = \operatorname{Tr}[\mathcal{N}^x(\rho_A)]$, and the post-measurement state is $\rho_{A'}^x = \mathcal{N}^x(\rho_A)/p(x)$. Using the von Neumann entropy $H(\sigma) := -\operatorname{Tr}[\sigma\log\sigma]$, Groenewold’s information gain is

$$G(\rho_A, \{\mathcal{N}^x\}) := H(\rho_A) - \sum_x p(x)\, H(\rho_{A'}^x).$$

For quantum measurements represented by completely positive instruments, this directly reflects the average entropy reduction due to the measurement process (Buscemi et al., 2016).
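As a concrete sanity check of the definition, the sketch below (illustrative helper names, NumPy, base-2 entropies) computes $G$ for an efficient instrument given by one Kraus operator per outcome, $\mathcal{N}^x(\rho) = K_x \rho K_x^\dagger$:

```python
import numpy as np

def vn_entropy(rho):
    # von Neumann entropy in bits, dropping numerically zero eigenvalues
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def groenewold_gain(rho, kraus_ops):
    # G = H(rho) - sum_x p(x) H(rho'_x) for an efficient instrument with
    # one Kraus operator per outcome: N^x(rho) = K_x rho K_x^dagger
    gain = vn_entropy(rho)
    for K in kraus_ops:
        post = K @ rho @ K.conj().T
        p = np.trace(post).real
        if p > 1e-12:
            gain -= p * vn_entropy(post / p)
    return gain

# |+><+| measured projectively in the Z basis: the input is pure and the
# post-measurement states are pure, so G = 0 despite the random outcome.
plus = np.full((2, 2), 0.5)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

# Maximally mixed qubit, same instrument: G = H(I/2) = 1 bit.
mixed = np.eye(2) / 2
print(groenewold_gain(plus, [P0, P1]), groenewold_gain(mixed, [P0, P1]))
```

Note that the $|+\rangle$ example yields zero gain even though the outcome is fully random: the input entropy is already zero, so there is nothing to reduce.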

In more general probabilistic theories, given an instrument $s = \{s_j\}$ and state $\rho$, the Groenewold–Ozawa information gain is

$$I_G(\rho, s) := H(\rho) - \sum_j e_j(\rho)\, H\big(s_j(\rho)/e_j(\rho)\big),$$

where $e_j(\rho)$ is the probability of outcome $j$ and $H(\cdot)$ is the unique “spectral entropy,” i.e., the minimal uncertainty compatible with the operational structure and the second law (Minagawa et al., 2022).

2. Historical Development and Operational Foundations

Groenewold originally defined information gain as “entropy reduction,” $H_\text{before} - \langle H_\text{after}\rangle$, interpreting it as the information acquired from a quantum measurement. He noted, however, that for general (inefficient) measurements this quantity can become negative [G71 in (Buscemi et al., 2016)]. The modern approach, developed by Buscemi et al. and others, reframes information gain as the quantum mutual information $I(R;X)_\sigma$, which coincides with $G(\rho, \{\mathcal{N}^x\})$ for efficient measurements and is always non-negative by strong subadditivity (Buscemi et al., 2016).
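Groenewold’s negativity remark is easy to reproduce numerically: an inefficient instrument that discards a pure input and outputs a maximally mixed state has negative entropy reduction. A minimal sketch, using base-2 entropies:

```python
import numpy as np

def vn_entropy(rho):
    # von Neumann entropy in bits, dropping numerically zero eigenvalues
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# Inefficient single-outcome instrument: N(rho) = Tr[rho] * I/2, i.e. the
# input is discarded and replaced by a maximally mixed qubit. For a pure
# input, H_before = 0 while <H_after> = 1 bit, so G = -1.
rho_pure = np.diag([1.0, 0.0])
post = np.trace(rho_pure).real * np.eye(2) / 2
G = vn_entropy(rho_pure) - vn_entropy(post)
print(G)  # -> -1.0
```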

Recent operational reconstructions of quantum theory and thermodynamic arguments have generalized the concept further, recognizing Groenewold’s information gain as a universal monotone for measurement processes, rooted in the second law and physical implementability (Minagawa et al., 2022). In these frameworks, information gain is directly related to extractable work in feedback cycles involving semipermeable membranes and ideal gases, leading to a general spectral entropy functional and a thermodynamic interpretation of measurement-induced entropy reduction.

3. Approximate Reversibility and Quantitative Bounds

A key result links small Groenewold information gain with approximate reversibility of the measurement process. In the absence of quantum side information (QSI), suppose $|\varphi\rangle_{RA}$ purifies $\rho$ and $\{\mathcal{N}^x\}$ is an efficient instrument. Defining the post-measurement state $\sigma_{RX}$,

$$I(R;X)_\sigma \geq -\log F(\sigma_{RX}, \sigma_R \otimes \sigma_X),$$

where $F(\cdot, \cdot)$ is the fidelity [(Buscemi et al., 2016), Theorem III.1]. In particular, for each outcome $x$ there exists an isometry $U^x$ such that

$$I(R;X)_\sigma \geq -2 \log \left[ \sum_x p(x) \sqrt{ F\big(U^x(\varphi_{RA'}^{\rho_x}), \varphi_{RA}^\rho\big) } \right].$$

A small information gain, i.e., a small $G(\rho, \{\mathcal{N}^x\})$, implies the measurement can be approximately reversed on average, formalized via recovery channels with high average fidelity for each measurement outcome. This is operationally significant both for measurement-based information processing and for error correction.
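The first inequality can be checked numerically in a small commuting example. Assume a maximally mixed qubit purified by a Bell state and a weak $Z$ measurement (the strength $\eta = 0.8$ is an arbitrary choice); all operators involved are diagonal, so the fidelity reduces to a classical Bhattacharyya overlap, and base-2 logarithms are used throughout:

```python
import numpy as np

def h_bits(p):
    # Shannon entropy in bits of a probability vector
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Reference R and system A share a Bell state; A undergoes a weak Z
# measurement with Kraus operators K_x = diag(sqrt(eta), sqrt(1-eta))
# and its flip. For a Bell input, the conditional reference states are
# proportional to K_x^2, hence diagonal.
eta = 0.8
p = np.array([0.5, 0.5])                       # outcome probabilities
rho_R = [np.array([eta, 1 - eta]),             # spectrum of rho_R^0
         np.array([1 - eta, eta])]             # spectrum of rho_R^1
sigma_R = p[0] * rho_R[0] + p[1] * rho_R[1]    # marginal, = (1/2, 1/2)

# I(R;X)_sigma = H(sigma_R) - sum_x p(x) H(rho_R^x)
mutual = h_bits(sigma_R) - sum(px * h_bits(r) for px, r in zip(p, rho_R))

# sigma_RX and sigma_R (x) sigma_X are diagonal in the same basis, so the
# fidelity is a classical Bhattacharyya overlap of the joint distributions.
root_fid = sum(px * np.sum(np.sqrt(r * sigma_R)) for px, r in zip(p, rho_R))
bound = -np.log2(root_fid ** 2)

print(mutual, bound)
assert mutual >= bound        # the fidelity lower bound holds
```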

With quantum side information, analogous bounds hold, with $I(R;X|B)$ replacing $I(R;X)$ and the recovery map acting on the ancillary system [(Buscemi et al., 2016), Theorem III.2].

4. Monotonicity and Instrument Ordering

Groenewold–Ozawa information gain obeys a monotonicity principle with respect to an operational preorder on measurement instruments, known as Groenewold majorization. For a fixed $\rho$, $t \succeq_\rho s$ if there exists a stochastic matrix $p(j|k)$ with $s_j(\rho) = \sum_k p(j|k)\, t_k(\rho)$ for all $j$; that is, the output of $s$ can be simulated from $t$ by classical post-processing of outcomes.

Under this relation, information gain is monotone:

$$I_G(\rho, t) \geq I_G(\rho, s).$$

This result relies on the concavity of the spectral entropy and extends to all operational settings where the assumptions of spectrality and pure-state reversibility hold. Monotonicity formalizes the intuition that coarser measurements extract less information, and positions Groenewold’s information gain as a resource-theoretic monotone for the informativeness of measurement procedures (Minagawa et al., 2022).
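The post-processing order can be probed numerically. Below, a rank-1 projective measurement $t$ on a maximally mixed qutrit is coarse-grained into $s$ by merging two outcomes (one simple choice of stochastic matrix); everything is diagonal, so states are tracked as eigenvalue vectors, and the helper names are illustrative:

```python
import numpy as np

def h_bits(p):
    # Shannon entropy in bits of a (sub)probability vector
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def info_gain(rho, outputs):
    # I_G = H(rho) - sum_j e_j H(s_j(rho)/e_j), with e_j the weight of
    # the unnormalized output s_j(rho)
    g = h_bits(rho)
    for w in outputs:
        e = w.sum()
        if e > 1e-12:
            g -= e * h_bits(w / e)
    return g

# Fine instrument t: rank-1 projective measurement of a maximally mixed
# qutrit (diagonal throughout, so states are eigenvalue vectors).
rho = np.full(3, 1/3)
t_out = [np.array([1/3, 0, 0]), np.array([0, 1/3, 0]), np.array([0, 0, 1/3])]

# Coarser instrument s: post-process t with the stochastic matrix
# p(j|k) = [[1,0,0],[0,1,1]], i.e. merge outcomes 1 and 2.
s_out = [t_out[0], t_out[1] + t_out[2]]

print(info_gain(rho, t_out), info_gain(rho, s_out))  # fine >= coarse
assert info_gain(rho, t_out) >= info_gain(rho, s_out)
```

Here $I_G(\rho, t) = \log_2 3$ while merging outcomes pays an entropy penalty of $\tfrac{2}{3}$ bit, consistent with the monotonicity statement.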

5. Thermodynamic and Operational Interpretations

Groenewold’s information gain has both a thermodynamic and an operational meaning:

  • Thermodynamic: The net reduction in (spectral) entropy quantifies the maximal work extractable via feedback control, in units of $kT\ln 2$, when an ensemble is processed by the measurement instrument and the post-measurement states are used for further separation or mixing cycles. In this sense, information gain operationalizes the link between information-theoretic and thermodynamic descriptions of measurement, capturing the constraints the second law imposes on information extraction and feedback (Minagawa et al., 2022).
  • Operational: In quantum information theory, the quantity $I(R;X)_\sigma = H(\rho_A) - \sum_x p(x) H(\rho_{A'}^x)$ determines the optimal rate in measurement compression tasks. When $G(\rho,\{\mathcal{N}^x\})\approx 0$, measurement outcomes can be simulated without classical communication, as the receiver can prepare $\rho^{\otimes n}$ locally (Buscemi et al., 2016). With quantum side information, small information gain similarly implies trivial communication rates for simulating the measurement and its reversal.
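For scale, the $kT\ln 2$ conversion above turns one bit of information gain into an ideal extractable work; a back-of-envelope sketch (room temperature and an idealized Szilard-type feedback cycle are assumptions):

```python
import numpy as np

k_B, T = 1.380649e-23, 300.0             # J/K; room temperature assumed
gain_bits = 1.0                          # e.g. projectively measuring I/2
W_max = gain_bits * k_B * T * np.log(2)  # ideal feedback-extractable work
print(W_max)                             # ~2.87e-21 J per cycle
```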

6. Applications and Generalizations

Groenewold’s information gain applies to a wide spectrum of informational and physical tasks:

  • Quantum measurements: For a projective measurement in the eigenbasis of $\rho$, the entire entropy can be viewed as extracted information, i.e., $I_G(\rho,\{P_i\}) = S(\rho)$. For general POVMs and measurement instruments, it quantifies the trade-off between state disturbance and information extraction.
  • Resource theories: In both quantum and more general process theories, the monotonicity and universality properties allow $I_G(\rho,s)$ to serve as a resource monotone for analyzing the “power” of measurement strategies and their transformations under classical post-processing.
  • Beyond quantum theory: The structure extends to all operational frameworks with weak spectrality and pure-state reversibility, making Groenewold information gain a pivotal concept for analyzing measurement and uncertainty in generalized probabilistic theories (Minagawa et al., 2022).
  • Approximate error correction: Small information gain characterizes situations where the measurement can be approximately reversed, linking measurement disturbance, entropy inequalities, and quantum error correction (Buscemi et al., 2016).
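The first bullet is easy to verify directly: measuring a random density matrix projectively in its own eigenbasis leaves every post-measurement state pure, so the gain equals $S(\rho)$. A short numerical check (illustrative, base-2 entropies):

```python
import numpy as np

def vn_entropy_bits(rho):
    # von Neumann entropy in bits, dropping numerically zero eigenvalues
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# Random qutrit density matrix, then a projective (Lueders) measurement
# in its own eigenbasis: every post-measurement state is pure, so the
# average post-measurement entropy vanishes and I_G = S(rho).
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = M @ M.conj().T
rho = rho / np.trace(rho).real

gain = vn_entropy_bits(rho)
for v in np.linalg.eigh(rho)[1].T:
    P = np.outer(v, v.conj())          # eigenprojector P_i
    post = P @ rho @ P
    p = np.trace(post).real
    if p > 1e-12:
        gain -= p * vn_entropy_bits(post / p)

print(gain, vn_entropy_bits(rho))      # the two agree
```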

7. Limitations and Modern Perspective

While Groenewold’s original entropy reduction can be negative for inefficient (non-repeatable) instruments, the identification with quantum mutual information recovers non-negativity for efficient measurements. The modern perspective sharpens the operational content by providing quantitative reversibility bounds and by positioning information gain as a universal instrument monotone grounded in thermodynamics and resource-theoretic paradigms.

A plausible implication is that Groenewold’s information gain not only characterizes the informativeness of quantum and probabilistic measurement but also serves as a bridge between foundational principles in quantum theory, thermodynamics, and general probabilistic models, underpinning results in measurement compression, feedback control, and error correction (Buscemi et al., 2016, Minagawa et al., 2022).

References (2)
