Disturbance-Based Discretization
- Disturbance-based discretization is a family of techniques in which the modeling and mitigation of disturbances directly informs the design of discrete representations across various domains.
- It optimizes tradeoffs between reliability, efficiency, and stability in areas such as information theory, control systems, quantum foundations, numerical analysis, and machine learning.
- The methodology leverages structured parameterizations and adaptive optimization to balance performance constraints with disturbance effects in practical applications.
Disturbance-based discretization refers to the family of methodologies in information theory, control, quantum foundations, computational mathematics, and machine learning in which the design and analysis of discrete representations or approximations are fundamentally informed by the presence, modeling, or mitigation of disturbances—interpreted as noise, leakage, non-idealities, or external influences. Across a range of domains, disturbance-based discretization impacts achievable system performance, architecture of codes and controllers, characterizations of robustness, and the structure of mathematical bounds. Methods under this term leverage both the nature and magnitude of disturbances to guide or constrain the discretization process, yielding representations or mechanisms that optimize tradeoffs among reliability, efficiency, privacy, and stability.
1. Disturbance-based Discretization in Information Theory
When communication over a channel is subject to disturbance constraints—such as limiting information leakage to unintended recipients—the rate at which information can be discretely transmitted must explicitly balance between reliable delivery and minimal disturbance.
A canonical framework is the “rate-disturbance region” for discrete memoryless channels with a disturbance constraint (DMC–1–DC) (Bandemer et al., 2011). For a joint input distribution p(u, x), the achievable set of rate-disturbance pairs (R, Rd) is characterized by mutual-information inequalities, with the reliable rate bounded above by I(X; Y) and the disturbance rate bounded below in terms of I(U; Z), where U is an auxiliary variable (“cloud center”) in a layered superposition or rate-splitting code. The achievable region quantifies the fundamental tradeoff between the reliable rate to receiver Y and the disturbance at observer Z. For the two-disturbance case (to Z₁ and Z₂), inner and outer bounds involve multiple mutual information inequalities and entropy expressions (see Theorems 3–5).
Crucially, the optimal encoding scheme is identical to the Han–Kobayashi scheme for the two-user interference channel: an architecture characterized by splitting the message, superposition coding, and, in multi-receiver cases, Marton coding (Bandemer et al., 2011). These insights extend to vector Gaussian (MIMO) channels, preserving the core structure of the tradeoff with the bounds expressed as log-determinants. The methodology is naturally ported to discrete memoryless interference channels and motivates extensions of Han–Kobayashi-like coding to more than two user pairs. The “disturbance” thus enters as an explicit factor modulating or constraining which forms of discrete coding (and their layered structure) are optimal.
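As a toy illustration of this tradeoff (not the Han–Kobayashi construction itself; the channels and distributions below are invented for the example), one can compute the reliable rate I(X; Y) to the intended receiver and the leakage I(X; Z) to the observer for a binary input sent through two binary symmetric channels of different quality:

```python
import numpy as np

def mutual_information(p_joint):
    """I(A;B) in bits for a joint pmf given as a 2-D array."""
    p_a = p_joint.sum(axis=1, keepdims=True)
    p_b = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log2(p_joint[mask] / (p_a @ p_b)[mask])).sum())

def bsc_joint(p_x, eps):
    """Joint pmf of (input, output) for a binary symmetric channel with crossover eps."""
    return np.array([[p_x[0] * (1 - eps), p_x[0] * eps],
                     [p_x[1] * eps,       p_x[1] * (1 - eps)]])

# X ~ Bernoulli(0.5) through a "clean" channel to the intended receiver Y
# and a noisier channel to the unintended observer Z (parameters invented).
p_x = np.array([0.5, 0.5])
rate        = mutual_information(bsc_joint(p_x, 0.05))  # I(X;Y), reliable rate
disturbance = mutual_information(bsc_joint(p_x, 0.25))  # I(X;Z), leakage to Z
```

Shaping the codebook (e.g., via the auxiliary variable U) then trades part of the reliable rate for a reduction in this leakage term.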
2. Disturbance-based Discretization in Control and Robustness
Robust control design for systems under unknown-but-bounded disturbances relies on disturbance-based discretization to synthesize feasible state-space or input sets whose induced output reachability matches constraints as closely as possible. In discrete-time linear systems, the disturbance set (typically a polytope denoted 𝒲) is not a given but is computed—often via optimization constrained by robust invariance conditions (Mulagaleti et al., 2021; Mulagaleti et al., 2023). This reverse-engineering paradigm leads to a bilevel problem:
- The inner problem computes a minimal robust positive invariant (mRPI) set for a candidate disturbance set by solving a fixed-point equation on the support functions of the polytopes.
- The outer problem minimizes the Hausdorff distance (with inclusion constraints) between the output-constraint set 𝒴 and the image of the invariant set under the system output map.
Parameterizing the disturbance and invariant sets (by, e.g., number and direction of hyperplanes) “discretizes” the uncertainty—which directly influences the size and shape of feasible output sets and thus the achievable control performance. The connection to reduced-order reference governors further grounds this process in a hierarchy of disturbance-induced discretization and constraint satisfaction.
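As a minimal sketch of the inner (mRPI) step, assuming a box disturbance set and a Schur-stable A (the matrices below are illustrative and not from the cited work, which operates on general polytopes via support functions), the invariant set can be bounded by summing the geometric series Σₖ |Aᵏ| w_max:

```python
import numpy as np

def mrpi_box_bound(A, w_max, tol=1e-12, max_iter=10000):
    """Approximate the minimal robust positive invariant set of x+ = A x + w,
    |w_i| <= w_max_i, by a box with per-coordinate half-widths sum_k |A^k| w_max."""
    n = A.shape[0]
    Ak = np.eye(n)
    term = np.abs(Ak) @ w_max          # |A^0| w_max
    bound = np.zeros(n)
    for _ in range(max_iter):
        bound += term
        Ak = Ak @ A
        term = np.abs(Ak) @ w_max
        if term.max() < tol:           # remaining tail is negligible
            break
    return bound

A = np.array([[0.5, 0.2],
              [0.0, 0.4]])             # Schur-stable dynamics (example values)
w_max = np.array([0.1, 0.1])           # candidate box disturbance set
bound = mrpi_box_bound(A, w_max)       # box half-widths containing the mRPI set
```

In the bilevel scheme, an outer optimizer would then adjust w_max (or a general polytope 𝒲) so that the image of this invariant set under the output map fits the constraint set 𝒴 as tightly as possible.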
In infinite-dimensional systems, such as boundary-controlled hyperbolic PDEs with quantized (discrete) measurements, the quantization error acts as a discrete disturbance. A Lyapunov-based disturbance-to-state estimate yields a robust bound of the standard ISS form

‖x(t)‖ ≤ β(‖x(0)‖, t) + γ(‖d‖∞),

with β a class-𝒦𝓛 function and γ a class-𝒦 gain, where d encapsulates the disturbance from quantization. Control design then exploits this bound to select quantization and discretization parameters that guarantee practical stability and ultimate boundedness (Tanwani et al., 2017). This methodology constitutes a blueprint for incorporating discretization errors into robust control synthesis, effectively absorbing their effect via system attenuation.
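A small finite-dimensional simulation sketch of this idea (the scalar system and gains are invented for illustration, not taken from the cited PDE setting): the quantization error enters as a bounded disturbance, and halving the quantization step Δ roughly halves the ultimate bound, consistent with a disturbance-to-state estimate:

```python
import numpy as np

def simulate(delta, steps=300, a=1.2, b=1.0, k=0.9, x0=5.0):
    """Run x+ = a x + b u with u = -k * q(x), where q is a uniform quantizer
    of step delta, and return the trailing max |x| as an ultimate-bound estimate."""
    q = lambda x: delta * np.round(x / delta)
    xs, x = [], x0
    for _ in range(steps):
        u = -k * q(x)                 # control uses only the quantized state
        x = a * x + b * u             # |x - q(x)| <= delta/2 acts as disturbance d
        xs.append(abs(x))
    return max(xs[-50:])              # ultimate bound after transients decay

coarse = simulate(delta=0.5)
fine   = simulate(delta=0.25)         # smaller quantization disturbance
```

The open-loop-unstable state is not driven to zero (the quantizer is blind near the origin) but is ultimately bounded by a quantity scaling with Δ, mirroring the γ(‖d‖∞) term.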
3. Disturbance-based Discretization in Quantum Foundations and Coherence
In theoretical physics, disturbance-based discretization distinguishes physical theories according to the interplay between measurement disturbance and information gain. A pivotal result is that any discrete theory (one whose state space is a polytope, i.e., has finitely many pure states) satisfying the postulate “no information gain implies no disturbance” is necessarily classical. If the convex state space is non-simplicial (non-classical discrete), it is impossible to define a measurement transformation T on the state space that leaves all “certain” states (those yielding a deterministic outcome) undisturbed; instead, the disturbance admits a strictly positive lower bound ε > 0 (Pfister et al., 2012). This rules out non-classical discrete theories (such as the PR-box or discrete approximations to the quantum Bloch ball) operating without measurable disturbance, even when only approximate non-disturbance is required.
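The quantum (Bloch-ball) instance of this phenomenon can be illustrated numerically; this is a sketch of the familiar quantum case, not the polytope-theoretic argument of the cited result. A projective Z-basis measurement channel fixes the “certain” states |0⟩⟨0| and |1⟩⟨1| but necessarily disturbs any coherent superposition:

```python
import numpy as np

def measure_z(rho):
    """Measurement-induced channel T(rho) = sum_i P_i rho P_i for the Z basis."""
    P0 = np.array([[1, 0], [0, 0]], dtype=complex)
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)
    return P0 @ rho @ P0 + P1 @ rho @ P1

def trace_distance(r, s):
    """Disturbance measured as (1/2) ||r - s||_1."""
    eigs = np.linalg.eigvalsh(r - s)
    return 0.5 * float(np.abs(eigs).sum())

ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_plus = np.outer(ket_plus, ket_plus.conj())   # coherent superposition |+><+|
rho_zero = np.diag([1.0, 0.0]).astype(complex)   # "certain" state |0><0|

d_plus = trace_distance(rho_plus, measure_z(rho_plus))  # strictly positive
d_zero = trace_distance(rho_zero, measure_z(rho_zero))  # zero disturbance
```

The strictly positive value of d_plus plays the role of the lower bound ε for this state, while the certain state passes through undisturbed.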
For quantum macroscopicity, a disturbance-based measure of coherence is defined via the distance between a state ρ and its post-coarse-grained-measurement state, schematically M_σ(ρ) = D(ρ, Φ_σ(ρ)), where Φ_σ is the measurement-induced channel with finite precision σ. This measure is upper-bounded by an expression involving the Wigner–Yanase–Dyson skew information,

I_α(ρ, X) = Tr(ρX²) − Tr(ρ^α X ρ^{1−α} X),

thus relating disturbance from measurement (discretization) to quantum macroscopic coherence and fragility to decoherence (Kwon et al., 2016).
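A sketch of the skew-information side of this bound for a single qubit, using α = 1/2 and the observable σ_z (the σ-dependent constants of the actual bound are omitted; states and observable are chosen for illustration):

```python
import numpy as np

def matrix_power(rho, a):
    """rho^a for a positive semidefinite matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)          # guard against tiny negative eigenvalues
    return (V * w**a) @ V.conj().T

def skew_information(rho, X, alpha=0.5):
    """Wigner-Yanase-Dyson skew information I_alpha(rho, X)."""
    t1 = np.trace(rho @ X @ X)
    t2 = np.trace(matrix_power(rho, alpha) @ X @ matrix_power(rho, 1 - alpha) @ X)
    return float(np.real(t1 - t2))

sz = np.diag([1.0, -1.0])              # observable sigma_z
plus = np.full((2, 2), 0.5)            # |+><+|, maximally coherent in the z basis
mixed = np.eye(2) / 2                  # incoherent maximally mixed state

si_coherent = skew_information(plus, sz)   # equals the variance for a pure state
si_mixed = skew_information(mixed, sz)     # zero: no coherence to disturb
```

For pure states the skew information reduces to the variance of X, so highly coherent (macroscopic) superpositions score high, consistent with their fragility under coarse-grained measurement.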
4. Disturbance-based Discretization in Numerical Analysis and Subspace Learning
In computational mathematics, adaptive discretization schemes are driven by the locations and statistics of disturbances in the coefficients of PDEs—spatial discontinuities modeled by stochastic jumps or random fields. For time-dependent advection-diffusion with discontinuous random coefficients, a pathwise sample-adapted finite element mesh is constructed to resolve random interfaces (jumps), using, e.g., a Matérn-type covariance operator for the Gaussian component and generalized inverse Gaussian sampling for jumps (Barth et al., 2019). Convergence analyses show that when the disturbance (jump height) is properly approximated and the mesh adapts to the resulting discontinuity, substantially improved convergence rates are achieved compared to non-adapted discretizations.
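A minimal sketch of pathwise sample-adapted meshing in one dimension, assuming the jump locations of a sampled coefficient path are available (the sampling distribution and mesh size below are illustrative, not the Matérn/generalized-inverse-Gaussian machinery of the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)

def adapted_mesh(jump_locations, n_uniform=16):
    """Insert sampled discontinuity positions into a uniform grid on [0, 1],
    so element interfaces align with the coefficient's jumps."""
    base = np.linspace(0.0, 1.0, n_uniform + 1)
    return np.unique(np.concatenate([base, np.sort(jump_locations)]))

jumps = rng.uniform(0.0, 1.0, size=3)   # sampled jump positions for this path
mesh = adapted_mesh(jumps)

# Every jump is now a mesh node, so no element straddles a discontinuity,
# which is the property the convergence analysis relies on.
```

Re-sampling the coefficient path yields a new mesh, i.e., the discretization itself is a function of the realized disturbance.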
In machine learning, disturbance-aware discretization is embodied in kernel methods for subspace classification: the Disturbance Grassmann kernel (DG kernel) averages kernel evaluations across all possible disturbed subspace instances (either via pseudo-Gaussian perturbations of basis vectors or fluctuations modeled as Dirichlet randomizations of singular values), leading to classifiers robust to the instability inherent in subspace representations (Hong et al., 2018). These techniques formalize the “lifting” of a discrete kernel by explicit noise integration in the dual.
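A Monte Carlo sketch of the disturbance-averaging idea behind such kernels, assuming Gaussian perturbations of orthonormal bases and the projection kernel ‖UᵀV‖_F² (the cited work derives closed-form expectations; sampling stands in for them here, and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def orth(A):
    """Orthonormal basis for the column span of A."""
    Q, _ = np.linalg.qr(A)
    return Q

def projection_kernel(U, V):
    """Projection kernel between subspaces spanned by orthonormal U, V."""
    return float(np.linalg.norm(U.T @ V, "fro") ** 2)

def dg_kernel_mc(U, V, sigma=0.1, n_samples=200):
    """Average the kernel over randomly disturbed versions of both subspaces."""
    vals = [projection_kernel(orth(U + sigma * rng.standard_normal(U.shape)),
                              orth(V + sigma * rng.standard_normal(V.shape)))
            for _ in range(n_samples)]
    return float(np.mean(vals))

U = orth(rng.standard_normal((10, 3)))   # two random 3-dim subspaces of R^10
V = orth(rng.standard_normal((10, 3)))
k_self = dg_kernel_mc(U, U)              # high similarity despite disturbance
k_cross = dg_kernel_mc(U, V)             # low similarity between unrelated subspaces
```

Averaging over disturbed instances smooths the kernel, so small instabilities in the estimated subspace no longer flip classification decisions.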
5. Disturbance-based Discretization in Data-driven and Autoencoder Systems
Modern autoencoder-based error-correcting code design for non-symbolic channels, such as DNA storage channels with insertion, deletion, and substitution (IDS) errors, requires bridging the gap between continuous (softmax/probabilistic) codeword representations used during training and the discrete sequences needed at inference. Disturbance-based discretization is achieved by adding an entropy-based penalty (disturbance) term to the training objective, of the form

L_H = −Σᵢ Σⱼ pᵢ(j) log pᵢ(j),

where pᵢ is the softmax distribution over symbols at codeword position i. Minimizing this term drives each code vector towards a one-hot encoding without detaching gradient flow, facilitating test-time application of non-differentiable discretization and reducing the domain gap (Guo et al., 10 Jul 2024). When coupled with a differentiable channel simulation (e.g., a transformer-based model), the autoencoder can jointly optimize for robust code generation tailored to the specific disturbance structure of the channel, yielding low nucleobase error rates in moderately high code-rate regimes.
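A sketch of the entropy penalty itself, assuming softmax logits over four symbols (e.g., the four nucleobases) at each codeword position; the shapes and magnitudes are illustrative, and a real implementation would use an autodiff framework rather than numpy:

```python
import numpy as np

def entropy_penalty(logits):
    """Sum over positions of H(p_i) = -sum_j p_i(j) log p_i(j), where p_i is
    the softmax distribution at codeword position i (numerically stabilized)."""
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum())

near_uniform = np.zeros((4, 4))        # 4 positions, 4 symbols: maximal entropy
near_onehot = 8.0 * np.eye(4)          # sharply peaked logits, near one-hot

h_uniform = entropy_penalty(near_uniform)   # large penalty
h_onehot = entropy_penalty(near_onehot)     # near-zero penalty
```

Because the penalty is differentiable everywhere, gradient descent on it sharpens the softmax outputs during training, so the hard discretization applied at test time changes the codewords only marginally.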
6. Disturbance-based Discretization in Reinforcement Learning and Predictive Control
Disturbance-aware discretization can also be realized within RL-based control of systems subject to significant, impulsive disturbances. For UAS (unmanned aircraft systems) subjected to impulsive “recoil” disturbances, supplying the RL agent with an artificial trigger signal discretizes the disturbance event in time, segmenting the episode into “pre-disturbance” and “post-disturbance” phases (Chakraborty et al., 10 Apr 2025). Policies trained with both the trigger and the disturbance (the predictive IT-policy) learn to anticipate and counteract the impulsive disturbance, outperforming baseline and purely reactive strategies in position-deviation and control-effort metrics. The presence of the predictive cue allows segmentation of the control task into discrete phases, enabling the agent to exploit the structure of the disturbance in its policy, effectively a temporal form of disturbance-based discretization.
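The trigger mechanism can be sketched as a simple observation augmentation, assuming the disturbance time is known to the training environment; the function name, warning horizon, and state layout are illustrative, not the cited paper's interface:

```python
import numpy as np

def make_observation(state, t, t_disturbance, t_warn=5):
    """Append a binary trigger that fires t_warn steps ahead of a known
    impulsive disturbance, segmenting the episode into pre/post phases."""
    trigger = 1.0 if 0 <= t_disturbance - t <= t_warn else 0.0
    return np.concatenate([state, [trigger]])

# Two steps before the disturbance at t = 20, the trigger is active,
# so the policy can begin a compensating maneuver before the impulse hits.
obs = make_observation(np.array([0.0, 1.0, -0.2]), t=18, t_disturbance=20)
```

A policy conditioned on this extra bit can behave differently in the two phases, which is exactly the temporal discretization the trigger provides.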
7. Broader Implications and Methodological Commonalities
Across these fields, disturbance-based discretization provides a systematic way to harness or suppress the effects of disturbances in discrete representations. Core architectural features include:
- Parameterizations where the “discreteness” of codes, sets, or kernels is coupled to explicit modeling or inclusion of disturbance effects.
- Control and tuning of the tradeoff between reliability (rate, accuracy, or stability) and disturbance (leakage, error, or quantization).
- Optimization methods that select discretization strategies (e.g., mesh, codeword entropy, polytopes) to minimize the impact of disturbances or tailor performance in the presence of bounded or stochastic uncertainty.
The methodology thus unifies a wide spectrum of techniques under a common principle: disturbance is never a mere afterthought but a central organizing principle guiding discretization. The result is a class of discretization schemes, estimators, and controllers that are robust, less conservative, and efficiently aligned with real-world performance objectives across communication, computation, control, and quantum systems.