
Local Entropy (LocalE): Theory & Applications

Updated 26 November 2025
  • Local Entropy (LocalE) is a framework that quantifies the complexity within localized neighborhoods, capturing fine-grained structure and regularity across diverse systems.
  • It is applied in areas such as Riemannian geometry, deep learning optimization, constraint satisfaction, dynamical systems, thermodynamics, and algebra, offering analytical insights and robust regularization.
  • LocalE enhances both theoretical and practical developments by supporting non-collapsing results in Ricci flow, promoting wide, flat minima in neural networks, and facilitating scalable algorithms in combinatorial optimization.

Local Entropy (LocalE) encompasses a family of concepts, methodologies, and variational frameworks that localize entropy in geometric, thermodynamic, algebraic, information-theoretic, or machine learning settings. The core idea is to quantify the entropy of a system, measure, or function not globally but within local neighborhoods, subspaces, or domains, thereby capturing local structure, complexity, or regularity. The localization can be spatial (e.g., Riemannian domains), in parameter space (e.g., neural network weights), with respect to measure-theoretic or algorithmic neighborhoods, or along a dynamical evolution. The unifying feature is a local entropy functional capable of providing refined regularization, structural insight, or analytically tractable control in a variety of advanced mathematical, physical, and computational contexts.

1. Local Entropy in Riemannian Geometry and Ricci Flow

Bing Wang's formalism for local entropy on a Riemannian manifold provides functionals $\boldsymbol\mu(\Omega,g,\tau)$ and $\boldsymbol\nu(\Omega,g,\tau)$, where $\Omega\subset M$ is a bounded domain, $g$ is a Riemannian metric, and $\tau>0$ is a scale parameter. The key definitions are

$$\mathscr S(\Omega) = \Big\{\varphi\in W_0^{1,2}(\Omega)\ \Big|\ \varphi\geq 0,\ \int_\Omega \varphi^2\,dv=1\Big\}$$

$$\mathcal W^{(\boldsymbol a)}(\Omega,g,\varphi,\tau) = -m-\frac m2\ln(4\pi\tau) + \int_\Omega\Big\{\tau\big(\boldsymbol a\varphi^2+4|\nabla\varphi|^2\big) - 2\varphi^2\ln\varphi\Big\}\,dv$$

where $m = \dim M$, and

$$\boldsymbol\mu^{(\boldsymbol a)}(\Omega,g,\tau) = \inf_{\varphi\in\mathscr S(\Omega)}\mathcal W^{(\boldsymbol a)}(\Omega,g,\varphi,\tau)$$

Perelman's monotonicity formula is thus localized: $\boldsymbol\mu$ and $\boldsymbol\nu$ inherit almost all rigidity and monotonicity properties of their global analogues but are sensitive only to the chosen domain $\Omega$ (Wang, 2017).
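As a model-case sanity check (an illustration, not taken from the cited papers): on flat $\mathbb R^m$ with $\boldsymbol a = 0$, heuristically taking $\Omega = \mathbb R^m$ so that the global Perelman functional is recovered, the $L^2$-normalized Gaussian $\varphi(x) = (4\pi\tau)^{-m/4}\,e^{-|x|^2/(8\tau)}$ gives

$$\int \tau\,4|\nabla\varphi|^2\,dv = \frac m2, \qquad \int -2\varphi^2\ln\varphi\,dv = \frac m2\ln(4\pi\tau) + \frac m2,$$

so that $\mathcal W^{(0)} = -m - \frac m2\ln(4\pi\tau) + \frac m2 + \frac m2\ln(4\pi\tau) + \frac m2 = 0$, consistent with flat space carrying zero entropy at every scale.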

Principal properties:

  • Monotonicity under inclusion: $\Omega_1\subsetneq\Omega_2$ implies $\boldsymbol\mu^{(\boldsymbol a)}(\Omega_1,\tau) > \boldsymbol\mu^{(\boldsymbol a)}(\Omega_2,\tau)$ and $\boldsymbol\nu^{(\boldsymbol a)}(\Omega_1,\tau) \ge \boldsymbol\nu^{(\boldsymbol a)}(\Omega_2,\tau)$
  • Comparison to volume ratios under scalar or Ricci curvature bounds
  • Continuity under exhaustion and boundary regularity properties for minimizers
  • Non-positivity: $\boldsymbol\nu^{(\boldsymbol a)}(\Omega,\tau)\leq 0$
  • Almost-monotonicity under Ricci flow: nearly preserved under mild curvature control.

Applications include no-local-collapsing and pseudo-locality theorems, where bounds on local entropy ensure uniform non-collapsing of volume ratios in Ricci flow under far weaker assumptions than Perelman's global results. A significant implication is the derivation of Gromov–Hausdorff convergence and geometric compactification for Kähler–Ricci flow on minimal projective manifolds of general type (Wang, 2017, Wang, 2020).

2. Local Entropy in Deep Learning and Optimization

Local entropy functionals serve as architecture-aware regularizers in deep learning. The canonical LocalE loss is

$$F_\tau(x) = -\log \int_{\mathbb R^d} \exp\big(-f(y)\big)\,\varphi_{x,\tau}(y)\,dy$$

where $f$ is the base loss and $\varphi_{x,\tau}$ is a Gaussian kernel centered at $x$ with scale $\tau$ (Trillos et al., 2019, Musso, 2021). Optimizing $F_\tau$ promotes minima that are wide and flat in parameter space, improving robustness and generalization.
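A minimal sketch of the estimator (the helper name and toy losses below are illustrative, not from the cited papers): $F_\tau$ can be approximated by Monte Carlo over Gaussian perturbations, and the estimate penalizes a sharp minimum relative to a wide one of equal depth.

```python
import numpy as np

def local_entropy_loss(f, x, tau, n_samples=1000, rng=None):
    """Monte Carlo estimate of F_tau(x) = -log E_{y ~ N(x, tau I)}[exp(-f(y))].

    f maps a parameter vector to a scalar base loss; the helper name and
    defaults are illustrative, not taken from the cited papers.
    """
    rng = np.random.default_rng() if rng is None else rng
    ys = x + np.sqrt(tau) * rng.standard_normal((n_samples, x.shape[0]))
    vals = np.array([f(y) for y in ys])
    m = vals.min()                       # stabilize the log-mean-exp
    return m - np.log(np.mean(np.exp(-(vals - m))))

# A sharp and a wide quadratic minimum of equal depth: LocalE prefers the wide one.
sharp = lambda y: 50.0 * np.sum(y ** 2)
wide = lambda y: 0.5 * np.sum(y ** 2)
x0 = np.zeros(10)
print(local_entropy_loss(sharp, x0, tau=0.1))  # larger F_tau: penalized
print(local_entropy_loss(wide, x0, tau=0.1))   # smaller F_tau: preferred
```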

Anisotropic and Partial Local Entropy

To exploit anisotropy in weight landscapes, partial local entropy focuses smoothing on selected weight subspaces. For a mask $U\in\{0,1\}^N$, local entropy is computed with respect to perturbations only in the coordinates where $U_i=1$, yielding strong empirical performance, especially when applied to deeper layers or critical subspaces of deep neural networks (Musso, 2020, Musso, 2021).
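A sketch of the masked variant under the same assumptions (names again illustrative): noise enters only the coordinates selected by $U$, which in a network corresponds to smoothing the chosen layers while leaving the remaining weights untouched.

```python
import numpy as np

def partial_local_entropy_loss(f, x, mask, tau, n_samples=1000, rng=None):
    """Partial LocalE: Gaussian noise only in coordinates where mask == 1.

    `mask` plays the role of the selection vector U described above;
    a sketch, not a faithful reproduction of any cited implementation.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = np.sqrt(tau) * rng.standard_normal((n_samples, x.shape[0]))
    ys = x + noise * mask                # perturb only the selected subspace
    vals = np.array([f(y) for y in ys])
    m = vals.min()                       # stabilize the log-mean-exp
    return m - np.log(np.mean(np.exp(-(vals - m))))

# Smooth only the last 5 of 15 weights, mimicking a deep "head" layer.
mask = np.concatenate([np.zeros(10), np.ones(5)])
f = lambda y: np.sum(y ** 2)
print(partial_local_entropy_loss(f, np.zeros(15), mask, tau=0.1))
```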

Key empirical findings:

  • Single-layer (especially deep layer) smoothing achieves better performance than isotropic (all-layers) smoothing.
  • Isotropic smoothing can degrade accuracy if applied too strongly.
  • In convolutional architectures, smoothing fully connected “head” layers improves early and late generalization; smoothing convolutional layers can be detrimental.
  • Layer-wise “temperature” (the variance of the gradients) decays with a universal power law, indicating a common “cooling” regime.
  • Scoping schedules, in which the smoothing parameter decays over training, yield performance comparable to state-of-the-art initializations.

Practical optimization uses two-step iterative algorithms (Monte Carlo posterior updates, moment-matching), either via SGLD or importance sampling (Trillos et al., 2019). The approach can be integrated with modern optimizers (SGD, Adam) and requires minimal computational overhead relative to backpropagation.
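A minimal sketch of one such two-step update, assuming the SGLD variant (function names and hyperparameters are illustrative, not a faithful reproduction of any cited algorithm): the inner loop samples the local Gibbs measure and tracks a running mean $\mu$, and the outer step uses the identity $\nabla F_\tau(x) = (x - \mu)/\tau$.

```python
import numpy as np

def locale_two_step(grad_f, x, tau, eta=0.1, sgld_steps=20,
                    sgld_lr=0.01, temp=1e-4, rng=None):
    """One two-step LocalE update (a sketch under stated assumptions).

    Inner loop: SGLD targets the local Gibbs measure
        p(y) ~ exp(-f(y) - |y - x|^2 / (2 tau)),
    keeping a running mean mu of the samples (Monte Carlo posterior update).
    Outer step: uses grad F_tau(x) = (x - mu) / tau.
    """
    rng = np.random.default_rng() if rng is None else rng
    y, mu = x.copy(), x.copy()
    for t in range(1, sgld_steps + 1):
        g = grad_f(y) + (y - x) / tau                # gradient of the tilted loss
        y = y - sgld_lr * g + np.sqrt(2 * sgld_lr * temp) * rng.standard_normal(x.shape)
        mu += (y - mu) / t                           # running average of samples
    return x - eta * (x - mu) / tau                  # outer parameter update

# Toy quadratic f(y) = |y|^2, so grad_f(y) = 2y.
x = np.ones(5)
for _ in range(50):
    x = locale_two_step(lambda y: 2.0 * y, x, tau=0.5)
print(np.linalg.norm(x))  # decreases toward the minimum at 0
```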

3. Local Entropy in Constraint Satisfaction and Large Deviation Theory

In combinatorial optimization and constraint satisfaction, local entropy quantifies the density of solutions in a neighborhood (e.g., a Hamming ball) around a reference configuration $\tilde x$. The local entropy is

$$\mathscr S_I(S) = -\mathcal F(S,\infty)$$

with $\mathcal F(S,y)$ the (quenched) free energy and $S$ the overlap with $\tilde x$ (Baldassi et al., 2015). For random CSPs, analysis via replica and cavity methods reveals regions of the solution space densely packed with solutions, which can be found via “entropy-driven Monte Carlo” (EdMC), a Metropolis search that prefers configurations of maximal local entropy.
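A brute-force toy sketch of the score that EdMC ascends (helper names are illustrative; practical implementations estimate the local entropy rather than enumerating the ball):

```python
import itertools, math, random

def satisfies(assign, clauses):
    """assign: tuple of 0/1 values; clauses: lists of signed 1-based variables."""
    return all(any((lit > 0) == bool(assign[abs(lit) - 1]) for lit in cl)
               for cl in clauses)

def local_entropy(x, clauses, dist):
    """Log of the number of solutions within Hamming distance `dist` of x
    (brute-force enumeration; feasible only for toy instances)."""
    n, count = len(x), 0
    for r in range(dist + 1):
        for flips in itertools.combinations(range(n), r):
            y = list(x)
            for i in flips:
                y[i] ^= 1
            count += satisfies(tuple(y), clauses)
    return -math.inf if count == 0 else math.log(count)

# Toy random 3-SAT instance; EdMC runs Metropolis on local_entropy(x, ...)
# instead of on the energy (the number of violated clauses).
random.seed(0)
n = 12
clauses = [[random.choice([-1, 1]) * v
            for v in random.sample(range(1, n + 1), 3)] for _ in range(30)]
x = tuple(random.randint(0, 1) for _ in range(n))
print(local_entropy(x, clauses, dist=3))
```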

Notable outcomes:

  • EdMC is highly effective in regions where standard simulated annealing fails (due to rough, glassy energy landscapes).
  • Large deviation and replica-symmetry breaking analyses explain the emergence of ultra-dense solution clusters.
  • Algorithmic enhancements (scoping of the locality parameter, collective flips) further accelerate exploration of high-entropy regions.

This approach yields scalable algorithms for hard problems in combinatorial optimization and provides insight into the geometry of solution spaces.

4. Local Entropy in Dynamical Systems and Thermodynamical Formalism

In smooth and symbolic dynamics, local topological entropy and its variants (e.g., neutralized entropy, translocal entropy) measure the exponential complexity of orbits or measure-theoretic neighborhoods at a point (Bis et al., 5 Dec 2024, Ovadia et al., 2023). For a continuous map $f:X\to X$, the Ye–Zhang local entropy is

$$h_{\mathrm{top}}(x) = \lim_{\varepsilon\to 0}\,\inf\Big\{\limsup_{n\to\infty}\frac1n\log S(n,\varepsilon,K) : K\subset X\ \text{closed},\ x\in K\Big\}$$

where $S(n,\varepsilon,K)$ counts $(n,\varepsilon)$-separated orbits in $K$. In many classes of systems this function is constant in $x$, motivating the introduction of “translocal” entropy $T_w(z)$, which probes exponentially shrinking neighborhoods, captures transient effects, and relates to local Lyapunov exponents: $T_w(z) = \big(1-\frac{w}{\bar\lambda(z)}\big)\,h_{\mathrm{top}}(f)$ for one-dimensional expanding maps, where $\bar\lambda(z)$ is the upper Lyapunov exponent (Bis et al., 5 Dec 2024).
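A sketch of the counting behind these definitions (illustrative code; a greedy construction only lower-bounds the maximal separated set): for the doubling map, the finite-$n$ estimate $\frac1n\log S(n,\varepsilon,X)$ drifts down toward $h_{\mathrm{top}} = \log 2$ as $n$ grows with $\varepsilon$ fixed.

```python
import numpy as np

def circle_dist(a, b):
    d = np.abs(a - b)
    return np.minimum(d, 1.0 - d)

def separated_count(f, points, n, eps):
    """Greedy lower bound on S(n, eps, X): orbits x, y are (n, eps)-separated
    if circle_dist(f^k x, f^k y) > eps for some 0 <= k < n."""
    orbits = []
    for x in points:
        orb = np.empty(n)
        for k in range(n):
            orb[k] = x
            x = f(x)
        orbits.append(orb)
    chosen = []
    for orb in orbits:
        if all(np.max(circle_dist(orb, c)) > eps for c in chosen):
            chosen.append(orb)
    return len(chosen)

# Doubling map on the circle, h_top = log 2 ~ 0.693; the estimates decrease
# toward it slowly (the finite-n excess is roughly log(1/eps)/n).
f = lambda x: (2.0 * x) % 1.0
grid = np.linspace(0.0, 1.0, 2048, endpoint=False)
for n in (4, 6, 8, 10):
    S = separated_count(f, grid, n, eps=0.2)
    print(n, np.log(S) / n)
```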

Measure-theoretic local entropies, such as Brin–Katok entropy and neutralized entropy, furnish pointwise rates of measure concentration in dynamically generated balls, and under ergodicity these quantities coincide almost everywhere (Ovadia et al., 2023).

These structures have direct applications for computing local dimension, verifying dimension formulas (Ledrappier–Young type), and establishing variational principles connecting local and global invariants (Ma et al., 2011, Sahlsten et al., 2011).

5. Local Entropy in Thermodynamic and Quantum Systems

Local entropy allows direct localization of classical and quantum thermodynamic quantities. In non-equilibrium independent-fermion systems, the local entropy at a point $x$ is

$$S(x) = -k_B \int d\omega\; g(\omega;x)\,\big[f(\omega;x)\ln f(\omega;x) + (1-f(\omega;x))\ln(1-f(\omega;x))\big]$$

where $g(\omega;x)$ is the local density of states and $f(\omega;x)$ the occupation function (Stafford et al., 2016).
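A numerical sketch of this integral (illustrative helper; units set by $k_B = 1$): for a flat local density of states and an equilibrium Fermi–Dirac occupation at temperature $T$, the result approaches the Sommerfeld value $(\pi^2/3)\,g\,T$.

```python
import numpy as np

def local_entropy_density(g, f, omegas, kB=1.0):
    """S(x) = -kB * integral dw g(w;x) [f ln f + (1-f) ln(1-f)],
    with g and f sampled on the uniform frequency grid `omegas`."""
    eps = 1e-300                       # guard against log(0) at full/empty levels
    s = f * np.log(f + eps) + (1.0 - f) * np.log(1.0 - f + eps)
    domega = omegas[1] - omegas[0]
    return -kB * np.sum(g * s) * domega

# Flat local density of states + equilibrium Fermi-Dirac occupation at T = 1.
omegas = np.linspace(-10.0, 10.0, 2001)
g = np.ones_like(omegas)
T, mu = 1.0, 0.0
f_eq = 1.0 / (1.0 + np.exp((omegas - mu) / T))
print(local_entropy_density(g, f_eq, omegas))  # ~ pi^2/3 = 3.29 (Sommerfeld)
```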

Key findings:

  • Local temperature and chemical potential are definable as entropy derivatives only near local equilibrium: $1/T(x) = \partial S/\partial E|_N$, $\mu(x)/T(x) = -\partial S/\partial N|_E$.
  • The first law applies as an inequality far from equilibrium: $S(x)\leq S_{\mathrm{eq}}(x)$.
  • The second law (maximal entropy principle) holds pointwise.
  • The third law is preserved: $S(x)\to 0$ as $T(x)\to 0$.

In nonequilibrium stochastic systems, local Shannon entropy is constructed from trajectory-counting ensembles:

$$S_{\mathrm{local}}(x,t) = -k_B \ln \frac{\Omega(\Lambda^x)}{\Omega(\Lambda)}$$

where $\Lambda^x$ is the collection of trajectories ending at $x$ at time $t$ and $\Omega$ counts trajectories (Jinwoo et al., 2014). This yields a strictly local dynamical entropy, generalizing classical fluctuation theorems (e.g., the Crooks and Jarzynski equalities) to the pointwise level.
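A toy sketch of the trajectory-counting construction (illustrative; an unbiased $\pm1$ random walk, where both $\Omega(\Lambda)$ and $\Omega(\Lambda^x)$ can be counted exactly with binomial coefficients):

```python
import math

def local_trajectory_entropy(x, t, kB=1.0):
    """S_local(x, t) = -kB ln( Omega(Lambda^x) / Omega(Lambda) ) for an
    unbiased +-1 random walk started at 0: Omega(Lambda) = 2^t trajectories
    in total, of which C(t, (t+x)/2) end at x after t steps. Toy model,
    illustrative of the trajectory-counting construction."""
    if (t + x) % 2 or abs(x) > t:
        return math.inf                # no trajectory ends at x
    n_x = math.comb(t, (t + x) // 2)
    return -kB * (math.log(n_x) - t * math.log(2.0))

# Rarer endpoints carry higher local entropy (larger surprisal).
for x in (0, 4, 10):
    print(x, local_trajectory_entropy(x, t=10))
```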

Quantum field theoretic refinements relate the curvature (second derivative) of relative entropy for localized charged states to local energy densities, with direct implications for Bekenstein bounds, Quantum Null Energy Condition, and operator algebraic ANEC (Longo, 2018).

6. Local Entropy in Algebra and Homological Structures

In commutative algebra, local entropy quantifies the exponential growth of complexity under endomorphisms of local rings. For a Noetherian local ring $(R,\mathfrak m)$ and an endomorphism $\phi$, the local entropy is defined as

$$h_{\mathrm{loc}}(\phi) = \lim_{n\to\infty} \frac1n\,\log\operatorname{length}_R\big(R/\phi^n(\mathfrak q)R\big)$$

for any $\mathfrak m$-primary ideal $\mathfrak q$ (Majidi-Zolbanin et al., 2016). This quantity is well-defined and bounded above by the associated category-theoretic entropy.

Crucial results:

  • If $R$ is regular, $h_{\mathrm{loc}}(\phi) = h_{\mathrm{cat}}(\phi)$.
  • For flat Cohen–Macaulay extensions, $h_{\mathrm{loc}}$ is additive.
  • In positive characteristic, for the Frobenius morphism, $h_{\mathrm{loc}} = \dim R\cdot\log p$ (see the sketch after this list).
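A toy check of the Frobenius case (illustrative; it assumes $R = k[x_1,\dots,x_d]$ localized at $\mathfrak m = (x_1,\dots,x_d)$, where $\phi^n(\mathfrak m)R$ is generated by the $p^n$-th powers of the variables):

```python
import math

def frobenius_entropy_check(p, d, n_max=5):
    """Char-p polynomial ring in d variables: Frobenius sends x_i to x_i^p,
    so phi^n(m)R = (x_1^{p^n}, ..., x_d^{p^n}) and
    length_R(R / phi^n(m)R) = p^(n*d), counting the monomials with all
    exponents < p^n. The estimate (1/n) log length equals d * log p."""
    target = d * math.log(p)
    for n in range(1, n_max + 1):
        length = p ** (n * d)
        print(n, math.log(length) / n, target)

frobenius_entropy_check(p=3, d=2)   # both columns: 2 log 3 = 2.197...
```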

There is also an asymptotic Euler–characteristic formula for homomorphic images of regular local rings, connecting local entropy with the growth rate of graded module invariants.

7. Applications, Extensions, and Future Directions

Local entropy is foundational for non-collapsing and pseudo-locality results in geometric flows, robust regularization schemes in deep learning, efficient solution sampling in CSPs, dimension and multifractal analysis in geometric measure theory, and pointwise thermodynamic laws in both classical and quantum regimes.

Directions for future work include:

  • Adaptive anisotropic smoothing in machine learning, leveraging dynamic estimates of local curvature or temperature (Musso, 2020, Musso, 2021).
  • Further refinement of entropy functionals for singular spaces, e.g., in Kähler geometry and geometric analytic singularity theory.
  • More precise variational principles for random dynamical systems and local entropy in stochastic processes (Ma et al., 2011, Bis et al., 5 Dec 2024).
  • Algorithmic exploitation of local entropy estimates in high-dimensional and nonconvex optimization, especially for large-scale neural architectures.

Local entropy provides a versatile, adaptable, and unifying principle across geometric analysis, information theory, statistical mechanics, machine learning, and algebra, enabling localized control and analysis of complexity, regularization, and invariant measures.
