Self-Similar Variables in Dynamical Systems
- Self-similar variables are non-dimensional representations that capture scaling invariance, reducing PDEs to simpler ODEs or steady forms.
- They are applied across deterministic, stochastic, and data-driven models, enabling universal scaling functions and facilitating parameter estimation.
- SSV methods support advances in machine learning, astrophysics, and finance by simplifying complex systems into predictive, scalable frameworks.
Self-similar variables (SSV) provide a rigorous framework for representing solutions to dynamical systems, equations, or stochastic processes that exhibit invariance under scaling of space, time, or other independent variables. The SSV formalism harnesses scaling symmetries to reduce the effective dimensionality of a system, yielding universal scaling functions or profiles whose evolution is governed by reduced forms (such as ODEs or steady PDEs) that encode the underlying physical or probabilistic scaling structure. Self-similar variables underpin a broad range of developments in mathematical physics, probability, data-driven discovery, and applied modeling.
1. Mathematical Foundations of Self-Similar Variables
A function, field, or process is said to possess self-similarity if it can be represented in the canonical scaling form
$$u(x, t) = t^{-\beta} f(\xi), \qquad \xi = \frac{x}{t^{\alpha}},$$
where $\alpha$ is the spatial similarity exponent, $\beta$ is the amplitude (temporal) exponent, and $f$ is a scaling function depending only on the non-dimensional self-similar variable $\xi$ (Sekimoto et al., 2012). The exponents are often determined by dimensional analysis or directly from the invariances of the governing equations.
For example, in the heat equation $u_t = D u_{xx}$, the self-similar form arises with $\alpha = 1/2$ and $\beta$ determined by a constraint (e.g., mass conservation fixes $\beta = 1/2$). The reduction to self-similar variables transforms the PDE into an ODE for the profile $f(\xi)$, which can be analyzed in various regimes, yielding rapidly decaying, polynomially growing, or algebraic-tail solutions depending on initial and boundary data (Sekimoto et al., 2012, Wang et al., 31 Jan 2026).
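This reduction can be verified numerically. The sketch below (illustrative, not taken from the cited works) samples the fundamental solution of the heat equation at two widely separated times and confirms that the rescaled profiles collapse onto a single universal Gaussian in the variable $\xi = x/\sqrt{4Dt}$:

```python
import numpy as np

def heat_kernel(x, t, D=1.0):
    """Fundamental solution of u_t = D u_xx (unit total mass)."""
    return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

xi = np.linspace(-3.0, 3.0, 201)          # self-similar variable xi = x / sqrt(4 D t)
profiles = []
for t in (0.5, 8.0):                      # two widely separated times
    x = xi * np.sqrt(4.0 * t)             # map xi back to physical x at time t
    profiles.append(np.sqrt(t) * heat_kernel(x, t))   # amplitude rescaling t^beta u

# Both rescaled snapshots coincide with the universal profile f(xi).
f_exact = np.exp(-xi**2) / np.sqrt(4.0 * np.pi)
```

A failed collapse (rescaled profiles that do not overlay) signals either wrong exponents or the absence of a self-similar regime.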
2. Generalizations: Multivariate and Stochastic Self-Similarity
The concept of self-similar variables generalizes to stochastic processes and multivariate settings. A multivariate process $\{X(t)\}_{t \ge 0}$ is called self-similar with exponent matrix $H$ if
$$\{X(ct)\}_{t \ge 0} \overset{d}{=} \{c^{H} X(t)\}_{t \ge 0} \quad \text{for all } c > 0,$$
with $H$ typically diagonal, but potentially non-scalar in coupled systems (Lucas et al., 2023). This leads to scaling relations for multiscale covariance/eigenstructures, enabling parameter estimation in real-world datasets such as multi-channel EEG, via wavelet spectral regressions.
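In the scalar case, self-similarity implies that increment variances scale as a power of the lag, which yields a simple regression estimator for the exponent. The sketch below is a simplified stand-in for the wavelet spectral regressions used in the cited work (simulated Brownian motion replaces EEG data) and recovers $H \approx 1/2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard Brownian motion: self-similar with scalar exponent H = 1/2.
n = 2**18
X = np.cumsum(rng.standard_normal(n))

# Self-similarity implies Var[X(t+s) - X(t)] ~ s^{2H}: regress the
# log-variance of increments on the log of the scale to estimate H.
scales = 2**np.arange(1, 9)
log_var = [np.log(np.var(X[s:] - X[:-s])) for s in scales]
slope, _ = np.polyfit(np.log(scales), log_var, 1)
H_hat = slope / 2.0
```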
For Gaussian processes, precise necessary and sufficient conditions for the existence of small-scale limits are captured through specific scaling forms of the covariance,
$$\lim_{\varepsilon \to 0} \frac{\operatorname{Cov}\bigl(X(\varepsilon s), X(\varepsilon t)\bigr)}{V(\varepsilon)^{2}} = \frac{1}{2}\left(s^{2H} + t^{2H} - |s - t|^{2H}\right),$$
with the regularly varying function $V$ encoding further scale behavior, and the small-scale limits yielding fractional Brownian motion (Skorniakov, 2017).
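For fractional Brownian motion itself the scaling is exact at all scales, not only in the small-scale limit: its covariance satisfies $R_H(\varepsilon s, \varepsilon t) = \varepsilon^{2H} R_H(s, t)$. A quick numerical check (illustrative only):

```python
def fbm_cov(s, t, H):
    """Covariance of fractional Brownian motion with Hurst index H."""
    return 0.5 * (s**(2 * H) + t**(2 * H) - abs(s - t)**(2 * H))

H, eps = 0.7, 1e-3
s, t = 1.3, 2.1
lhs = fbm_cov(eps * s, eps * t, H)        # covariance at small scales
rhs = eps**(2 * H) * fbm_cov(s, t, H)     # V(eps)^2 = eps^{2H} rescaling
```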
3. Self-Similar Variables in PDEs and Dynamical Systems
Self-similar variable representations are foundational in both linear and nonlinear PDEs. The SSV ansatz is systematically derived by introducing non-dimensional combinations of independent variables that absorb the scaling, yielding a reduced form of the original PDE system:
- Diffusion/Parabolic PDEs: SSV provides scaling functions for both late-time (Gaussian-like) and early-time (algebraically growing) transients, and further classifies long-range algebraic tails (Sekimoto et al., 2012).
- Kinetic Models: In multi-population or transport-dominated systems (e.g., the Fradkov model for grain growth), the SSV framework yields an infinite, coupled system for self-similar profiles, incorporating dimensional constraints and topological couplings (Herrmann et al., 2011).
- Hydrodynamic Flows and Cosmology: Using SSV, all Newtonian analogs of homogeneous isotropic Friedmann dust universes (k = 0, ±1) are derived as self-similar solutions to the spherically symmetric fluid equations, classifying universes according to the scaling index (Sanyal et al., 2021).
- Shock formation in nonlinear hyperbolic PDEs: Local geometric modulation and SSV yield asymptotic self-similar blow-up profiles (e.g., for 2D isentropic Euler, the 2D self-similar Burgers equation profiles describe emergent singularity structure) (Su, 2023).
- Nonlinear Wave and Rogue Wave Statistics: SSV-driven reduction reveals universal parabolic event profiles and scaling laws for the probability density of extremes in systems governed by the nonlinear Schrödinger equation (Liang et al., 2019).
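The reduction mechanism itself can be verified symbolically. As a concrete instance (a standard textbook example, not drawn from the cited works), the porous medium equation $u_t = (u^m)_{xx}$ with $m = 2$ has similarity exponents $\alpha = \beta = 1/(m+1) = 1/3$, and the resulting Barenblatt profile solves the reduced ODE; `sympy` confirms the full PDE is satisfied:

```python
import sympy as sp

x, t, C = sp.symbols('x t C', positive=True)

# Barenblatt self-similar solution of u_t = (u^2)_xx: similarity variable
# xi = x / t**(1/3) and profile F(xi) = C - xi**2 / 12 (inside its support).
xi = x / t**sp.Rational(1, 3)
u = t**sp.Rational(-1, 3) * (C - xi**2 / 12)

# Residual of the PDE: zero iff the self-similar ansatz is an exact solution.
residual = sp.simplify(sp.diff(u, t) - sp.diff(u**2, x, 2))
```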
4. Probabilistic and Statistical Extensions: Random Self-Similarity
The "casual stability" framework generalizes classical notions of stability by admitting normalization by random variables or transformations at each aggregation level:
- Additive Systems: A law is casually stable if, under normalized sums of independent copies (constructed via characteristic function-based normalizations), the overall law is invariant. This generalizes infinite divisibility and encompasses strictly stable, tempered stable, and other leptokurtic laws (Klebanov et al., 2014).
- Multiplicative, Minimum/Maximum, and Random-Element Systems: The SSV notion is extended using Mellin transforms, survival functions, and random counting, enabling the construction of self-similar laws in multiplicative cascades, reliability models, insurance risk, and other contexts with variable system sizes.
- Discrete and Continuous Examples: Examples include strictly α-stable laws, log-normal, double Pareto, Weibull, and their discrete analogues, each uniquely characterized within the casual stability/SSV paradigm.
These statistical SSV concepts underpin advances in heavy-tailed modeling in finance, reliability engineering, and beyond (Klebanov et al., 2014).
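The simplest instance, classical strict stability with deterministic normalization, is easy to exhibit numerically (an illustrative sketch, not from the cited work): for the standard Cauchy law ($\alpha = 1$), the sum of $n$ i.i.d. copies normalized by $n^{1/\alpha} = n$, i.e. the sample mean, has the same law:

```python
import numpy as np

rng = np.random.default_rng(42)

# Means of n i.i.d. standard Cauchy variables: by strict 1-stability these
# are again standard Cauchy (median 0, quartiles at -1 and +1).
n, m = 20, 100_000
means = rng.standard_cauchy((m, n)).mean(axis=1)

q25, q50, q75 = np.quantile(means, [0.25, 0.5, 0.75])
```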
5. Data-Driven Extraction and Estimation of Self-Similar Variables
Modern applications often lack explicit governing equations or clear scaling laws. Recent algorithmic advances enable the extraction of self-similar variables directly from data:
- Optimization-based Variable Collapse: An algorithmic pipeline searches for transformations (dilations, translations, composition) that best collapse a temporal or spatial dataset onto a universal curve, quantified via norm-based collapse error and refined using mean-regularization (Bempedelis et al., 2024).
- Symbolic Regression of Scaling Laws: Discovered variable mappings are represented as interpretable power laws or composite functions using symbolic regression subject to physical dimension constraints.
- Applications: The methodology recovers known SSV expressions in classical problems (Blasius boundary layer, Burgers equation, turbulent wakes, cavity collapse) and generates new insight for underdetermined systems (e.g., grid turbulence). Robustness to experimental noise and multi-scale phenomena is demonstrated.
These developments provide critical tools for self-similarity analysis in data-rich, analytically intractable contexts (Bempedelis et al., 2024).
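A toy version of such a pipeline (a minimal sketch, not the cited implementation) makes the idea concrete: grid-search the two similarity exponents $(a, b)$ so that rescaled snapshots $t^{a}\,u(t^{b}\xi, t)$ of a diffusing field collapse onto one curve, scoring each candidate by the worst-case spread between rescaled profiles:

```python
import numpy as np

def heat_kernel(x, t):
    """Synthetic data: fundamental solution of u_t = u_xx."""
    return np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

times = np.array([1.0, 2.0, 4.0, 8.0])
xi = np.linspace(-2.0, 2.0, 101)

def collapse_error(a, b):
    """Norm-based collapse error: spread of the rescaled profiles over time."""
    profiles = np.array([t**a * heat_kernel(t**b * xi, t) for t in times])
    return np.max(np.ptp(profiles, axis=0))

candidates = np.linspace(0.1, 1.0, 10)        # search a, b over 0.1, 0.2, ..., 1.0
errs = {(a, b): collapse_error(a, b) for a in candidates for b in candidates}
a_best, b_best = min(errs, key=errs.get)      # heat scaling: a = b = 1/2
```

In the cited pipeline the discrete search is replaced by continuous optimization with mean-regularization, and the recovered variable mapping is then distilled into a closed form by symbolic regression.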
6. Applications and Methodological Innovations
SSV and their generalizations are prominent in:
- Renormalization and Perturbation Theory: The self-similar approximant method uses SSV-inspired RG fixed points and control variables to extrapolate divergent series in nonlinear problems, yielding predictive approximants that converge regularly and often reproduce known asymptotics or exact solutions from only a few series terms (Yukalov et al., 17 May 2025).
- Machine Learning and Neural Operators: Training neural operators in SSV coordinates (e.g., via spatial contraction and amplitude rescaling) introduces a mathematically motivated inductive bias, improving long-time extrapolation, stability, and shape fidelity for heat-based PDEs and related systems (Wang et al., 31 Jan 2026).
- Astrophysical and Cosmological Scaling: Discrete SSV mappings explain cross-scale analogies (e.g., atomic ↔ stellar variable stars) by power-law rescaling of fundamental physical parameters, yielding accurate predictions across orders of magnitude in mass, length, and time (Oldershaw, 2009).
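The neural-operator point can be illustrated without any learning component (a minimal sketch in which a plain finite-difference solver stands in for the operator): in contracted, amplitude-rescaled coordinates the late-time heat dynamics approach a fixed Gaussian profile, and this stationarity is the inductive bias that SSV-coordinate training exploits:

```python
import numpy as np

# Evolve u_t = u_xx from a box initial condition with an explicit scheme.
L, nx = 60.0, 1201
x = np.linspace(-L / 2, L / 2, nx)
dx = x[1] - x[0]
u = np.where(np.abs(x) < 0.5, 1.0, 0.0)   # unit-height box initial condition
mass = u.sum() * dx                        # discrete mass, conserved by the scheme

dt = 0.4 * dx**2                           # stable explicit step (D = 1)
t = 0.0
while t < 25.0:
    u[1:-1] += dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    t += dt

# SSV coordinates: spatial contraction xi = x/sqrt(4t) and amplitude rescaling.
xi = x / np.sqrt(4 * t)
v = np.sqrt(4 * np.pi * t) * u / mass
err = np.max(np.abs(v - np.exp(-xi**2)))   # distance to the Gaussian fixed point
```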
7. Limitations, Open Questions, and Ongoing Research Directions
While the SSV framework is highly general, key open challenges include:
- Multi-scale and coupled variable discovery: Current algorithms often search for single-scale collapses; true multi-scale (or fractal) self-similarity requires further generalizations (Bempedelis et al., 2024).
- Classification of normalization families: For random self-similarity, a full taxonomy of commutative normalization families is lacking, with connections to complex analytic theory and fractals remaining undeveloped (Klebanov et al., 2014).
- Tauberian and tail-classification theorems: The relation between SSV form, tail heaviness, and limit behavior is more intricate than for classical stable laws (Klebanov et al., 2014).
- Multi-dimensional and Banach-space extensions: Systematic theory for SSV in these settings—in both classical and random normalization regimes—requires further study, particularly in applications with high-dimensional dependencies (Klebanov et al., 2014, Lucas et al., 2023).
Notwithstanding these challenges, SSV remains a foundational unifying concept in modern analysis, stochastic modeling, and data-driven discovery.