
Intrinsic Dimension Estimation

Updated 3 December 2025
  • Intrinsic dimension estimation is the process of determining the smallest number of latent degrees of freedom needed to represent data, providing insights into its geometric and statistical structure.
  • A variety of methodologies—including nearest-neighbor, fractal, PCA-based, and deep learning approaches—offer trade-offs in accuracy, scalability, and noise robustness.
  • This estimation underpins applications in manifold learning, unsupervised feature selection, data compression, and clustering, while supporting both continuous and discrete datasets.

Intrinsic dimension estimation concerns determining the minimal number of latent degrees of freedom that explain the variability in a dataset, even when it is embedded in a much higher-dimensional ambient space. This concept is central to manifold learning, dimensionality reduction, signal processing, and understanding the geometric structure of real-world datasets, both continuous and discrete, with implications spanning unsupervised learning, feature selection, data compression, clustering, and generative modeling.

1. Foundations of Intrinsic Dimension

The intrinsic dimension (ID) of a dataset is defined as the smallest integer $d$ such that the data can be described—locally or globally—by a $d$-dimensional coordinate system, typically assuming (locally) that the data are sampled from a $d$-dimensional manifold within a higher-dimensional ambient space $\mathbb{R}^D$ ($d \ll D$). In formal terms, $d$ is the minimal number of variables required to represent the probability mass or geometry of the data without significant loss of information (Denti, 2021). The ID guides the choice of the target dimension in manifold learning algorithms (PCA, Isomap, UMAP, t-SNE), impacts generalization and capacity control in machine learning, and provides interpretable complexity measures in scientific data analysis.

Different notions of dimension are relevant depending on context:

  • Global ID: Single value describing the dimension of the overall (possibly multifractal) data support.
  • Local ID: Dimension estimated within local neighborhoods, allowing for spatial heterogeneity (e.g., for multimodal or composite datasets).
  • Fractal/Generalized dimensions: Real-valued or scale-dependent dimensions used to characterize fractal or multifractal sets (e.g., box-counting, correlation, Morisita, information, or Rényi dimensions); standard definitions are recalled below.
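
For orientation, the standard limit definitions behind these fractal notions can be stated compactly (textbook forms, not tied to any single cited estimator):

$$d_{\mathrm{box}} = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)}, \qquad d_{\mathrm{corr}} = \lim_{r \to 0} \frac{\log C(r)}{\log r}, \qquad D_q = \lim_{\varepsilon \to 0} \frac{1}{q-1}\,\frac{\log \sum_i p_i(\varepsilon)^q}{\log \varepsilon},$$

where $N(\varepsilon)$ is the number of boxes of side $\varepsilon$ needed to cover the data, $C(r)$ is the fraction of point pairs within distance $r$, and $p_i(\varepsilon)$ is the probability mass of the $i$-th box; the Rényi family $D_q$ recovers the box-counting, information, and correlation dimensions at $q=0$, $1$, and $2$, respectively.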

2. Methodologies for Intrinsic Dimension Estimation

The literature offers a wide array of estimators, distinguished by the geometric/statistical quantity whose scaling yields an estimate of dd, as well as their computational feasibility and robustness. Principal classes include:

A. Nearest-Neighbor and Volume-Based Estimators

These rely on the scaling of point counts or distances in balls of varying radii:

  • Maximum Likelihood Estimators (MLEs): Established by Levina & Bickel, these estimators model the counts of $k$-nearest neighbors using Poisson assumptions, yielding

$$\hat d_{ML}(x) = \left(\frac{1}{k-1}\sum_{j=1}^{k-1} \log\frac{T_k(x)}{T_j(x)}\right)^{-1}$$

for each $x$, where $T_j(x)$ is the distance from $x$ to its $j$-th nearest neighbor, with global or local averaging (Gupta et al., 2012, Gomtsyan et al., 2019). Corrections for curvature and non-uniform sampling yield the "GeoMLE" estimator, which applies polynomial regression on $k$ to capture deviations from the flat, uniform model (Gomtsyan et al., 2019).

  • 2NN and TWO-NN: Based solely on the ratio of the first two nearest-neighbor distances, resulting in a closed-form estimator robust to variations in sampling density (Denti, 2021); a code sketch of both the MLE and TWO-NN estimators follows this list.
  • Adaptive Binomial (ABIDE): Selects at each point the neighborhood size at which the local density appears statistically homogeneous, thereby avoiding both small-scale noise inflation (when $k$ is too small) and large-scale curvature bias (Noia et al., 24 May 2024).
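
A minimal NumPy/SciPy sketch of the two estimators above, assuming Euclidean data in an (N, D) array; the helper names, the choice of k, and the plain averaging of local inverse estimates are illustrative assumptions rather than the exact procedures of the cited papers (TWO-NN, in particular, is shown in its maximum-likelihood form rather than the linear-fit variant).

```python
import numpy as np
from scipy.spatial import cKDTree

def mle_id(X, k=10):
    """Levina-Bickel style MLE: average the local inverse estimates over all points."""
    tree = cKDTree(X)
    dists, _ = tree.query(X, k=k + 1)       # column 0 is the point itself
    T = dists[:, 1:]                         # T_1(x), ..., T_k(x)
    local = (np.log(T[:, -1:] / T[:, :-1]).sum(axis=1) / (k - 1)) ** -1
    return float(local.mean())

def twonn_id(X):
    """TWO-NN: closed-form estimate from the ratio of the first two NN distances."""
    tree = cKDTree(X)
    dists, _ = tree.query(X, k=3)
    mu = dists[:, 2] / dists[:, 1]           # r_2 / r_1 for each point
    return float(len(mu) / np.log(mu).sum()) # MLE of a Pareto(d) shape parameter

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Z = rng.normal(size=(2000, 5))           # 5 latent dimensions
    X = Z @ rng.normal(size=(5, 50))         # linearly embedded in 50 ambient dimensions
    print(mle_id(X), twonn_id(X))            # both should be close to 5
```

Both functions should return values near the true latent dimension when the manifold is well sampled; for noisy or strongly curved data, the corrections discussed above (GeoMLE, ABIDE) become relevant.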

B. Distribution-Based and Graph-Based Approaches

  • Random Connection Models: Use adjacency matrices at two scales (neighborhood graphs). The ratio of connection probabilities at scales $\varepsilon$ and $2\varepsilon$ provides the estimator $d = \frac{\log \hat{p}_{2\varepsilon,1} - \log \hat{p}_{\varepsilon,1}}{\log 2}$ (Serra et al., 2017); a sketch of this two-scale idea follows this list.
  • Curvature Profile Matching: Uses discrete analogues of sectional curvature (Gromov product and scaling of triangle "intersection radii") and measures the Earth-Mover’s (Wasserstein-1) distance between curvature profiles across embedding dimensions; the minimizer yields the estimated ID (Beylier et al., 16 Sep 2025).
  • Wasserstein Contraction: Relates how the Wasserstein distance between independent empirical distributions contracts as $n^{-1/d}$; by quantifying this contraction across sample sizes via $W_1$, one solves for $d$ (Block et al., 2021).
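
A sketch of the two-scale connection-probability estimator from the first bullet, assuming Euclidean data and a user-chosen scale eps; constant factors cancel in the log-ratio, and the statistical refinements of the cited work are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def connection_prob(X, radius):
    """Empirical probability that a pair of distinct points lies within `radius`."""
    tree = cKDTree(X)
    n = len(X)
    pairs = tree.count_neighbors(tree, radius) - n   # subtract the n self-pairs
    return pairs / (n * (n - 1))

def two_scale_id(X, eps):
    """d = (log p_{2*eps} - log p_{eps}) / log 2, from the scaling p_r ~ r^d."""
    p1, p2 = connection_prob(X, eps), connection_prob(X, 2 * eps)
    return (np.log(p2) - np.log(p1)) / np.log(2)
```

The scale eps must be small enough for the ball-volume scaling to hold yet large enough that both counts are nonzero; in practice it is common to inspect several scales.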

C. Fractal, Box-Counting, and Multipoint Index Methods

  • Correlation Integral & Correlation Dimension: Counts point pairs within distance $r$ and fits the scaling $C(r) \sim r^d$ for small $r$ (the Grassberger–Procaccia approach); a sketch follows this list.
  • Multipoint Morisita Index ($M_m$): Uses grid partitioning and factorial moments (rather than powers) of cell counts to define the scaling exponent at order $m$; $M_m$ is robust for larger $m$ and sample sizes, particularly at $m=2$ (Golay et al., 2014).
  • Full Correlation Integral (FCI): Fits the entire neighbor-count curve using the known distribution for points on a $d$-sphere, overcoming the curse of dimensionality for severely undersampled data (Erba et al., 2019).
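
A compact sketch of the Grassberger–Procaccia fit from the first bullet, assuming a user-supplied range of radii inside the scaling region; the radii grid and the plain least-squares fit are illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def correlation_dimension(X, radii):
    """Slope of log C(r) against log r, where C(r) is the pair-correlation integral."""
    radii = np.asarray(radii, dtype=float)
    tree = cKDTree(X)
    n = len(X)
    C = np.array([(tree.count_neighbors(tree, r) - n) / (n * (n - 1)) for r in radii])
    mask = C > 0                              # drop radii with no pairs before taking logs
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope
```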

D. PCA and Matrix-Projection Methods

  • Local and Global PCA: Determines $d$ as the number of significant singular values, with local variants (e.g., C-PCA) constructing minimal covers with neighborhood noise filtering (Fan et al., 2010); a global variant is sketched after this list.
  • Projected-Variance Chebyshev Methods: Estimate the cumulative variance explained without full eigendecomposition, using stochastic trace estimators and kernel polynomial approximation to count eigenvalues above thresholds (Özçoban et al., 12 Mar 2025).
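
A global PCA sketch corresponding to the first bullet; the 95% explained-variance cutoff is an arbitrary illustrative threshold, and local variants would apply the same count within neighborhoods.

```python
import numpy as np

def pca_id(X, variance_threshold=0.95):
    """Smallest number of principal components explaining the given variance fraction."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)          # singular values of the centered data
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative explained-variance ratio
    return int(np.searchsorted(explained, variance_threshold) + 1)
```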

E. Angle- and Connectivity-Based Estimators

  • Angle-Based ID (ABID): Uses the distribution of angles between neighbors in high-dimensional space, exploiting the fact that the mean squared cosine of random directions on $S^{d-1}$ is $1/d$ (Thordsen et al., 2020); a simplified sketch follows this list.
  • Connectivity Factor (eDCF): Based on the number of occupied neighboring cells in a discretized grid, calibrated to theoretical values for $d$-manifolds; provides robust estimates in the presence of high noise or fractal boundaries (Gupta et al., 18 Oct 2025).
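
A simplified angle-based sketch built directly on the $1/d$ fact quoted above; ABID's actual weighting and aggregation (Thordsen et al., 2020) are not reproduced, and k is an illustrative choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def angle_based_id(X, k=20):
    """Estimate d as 1 / E[cos^2(theta)] over angles between neighbor directions."""
    tree = cKDTree(X)
    _, idx = tree.query(X, k=k + 1)
    sq_cos = []
    for i, neighbors in enumerate(idx):
        V = X[neighbors[1:]] - X[i]                    # vectors to the k nearest neighbors
        V /= np.linalg.norm(V, axis=1, keepdims=True)
        cosines = V @ V.T                              # pairwise cosines between directions
        iu = np.triu_indices(k, 1)                     # distinct neighbor pairs only
        sq_cos.append(np.mean(cosines[iu] ** 2))
    return 1.0 / np.mean(sq_cos)
```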

F. Deep Learning and Hybrid Approaches

  • Intrinsic Dimension Estimating Autoencoders (IDEA): Neural autoencoders equipped with structured bottleneck layers ("CancelOut") and continuous loss monitoring that prune latent variables until reconstruction loss exceeds a threshold (Orioua et al., 12 Sep 2025); a generic bottleneck-sweep sketch follows this list.
  • Nearest Constrained Subspace Classifier (NCSC): Casts ID estimation as model selection in a subspace classifier; the ID is chosen as the dimension yielding maximal classification accuracy for local affine fits (Liao et al., 2020).
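
The IDEA architecture itself (CancelOut gating, pruning schedule) is not reproduced here; the following PyTorch sketch shows only the generic underlying idea of sweeping bottleneck widths and keeping the smallest one whose reconstruction error has not degraded. The layer sizes, epoch count, and 5% tolerance are arbitrary assumptions, and X is assumed to be a float torch.Tensor of shape (N, D).

```python
import torch
import torch.nn as nn

def reconstruction_error(X, bottleneck, epochs=200, lr=1e-3):
    """Train a small autoencoder with the given bottleneck width and return its final MSE."""
    D = X.shape[1]
    model = nn.Sequential(
        nn.Linear(D, 64), nn.ReLU(),
        nn.Linear(64, bottleneck),               # latent code of width `bottleneck`
        nn.Linear(bottleneck, 64), nn.ReLU(),
        nn.Linear(64, D),
    )
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss = None
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), X)
        loss.backward()
        opt.step()
    return loss.item()

def bottleneck_sweep_id(X, max_dim=10, tolerance=1.05):
    """Smallest bottleneck whose error stays within `tolerance` of the widest bottleneck's error."""
    errors = {d: reconstruction_error(X, d) for d in range(1, max_dim + 1)}
    baseline = errors[max_dim]
    for d in range(1, max_dim + 1):
        if errors[d] <= tolerance * baseline:
            return d
    return max_dim
```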

3. Discrete and Non-Euclidean Metric Spaces

Most classical methods assume continuous or Euclidean settings, but many domains—genomics, graphs, categorical surveys—require extensions:

  • I3D Discrete Estimator: For data in discrete metric spaces (e.g., Hamming distance on categorical features), I3D models neighbor counts using binomial statistics and employs Ehrhart polynomials to enumerate the lattice points in $L^1$ metric shells, enabling maximum-likelihood estimation of $d$ with explicit error quantification (Macocco et al., 2022); the lattice-count ingredient is sketched after this list.
  • Extension to Graph Topologies: Both I3D and curvature-profile techniques leverage discrete metrics and neighborhood graphs, enabling accurate ID assessment in inherently non-Euclidean or networked data.
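
The lattice-enumeration ingredient mentioned for I3D has a simple closed form in the $L^1$ metric: the number of points of $\mathbb{Z}^d$ in a ball of integer radius $t$ is $\sum_k 2^k \binom{d}{k}\binom{t}{k}$ (the Ehrhart count of the cross-polytope). A small sketch follows, with shell counts obtained by differencing consecutive balls; this illustrates only the volume term, not the full binomial likelihood of the cited estimator.

```python
from math import comb

def l1_ball_lattice_points(d, t):
    """Points x in Z^d with ||x||_1 <= t: choose k nonzero coordinates, their signs,
    and positive values summing to at most t."""
    return sum(2 ** k * comb(d, k) * comb(t, k) for k in range(min(d, t) + 1))

def l1_shell_lattice_points(d, t):
    """Points with ||x||_1 exactly t (ball minus the next-smaller ball)."""
    return l1_ball_lattice_points(d, t) - (l1_ball_lattice_points(d, t - 1) if t > 0 else 0)
```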

4. Computational and Statistical Properties

Scalability varies widely:

| Method | Complexity | Key Features |
| --- | --- | --- |
| PCA, Participation Ratio | $O(\min(N,D)\,D^2)$ | Fast, linear only, noise-sensitive |
| Correlation integral, MADA | $O(N^2)$ | Fractal, nonlinear, slow for large $N$ |
| MLE/2NN | $O(N\log N)$ | Accurate, robust, nearest-neighbor-based |
| Chebyshev projection (Özçoban et al., 12 Mar 2025) | $O(ND)$ | No eigendecomposition, scalable |
| eDCF, Morisita (grid) | $O(NEL + KM)$ | Parallelizable, handles noise |
| IDEA (autoencoder) | $O(Nl)$ per epoch | High accuracy, model-heavy |

Fast matrix-vector trace/Chebyshev methods (Özçoban et al., 12 Mar 2025), approximate nearest-neighbor algorithms, and grid-bucketing render large-scale computation feasible.

Statistical guarantees depend on the estimator:

  • Consistency: Many estimators (e.g., MLE, Morisita, ABID, Wasserstein, I3D, ABIDE) are proven consistent under assumptions of local uniformity and sufficient sample sizes.
  • Variance and Bias: Signal-to-noise trade-offs and selection of optimal neighborhood size or scale are critical for minimum-variance unbiased estimation. Regularized MLE (Gupta et al., 2012) and ABIDE (Noia et al., 24 May 2024) offer variance-dampening innovations.
  • Sample Complexity: Wasserstein and graph-based estimators yield rates that depend only on $d$, not on the ambient dimension $D$ (Block et al., 2021). Variance scalings such as $O(1/N)$ or $O(1/k^2)$ are derived for, e.g., TLE (Amsaleg et al., 2022).

5. Scale, Noise, and Heterogeneity

A. Scale Selection and Bias

Finite-sample and measurement-error effects bias ID estimates at small scales (where noise inflates the apparent dimension) and at large scales (where curvature and topology distort it). ABIDE adaptively selects the "sweet spot" scale per point by enforcing local density constancy via statistical testing, iteratively solving for the self-consistent scale and ID (Noia et al., 24 May 2024).

B. Handling Heterogeneity

Mixture models like HIDALGO, implemented in intRinsic, model heterogeneous ID by clustering data into subsets, each with its own dimension parameter, successfully identifying regions of different geometric complexity (Denti, 2021). GeoMLE also accommodates density and curvature variation via polynomial bias regression across $k$ (Gomtsyan et al., 2019).

C. Robustness to Noise and Fractals

ABID excels at non-manifolds, correctly estimating fractal dimensions in the presence of grid structure and noise (Thordsen et al., 2020). eDCF provides strong exact-match recovery even at high noise for large $N$ (Gupta et al., 18 Oct 2025). Morisita’s $M_2$ estimator handles edge effects and high dimensionality, outperforming classical box-counting and correlation dimensions (Golay et al., 2014). The FCI method (multi-scale neighbor counts) is specifically designed for locally undersampled and strongly curved data (Erba et al., 2019).

6. Benchmarks, Empirical Comparisons, and Software

A. Benchmark Datasets

The scikit-dimension Python package aggregates 19 estimators and provides standardized comparisons across over 500 real and synthetic datasets (Bac et al., 2021). QuIIEst introduces a rigorous family of quantum-optically embedded manifolds (homogeneous spaces), testing IDEs against true topological dimensions and significantly increasing difficulty over classical toy benchmarks (Das et al., 1 Oct 2025). MLE, TWO-NN, and ABID emerge as robust baseline performers across manifold types and noise regimes, but specialized cases (non-manifolds; highly curved, low-$N$ datasets) require careful method selection.

B. Empirical Results

For most synthetic and real-world tasks, nonlinear or angle-based (ABID, DANCo), regularized MLE, grid-based, or autoencoder-based estimators outperform linear/PCA estimators in recovering correct IDs or minimizing error (Bac et al., 2021, Orioua et al., 12 Sep 2025, Gupta et al., 18 Oct 2025). IDEA achieves exact ID recovery on nearly all tested manifolds, matching reconstruction error minima at the pruned latent dimensionality (Orioua et al., 12 Sep 2025).

C. Open-Source Ecosystem

  • scikit-dimension (Bac et al., 2021): Uniform API for PCA, fractal, MLE, expansion, and angle-based estimators in Python; a usage sketch follows this list.
  • intRinsic (R) (Denti, 2021): Implements TWO-NN, GRIDE, and HIDALGO (homogeneous and heterogeneous ID).
  • DADApy: Implements I3D for discrete datasets (Macocco et al., 2022).
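
A usage sketch for scikit-dimension, assuming its scikit-learn-style interface (estimators in skdim.id, fit(), and a dimension_ attribute); the estimator and module names should be checked against the package documentation, and the synthetic data is an illustrative assumption.

```python
import numpy as np
import skdim

# 7-dimensional Gaussian data embedded linearly in 30 ambient dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 7)) @ rng.normal(size=(7, 30))

# Estimators follow a scikit-learn-style fit/attribute convention
for estimator in (skdim.id.TwoNN(), skdim.id.MLE(), skdim.id.lPCA()):
    estimator.fit(X)
    print(type(estimator).__name__, estimator.dimension_)
```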

7. Challenges, Limitations, and Frontiers

Open problems and ongoing research focus on principled scale selection under noise, handling heterogeneous and multi-manifold data, extending estimators to discrete and non-Euclidean domains, and scaling to very large, high-dimensional datasets.

In summary, intrinsic dimension estimation encompasses a spectrum of theoretically grounded and empirically validated methodologies, addressing the challenges of geometric nonlinearity, heterogeneity, noise, discreteness, and high dimensionality. Advances continue on algorithmic, statistical, and application fronts, with software frameworks supporting reproducible benchmarking and integration into modern learning workflows.
