
UMAP Projections of Intermediate Activations

Updated 20 October 2025
  • UMAP projections of intermediate activations are a method to embed high-dimensional neural network outputs into low dimensions, revealing latent structures and class separations.
  • They employ fuzzy simplicial sets, gradient-based optimization, and parametric extensions to preserve local connectivity and scale to large datasets.
  • These projection techniques are pivotal for model interpretation, detecting confounding features, and regularizing representations in deep learning research.

UMAP projections of intermediate activations refer to the application of the Uniform Manifold Approximation and Projection (UMAP) algorithm—and its parametric, scalable, and explainable variants—to the activations produced within neural networks (typically at hidden or penultimate layers). This approach provides geometric, topological, and statistical insights into the structure and semantics of learned representations, supporting model interpretability, regularization, visual analytics, and downstream processing. These projections are at the intersection of manifold learning, dimensionality reduction, and neural network interpretation.

1. Theoretical Foundation and Algorithmic Principles

UMAP, a neighbor graph–based manifold learning technique, operates on the premise that complex high-dimensional data (including neural activations) lie on a low-dimensional manifold. It first constructs a fuzzy simplicial set representing local connectivity in the original space and then seeks a low-dimensional embedding that preserves these relationships. The core elements include:

  1. Construction of High-Dimensional Probabilities: For activations $X = [x_1, \ldots, x_n] \in \mathbb{R}^{n \times d}$, UMAP computes for each $x_i$ a probability $p_{j|i} = \exp\big(-(\|x_i - x_j\|_2 - \rho_i)/\sigma_i\big)$ for neighbors $x_j \in \mathcal{N}_i$, where $\rho_i$ adapts to local density and $\sigma_i$ is chosen to calibrate the effective neighborhood size.
  2. Symmetrization: To reconcile asymmetries, probabilities are combined via the fuzzy union $p_{ij} = p_{j|i} + p_{i|j} - p_{j|i}\,p_{i|j}$.
  3. Low-Dimensional Similarity: In the embedding, UMAP defines $q_{ij} = \big(1 + a\,\|y_i - y_j\|_2^{2b}\big)^{-1}$ with $y_i \in \mathbb{R}^m$ and $a, b$ controlling repulsion.
  4. Loss Function: UMAP minimizes a fuzzy cross-entropy:

$$C_{\mathrm{UMAP}} = -\sum_{i \neq j} \Big[ p_{ij} \log q_{ij} + (1 - p_{ij}) \log(1 - q_{ij}) \Big]$$

This decomposes into attractive and repulsive terms, with efficient negative sampling for scalability.

  5. Gradient Optimization: Embeddings $y_i$ are found via stochastic gradient descent on this cost, enabling both batch and online/mini-batch learning (Ghojogh et al., 2021).
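
A compact NumPy sketch of steps 1–4, useful for reasoning about the objective; the fixed $\sigma$ and the default $(a, b)$ values are illustrative simplifications (real UMAP binary-searches a per-point $\sigma_i$ and fits $a, b$ from its min_dist parameter):

```python
import numpy as np
from scipy.spatial.distance import cdist

def umap_probabilities(X, n_neighbors=15, sigma=1.0):
    """Steps 1-2: high-dimensional fuzzy affinities.

    Real UMAP calibrates sigma_i per point to hit an effective
    neighborhood size; a fixed sigma keeps this sketch short.
    """
    D = cdist(X, X)                                  # pairwise distances
    np.fill_diagonal(D, np.inf)                      # exclude self
    idx = np.argsort(D, axis=1)[:, :n_neighbors]     # k nearest neighbors
    rho = D[np.arange(len(X)), idx[:, 0]]            # distance to closest neighbor
    P = np.zeros_like(D)
    for i, nbrs in enumerate(idx):
        P[i, nbrs] = np.exp(-(D[i, nbrs] - rho[i]) / sigma)
    return P + P.T - P * P.T                         # fuzzy-union symmetrization

def umap_loss(P, Y, a=1.577, b=0.895):
    """Steps 3-4: fuzzy cross-entropy between P and low-dim similarities."""
    Q = 1.0 / (1.0 + a * cdist(Y, Y) ** (2 * b))     # q_ij = (1 + a||y_i-y_j||^{2b})^-1
    eps = 1e-12
    off_diag = ~np.eye(len(Y), dtype=bool)
    ce = -(P * np.log(Q + eps) + (1 - P) * np.log(1 - Q + eps))
    return ce[off_diag].sum()
```

In practice the sum is never formed densely; negative sampling over non-edges approximates the repulsive term, which is what makes stochastic gradient descent on this cost scale.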

Parametric UMAP extends these principles by learning a neural network $f_\theta(x)$ that maps activations $x$ to embeddings $z$ such that $C_{\mathrm{UMAP}}$ is minimized with respect to $\theta$ (Sainburg et al., 2020).
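
The umap-learn package ships a parametric implementation; a minimal usage sketch, assuming umap-learn is installed with its TensorFlow/Keras extras and that `activations` and `new_activations` are precomputed arrays of hidden-layer outputs:

```python
from umap.parametric_umap import ParametricUMAP

# f_theta defaults to a small Keras MLP, trained to minimize C_UMAP
embedder = ParametricUMAP(n_components=2)
embedding = embedder.fit_transform(activations)   # fit on training activations

# The learned network embeds activations unseen during fitting
new_embedding = embedder.transform(new_activations)
```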

2. UMAP Projections of Intermediate Activations: Purpose and Methods

Intermediate activations—feature vectors extracted from internal layers of a trained or untrained neural network—capture semantic, hierarchical, and task-specific information. UMAP projections of such activations serve several technical purposes:

  • Visualization and Interpretation: Mapping complex activations into two or three dimensions, facilitating the identification of clusters, class separations, and organization of latent variables (Marinescu et al., 13 Oct 2025, Gorriz et al., 3 Sep 2025, Yan et al., 2023).
  • Structural Analysis: Probing how well the model has separated different classes, disentangled features, or formed robust representations (e.g., observing collapse or recovery at different network depths (Marinescu et al., 13 Oct 2025)).
  • Detection of Confounding or Shortcut Features: Non-maximal and mid-logit activations can reveal subtle or spurious correlations not visible from maximal neuron activation analysis (O'Mahony et al., 15 Nov 2024).

Projection strategies encompass the following (a minimal extraction-and-embedding sketch follows the list):

  • Direct Embedding: Apply UMAP to intermediate layer activations and interpret the resulting clusters and manifold arrangement.
  • Multi-layer Analysis: Apply UMAP at various depths of a network (early, middle, late) to monitor the evolution of learned representations.
  • Custom Metrics: Use cosine similarity or domain-relevant distance measures in constructing the UMAP input graph.
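
A sketch of the direct-embedding strategy with a custom metric, assuming a trained PyTorch `model`, one of its intermediate submodules `layer` (assumed to output a single tensor), and a DataLoader `loader` of (inputs, labels) batches; all three names are placeholders:

```python
import torch
import umap

def project_layer(model, layer, loader, device="cpu"):
    """Hook `layer`, collect flattened activations over `loader`, embed with UMAP."""
    captured = []
    hook = layer.register_forward_hook(
        lambda mod, inp, out: captured.append(out.flatten(1).detach().cpu())
    )
    model.eval()
    with torch.no_grad():
        for inputs, _ in loader:
            model(inputs.to(device))
    hook.remove()

    activations = torch.cat(captured).numpy()
    # Cosine distance often suits high-dimensional activations better than Euclidean
    reducer = umap.UMAP(n_components=2, n_neighbors=15, metric="cosine")
    return reducer.fit_transform(activations)
```

Running this at several depths (early, middle, late layers) implements the multi-layer analysis described above.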

3. Advanced Variants and Practical Workflow for Projections

UMAP's extensions and recent variants enhance the projection of intermediate activations in several ways:

  • Parametric UMAP: A neural network learns the projection mapping, facilitating online and out-of-sample embedding for new activations. The parametric network $f_\theta$ is trained to minimize $C_{\mathrm{UMAP}}(\theta)$, supporting joint optimization with other network objectives, e.g., as an auxiliary branch for semi-supervised learning or autoencoder regularization (Sainburg et al., 2020).
  • Progressive and Online UMAP: Progressive UMAP allows dynamic incorporation of newly generated activations, updating $k$-NN graphs and embedding coordinates without full retraining (Ghojogh et al., 2021).
  • Approximate UMAP (aUMAP): Fitted with a $k$-NN model, aUMAP projects new activations by weighting the UMAP embeddings of their nearest neighbors, bypassing iterative optimization (see the sketch after this list). The projected coordinate $u$ for an activation $x$ is:

$$u = \sum_{i=1}^{k} \frac{1/d_i}{\sum_{j} 1/d_j}\, u_i$$

where $u_i$ are the UMAP embeddings of $x$'s $k$ nearest neighbors and $d_i$ the corresponding distances (Wassenaar et al., 5 Apr 2024).

  • Clustering-Based and Two-Phase Methods: CBMAP and UMATO first determine cluster (or "hub") points in the layer's activation space to preserve global structure, then embed the remaining points to maintain local relationships. Such staged strategies produce more globally faithful layouts, crucial for understanding relationships among separate classes and stages in the model (Dogan, 27 Apr 2024, Jeon et al., 22 Aug 2025, Jeon et al., 2022).
  • Variability Analysis: GhostUMAP2 quantifies projection stability against UMAP's stochastic effects by introducing "ghost" initializations and measuring (r,d)-stability, the maximum displacement of ghost projections under random perturbations. Such stability estimates are an important consideration in scientific interpretation and downstream decision-making (Jung et al., 23 Jul 2025).
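
A minimal sketch of the aUMAP-style projection described above, assuming `train_activations` with a corresponding fitted UMAP layout `train_embedding`, and using scikit-learn's NearestNeighbors; all names are placeholders:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def approximate_umap(train_activations, train_embedding, new_activations, k=10):
    """Project new activations as inverse-distance-weighted averages of the
    UMAP coordinates of their k nearest training points (aUMAP-style)."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_activations)
    dists, idx = nn.kneighbors(new_activations)
    weights = 1.0 / np.maximum(dists, 1e-12)           # 1/d_i per neighbor
    weights /= weights.sum(axis=1, keepdims=True)      # divide by sum_j 1/d_j
    return np.einsum("nk,nkd->nd", weights, train_embedding[idx])
```

Because no iterative optimization runs at projection time, this is fast enough for streaming activations, at the cost of being only as good as the fitted reference layout.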

4. Impact in Research and Key Applications

UMAP projections of intermediate activations have advanced understanding and utility in multiple research domains:

  • Interpretability of Deep Models: Projections reveal where semantic or domain knowledge localizes within a network, highlight non-linear and discontinuous internal representations (e.g., age, disease progression, drug clustering), and track collapse/recovery phenomena in large and intermediate LLM layers (Marinescu et al., 13 Oct 2025).
  • Clinical and Biomedical Data Analysis: In neuroimaging, UMAP applied to latent autoencoder activations or high-dimensional MR features exposes groupings linked to pathology (e.g., differentiating progression in Alzheimer’s Disease), and supports further correlation with anatomical regions using statistical methods (Gorriz et al., 3 Sep 2025, Yan et al., 2023).
  • Semi-Supervised and Manifold Regularization: UMAP loss regularizes autoencoder or classifier representations, leveraging unlabeled activations to improve classification performance, encourage more disentangled or semantically faithful latent geometry, and guide data augmentation and Mixup strategies (Sainburg et al., 2020, El-Laham et al., 2023); a minimal joint-loss sketch follows this list.
  • Analysis of Confounding and Fairness: Projection of mid-logit activations discloses clusters emerging from spurious correlations or confounding features. These projections guide data curation for retraining and de-biasing (O'Mahony et al., 15 Nov 2024).
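
One way such regularization can be wired up, as a hedged PyTorch sketch: supervised cross-entropy plus a UMAP-style fuzzy cross-entropy on the minibatch embedding. The minibatch affinity matrix `P` and the values of `lam`, `a`, `b` are assumed precomputed and illustrative, not taken from the cited papers:

```python
import torch
import torch.nn.functional as F

def joint_loss(logits, labels, embeddings, P, lam=0.1, a=1.577, b=0.895):
    """Classification loss plus a UMAP-style fuzzy cross-entropy regularizer.

    `P` is an assumed precomputed (batch, batch) affinity matrix restricted
    to the minibatch; `lam`, `a`, `b` are illustrative hyperparameters.
    """
    ce = F.cross_entropy(logits, labels)
    d2 = (torch.cdist(embeddings, embeddings) ** 2).clamp_min(1e-12)
    q = 1.0 / (1.0 + a * d2 ** b)                    # low-dim similarity q_ij
    off_diag = ~torch.eye(len(embeddings), dtype=torch.bool, device=q.device)
    fuzzy = -(P * torch.log(q + 1e-12)
              + (1 - P) * torch.log((1 - q).clamp_min(1e-12)))
    return ce + lam * fuzzy[off_diag].mean()
```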

5. Comparison with t-SNE and Related Methods

Several studies elucidate UMAP’s distinctiveness and its relationship to t-SNE, LargeVis, and classical approaches:

| Method | Normalization | Embedding Forces | Global Structure |
| --- | --- | --- | --- |
| UMAP | Unnormalized | Fuzzy cross-entropy, local | Weak (improved in UMATO, GLoMAP) |
| t-SNE | Normalized | KL divergence, pairwise | Weaker, more local |
| LargeVis | Unnormalized | Similar to UMAP | Similar to UMAP |
| CBMAP/UMATO | N/A (cluster-based) | Cluster-driven, staged | Strong, explicit |
| GLoMAP/iGLoMAP | N/A | Shortest-path global, temp. | Global-to-local prog. |
  • Normalization: UMAP eschews global normalization, making it more scalable and mini-batch-friendly compared to t-SNE (Draganov et al., 2023).
  • Attraction/Repulsion Ratio: Altering normalization changes the balance of attractive and repulsive forces; UMAP tends toward larger inter-cluster gaps and "fuzzier" global layouts, while t-SNE yields more compact local clusters (Draganov et al., 2023). The sketch after this list runs both methods on the same activation matrix.
  • Structure Preservation: Methods such as UMATO, CBMAP, and GLoMAP explicitly prioritize global structure, preventing the exaggerated local compaction characteristic of vanilla UMAP (Jeon et al., 2022, Dogan, 27 Apr 2024, Jeon et al., 22 Aug 2025, Kim et al., 12 Jun 2024).
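
A side-by-side comparison is straightforward to set up; a minimal sketch assuming `activations` is a precomputed (n_samples, d) array:

```python
import umap
from sklearn.manifold import TSNE

# Same activation matrix, two projections: UMAP tends toward larger
# inter-cluster gaps; t-SNE produces more compact local clusters.
umap_emb = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1).fit_transform(activations)
tsne_emb = TSNE(n_components=2, perplexity=30).fit_transform(activations)
```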

6. Metrics, Visualization Techniques, and Limitations

  • Silhouette Score: Used on UMAP projections of activations to quantify cluster separability, with higher scores reflecting better separation (e.g., for age groups, drugs, or diseases in LLMs) (Marinescu et al., 13 Oct 2025).
  • Local Anisotropy: Assesses how locally one-dimensional a manifold is, for example in age encoding by LLMs: $A_i = 1 - (\lambda_2/\lambda_1)$, where $\lambda_1, \lambda_2$ are the leading eigenvalues of the local covariance (Marinescu et al., 13 Oct 2025). Both this metric and the silhouette score are sketched after this list.
  • (r,d)-Stability: Quantifies projection robustness to initial position and negative sampling stochasticity; interactive visualization tools (GhostExplorer) enable nuanced interpretation at point and neighborhood scale (Jung et al., 23 Jul 2025).
  • Computational Considerations: Standard and parametric UMAP can be costly for large-scale or real-time data streams. aUMAP, parametric, and progressive methods address inference speed, scalability, and out-of-sample projection (Wassenaar et al., 5 Apr 2024, Appleby et al., 2021).
  • Explainability and Limitations: UMAP embeds aspects of the high-dimensional structure but may not reliably preserve all properties (e.g., pairwise distances or higher-order topology). Modified LLE and attraction/repulsion frameworks provide theoretical bridges to classical linear approaches, yet formal guarantees (beyond 0-dimensional topology in TopoMap++) are limited (Draganov et al., 2023, Guardieiro et al., 11 Sep 2024). Stochasticity in the optimization process warrants careful interpretation supported by stability metrics (Jung et al., 23 Jul 2025).
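
A hedged sketch of both metrics, assuming `embedding` is a projected (n, 2) array and `labels` holds an integer class label per point; the same anisotropy computation applies equally to raw high-dimensional activations:

```python
import numpy as np
from sklearn.metrics import silhouette_score
from sklearn.neighbors import NearestNeighbors

def local_anisotropy(points, k=10):
    """A_i = 1 - lambda_2/lambda_1 from each point's k-neighborhood covariance;
    values near 1 indicate a locally one-dimensional manifold."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    scores = []
    for nbrs in idx:
        local = points[nbrs] - points[nbrs].mean(axis=0)
        eigvals = np.linalg.eigvalsh(np.cov(local.T))[::-1]   # descending order
        scores.append(1.0 - eigvals[1] / max(eigvals[0], 1e-12))
    return np.array(scores)

aniso = local_anisotropy(embedding)          # per-point anisotropy
sep = silhouette_score(embedding, labels)    # cluster separability in [-1, 1]
```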

7. Emerging Directions and Open Challenges

Active research areas and technical frontiers include:

  • Explainable DR: Relating UMAP outputs to classical DR objectives (e.g., PCA, LLE), and exploring direct solvers for UMAP-like embeddings to remediate difficulties in interpretation and back-mapping (Draganov et al., 2023).
  • Integration with Domain Knowledge: Lens functions enable overlaying domain-specific features (e.g., gene expression, clinical metadata) as global or local masks on the UMAP connectivity graph, flexibly highlighting patterns guided by expert input (Bot et al., 15 May 2024).
  • Staged Global-Local Optimization: Techniques (UMATO, GLoMAP) which progress from stabilizing global structure to refining local details yield more interpretable and robust embeddings for hidden states or activation patterns of deep models (Jeon et al., 22 Aug 2025, Kim et al., 12 Jun 2024).
  • Topological Guarantees: TopoMap++ and similar approaches guarantee persistence of connected components in the projection, facilitating more robust and faithful analysis of activation clusters and transitions (Guardieiro et al., 11 Sep 2024).
  • Streaming and Scalability: Approximate and parametric variants are increasingly applied to high-throughput or streaming scenarios, e.g., for real-time brain-computer interface feedback (Wassenaar et al., 5 Apr 2024).

UMAP projections of intermediate activations, together with their advanced, parametric, and variance-quantified extensions, provide a technical foundation for understanding neural representations in both theoretical and applied high-dimensional settings. Their capacity to uncover latent structure, inform training or fine-tuning strategies, regularize representations, and link learned features to interpretable semantic or domain constructs constitutes a critical toolkit in state-of-the-art deep learning research.
