
Joint Embedding Predictive Architectures (JEPA)

Updated 8 January 2026
  • JEPA is a family of self-supervised representation learning methods that predict the latent features of target regions from context regions, without decoders or reconstruction objectives.
  • JEPA approaches employ asymmetric transformer-based encoders, momentum-based target estimation, and tailored masking strategies to achieve improved sample efficiency and robustness.
  • JEPA has been successfully adapted to diverse modalities like vision, audio, graphs, and trajectories, demonstrating competitive and transferable performance in real-world tasks.

Joint Embedding Predictive Architectures (JEPA) are a family of self-supervised representation learning methods in which neural networks are trained to predict the latent representation of “target” regions of an input from the latent representation of “context” regions, with prediction occurring in a high-level feature space rather than in the raw input domain. JEPA originated in the vision domain but has been adapted and rigorously studied for audio, graphs (including polymer molecular graphs), trajectories, time series, and multi-modal fusion. The approach is distinguished from contrastive and generative paradigms by having no explicit decoders, no reconstruction objectives, and no requirement for negative sampling. Instead, JEPA strategies focus on predictive coding in the latent space, leveraging architectural asymmetry, momentum encoders, and tailored masking strategies to learn semantically meaningful features, often yielding significant gains in sample efficiency, robustness, and downstream transfer (Riou et al., 2024, Tuncay et al., 25 Jun 2025, Fei et al., 2023, Hartman et al., 22 Apr 2025, Chen et al., 2024, Littwin et al., 2024, Skenderi et al., 2023, Vo et al., 9 Mar 2025).

1. Formalism and Computational Workflow

In the canonical form for dense modalities (vision, audio), the JEPA process proceeds as follows. Given an input $X$ (e.g., an image or mel-spectrogram), partition it into $N$ non-overlapping patches $x_1, \dots, x_N$. Sample disjoint index sets $C$ (context) and $T$ (target), with $C \cap T = \emptyset$. The context encoder $f_\theta$ (ViT or GNN) computes patch-level latent embeddings $Z_c = f_\theta(X_c)$, while the target encoder $f_{\bar\theta}$ (a momentum copy maintained by EMA) generates $Z_t = f_{\bar\theta}(X_t)$ for the target patches (Riou et al., 2024, Tuncay et al., 25 Jun 2025, Fei et al., 2023).

A lightweight predictor network $g_\phi$ receives all of $Z_c$ together with positional encodings for $T$, producing $\hat Z = g_\phi(Z_c, \text{pos}_T)$. The loss is a patch- or block-level average of a smooth-L1 (Huber) or squared error between predictions and target latents:

$$\mathcal{L}(\theta, \phi) = \frac{1}{|T|} \sum_{j \in T} d(\hat z_j, z'_j),$$

where $d(u,v) = \tfrac{1}{2}\|u - v\|_2^2$ if $\|u - v\|_1 < 1$ and $d(u,v) = \|u - v\|_1 - \tfrac{1}{2}$ otherwise. This “energy” is the quantity minimized during JEPA training.
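
This workflow can be made concrete with a minimal PyTorch sketch. The encoder stand-ins, the pooled-context predictor, the index tensors, and the EMA rate of 0.996 are illustrative assumptions rather than any cited paper's exact recipe; in particular, published predictors attend over all context tokens, whereas this sketch pools them for brevity.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Predictor(nn.Module):
    """Lightweight predictor g_phi: pooled context latents + target positions -> Z_hat."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, z_context_pooled, pos_target):
        # Pair the pooled context vector with each target positional encoding.
        n_targets = pos_target.shape[1]
        ctx = z_context_pooled.unsqueeze(1).expand(-1, n_targets, -1)
        return self.net(torch.cat([ctx, pos_target], dim=-1))

def jepa_step(f_theta, f_target, g_phi, optimizer,
              x_patches, pos, ctx_idx, tgt_idx, ema=0.996):
    """One JEPA training step: predict target latents from context latents."""
    z_c = f_theta(x_patches[:, ctx_idx])              # Z_c = f_theta(X_c)
    with torch.no_grad():                             # no gradient into the target branch
        z_t = f_target(x_patches[:, tgt_idx])         # Z_t = f_thetabar(X_t)
    z_hat = g_phi(z_c.mean(dim=1), pos[:, tgt_idx])   # Z_hat = g_phi(Z_c, pos_T)
    loss = F.smooth_l1_loss(z_hat, z_t)               # Huber distance, averaged over T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():                             # EMA update of the target encoder
        for p_t, p in zip(f_target.parameters(), f_theta.parameters()):
            p_t.mul_(ema).add_(p, alpha=1.0 - ema)
    return loss.item()

# Illustrative setup: the target encoder starts as a frozen copy of the context encoder.
# f_target = copy.deepcopy(f_theta)
# for p in f_target.parameters():
#     p.requires_grad_(False)
```

The key structural points are visible here: gradients flow only through the context branch, the target branch is a frozen EMA copy, and the smooth-L1 distance corresponds to the Huber loss above (PyTorch's elementwise variant stands in for the per-vector threshold).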

Distinct variants modify the backbone (ViT, CNN, GNN), loss, masking, and regularization, but all share the above principle.

2. Masking Strategies and Modality Dependence

JEPA methods critically depend on the design of the masking strategy that selects context and target positions (Riou et al., 2024, Tuncay et al., 25 Jun 2025, Fei et al., 2023). For images, multi-block masking (several contiguous blocks as targets) enforces local spatial continuity and yields pronounced accuracy gains. For audio spectrograms, however, masking must reflect cross-frequency event structure: random unstructured masking significantly outperforms multi-block or time-only (“stripe”) masking, which degrades performance by suppressing cross-frequency cues. In audio JEPA, so-called “latent-domain masking” (masking only in latent space, not input space) also impairs representations, contrary to its positive effect in the image domain.
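
A small index-sampling sketch makes the two regimes concrete; the grid sizes, target ratio, and block shapes below are hypothetical defaults, not values from the cited papers.

```python
import numpy as np

def random_mask(n_patches, target_ratio=0.5, rng=np.random):
    """Unstructured masking: each patch independently becomes a target (audio-friendly)."""
    perm = rng.permutation(n_patches)
    n_tgt = int(target_ratio * n_patches)
    return perm[n_tgt:], perm[:n_tgt]           # (context indices, target indices)

def multiblock_mask(grid_h, grid_w, n_blocks=4, block_h=3, block_w=3, rng=np.random):
    """Contiguous rectangular target blocks (the vision heuristic); C and T stay disjoint."""
    target = set()
    for _ in range(n_blocks):
        top = rng.randint(0, grid_h - block_h + 1)
        left = rng.randint(0, grid_w - block_w + 1)
        target.update((top + i) * grid_w + (left + j)
                      for i in range(block_h) for j in range(block_w))
    context = np.array([p for p in range(grid_h * grid_w) if p not in target])
    return context, np.array(sorted(target))
```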

Choice of temporal duration for pre-training segments is modality-specific: shorter input windows benefit fine temporal discrimination (word, pitch, speech tasks) while longer contexts favor global or coarse semantic classification (music genres, environmental sounds). The best-performing audio JEPA pipelines use only random masking and no additional data augmentation, highlighting the non-transferability of vision heuristics to audio (Riou et al., 2024, Tuncay et al., 25 Jun 2025, Fei et al., 2023).

3. Neural Architectures and Prediction Networks

The backbone for both context and target encoders is typically a Vision Transformer (ViT-Base: 12 layers, 768-dim, 12-head, sinusoidal positional encoding, FlashAttention for throughput), with a smaller ViT predictor (512-dim, 8 layers, 16 heads) (Riou et al., 2024, Tuncay et al., 25 Jun 2025). Patchwise processing is exclusively transformer-based, omitting any CNN front-end. Feature concatenation and pooling are task-specific (e.g., for audio, patch embeddings are stacked across frequency, pooled across time, and classified via linear heads).
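
A hedged sketch of this audio readout, assuming a (batch, patches, dim) embedding layout; the helper name and shapes are illustrative, not a published API.

```python
import torch
import torch.nn as nn

def audio_readout(patch_emb, n_freq_patches, n_time_patches, classifier):
    """patch_emb: (batch, n_freq_patches * n_time_patches, dim) frozen JEPA embeddings."""
    b, _, d = patch_emb.shape
    grid = patch_emb.view(b, n_freq_patches, n_time_patches, d)
    # Stack (concatenate) embeddings across frequency at each time step ...
    stacked = grid.permute(0, 2, 1, 3).reshape(b, n_time_patches, n_freq_patches * d)
    pooled = stacked.mean(dim=1)                # ... then mean-pool across time
    return classifier(pooled)                   # linear head -> class logits

# Illustrative head for a ViT-Base backbone (768-dim patches):
# classifier = nn.Linear(n_freq_patches * 768, n_classes)
```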

In graph JEPA, context and target branches are GNN stacks with Transformer encoding of structural features; prediction is performed by a multi-layer MLP, often with a hierarchical or hyperbolic latent objective that imposes semantic and hierarchical subspace structure (Skenderi et al., 2023, Piccoli et al., 22 Jun 2025). In trajectory JEPA, both context and target branches employ Transformers over pre-embedded cell sequences, augmented with spatio-temporal smoothing; variable-length, overlap-removed sampling constructs the context-target splits at each iteration (Li et al., 2024).
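
A minimal graph-side sketch under stated assumptions: a generic two-round message-passing encoder stands in for the GNN stack, and an MLP predicts target-subgraph embeddings in latent space. The normalized-adjacency propagation, pooling, and 256-dim width are simplifications, not the exact pipeline of the cited papers.

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """Stand-in for the GNN stack: two rounds of normalized-adjacency propagation."""
    def __init__(self, in_dim, dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, dim)
        self.lin2 = nn.Linear(dim, dim)

    def forward(self, x, adj_norm):
        # x: (n_nodes, in_dim); adj_norm: (n_nodes, n_nodes) normalized adjacency.
        h = torch.relu(adj_norm @ self.lin1(x))
        h = adj_norm @ self.lin2(h)
        return h.mean(dim=0)                    # subgraph-level embedding

# MLP predictor in latent space (256 is an illustrative width):
predictor = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))

# Sketch of the objective, with encoder/target_encoder as TinyGNN instances:
# z_ctx = encoder(x_ctx, adj_ctx)
# z_tgt = target_encoder(x_tgt, adj_tgt).detach()
# loss = torch.nn.functional.smooth_l1_loss(predictor(z_ctx), z_tgt)
```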

4. Objective Functions, Regularization, and Collapse Avoidance

JEPA’s objective is purely predictive in latent space, eschewing pixel/data reconstruction and any negative sampling. Representational collapse, in which all latent embeddings degenerate to constants, is mitigated in several ways: by a momentum (EMA) target encoder that tracks, but is never optimized jointly with, the context network, and by auxiliary objectives (e.g., value regression, VICReg regularization) (Mo et al., 2024, Hartman et al., 22 Apr 2025, Yu et al., 12 Sep 2025).

Contrastive-JEPA (C-JEPA) incorporates explicit variance/invariance/covariance penalties (VICReg) to robustly anchor the representation and prevent collapse; theoretical and empirical analyses show the necessity of such auxiliary terms when using EMA-based targets, especially for patch-wise prediction (Mo et al., 2024). No-Unhealthy Collapse theorems demonstrate that with a suitable auxiliary (e.g. reward), the representation becomes bisimulation-separating, preserving only meaningful distinctions (Yu et al., 12 Sep 2025).
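
For reference, a standard VICReg-style penalty of the kind C-JEPA adds looks as follows; the loss weights are the commonly used illustrative defaults, not values taken from the cited paper.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """z_a, z_b: (batch, dim) paired embeddings of the same content."""
    inv = F.mse_loss(z_a, z_b)                            # invariance term

    def variance(z):
        std = torch.sqrt(z.var(dim=0) + eps)
        return torch.relu(1.0 - std).mean()               # hinge keeps per-dim std above 1

    def covariance(z):
        z = z - z.mean(dim=0)
        cov = (z.T @ z) / (z.shape[0] - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return off_diag.pow(2).sum() / z.shape[1]         # decorrelate latent dimensions

    return (sim_w * inv
            + var_w * (variance(z_a) + variance(z_b))
            + cov_w * (covariance(z_a) + covariance(z_b)))
```

The variance hinge directly rules out the constant-embedding collapse described above, while the covariance term prevents dimensions from becoming redundant copies of each other.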

Specialized JEPA variants (SparseJEPA) introduce sparse group regularizers to promote interpretability and modularity of the latent space, reducing multi-information and focusing capacity on semantic factors (Hartman et al., 22 Apr 2025). Curriculum masking in audio systematically anneals from blockwise to time-frequency masks to capture both local and global patterns (Fei et al., 2023).
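
A group-lasso-style penalty in the spirit of SparseJEPA's sparse group regularizer might look like the sketch below; the group size and the exact form of the penalty are assumptions for illustration.

```python
import torch

def group_sparsity_penalty(z, group_size=16):
    """z: (batch, dim) latents; dim must be divisible by group_size (an assumption)."""
    b, d = z.shape
    groups = z.view(b, d // group_size, group_size)
    # L2 norm within each group, averaged (L1-like) across groups: whole groups
    # of latent dimensions are encouraged to switch off together.
    return groups.norm(dim=-1).mean()
```

In training, such a penalty would simply be added to the JEPA prediction loss with a small weight.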

5. Modalities, Extensions, and Representative Results

JEPA principles are now instantiated for:
  • vision (Littwin et al., 2024, Mo et al., 2024, Hartman et al., 22 Apr 2025)
  • audio (Riou et al., 2024, Tuncay et al., 25 Jun 2025, Fei et al., 2023)
  • graphs, including polymer molecular graphs (Skenderi et al., 2023, Piccoli et al., 22 Jun 2025)
  • trajectories and time series (Li et al., 2024)
  • multi-modal fusion

Sample empirical results for audio JEPA (Riou et al., 2024):

Task                  | JEPA (random mask) | Prior SOTA
ESC-50 (env. sound)   | 89.3%              | ATST 92.9%, MSM-MAE 88.6%
Speech Commands       | 94.9%              | M2D 95.4%
VoxCeleb1 (speaker)   | 60.8%              | M2D 73.1%, ATST 72.0%
GTZAN (music genre)   | 82.1%              | M2D 83.3%

SparseJEPA, by contrast, raises CIFAR-100 accuracy from 40.0% (standard JEPA) to 45.4%, and improves object-counting $R^2$ from 59.13% to 62.33% via structured sparsity.

6. Theoretical Insights and Predictive Bias

Distinct from pixel-reconstruction models (MAE), JEPA objectives prioritize high-influence, predictive features over noisy, high-variance directions (Littwin et al., 2024). Deep linear model analysis shows a bias towards features with the highest regression coefficients, a bias that amplifies with network depth and is fundamentally different from the variance-based preference of MAEs. In dynamical systems, JEPA's latent predictors implicitly align with Koopman regime invariants, clustering data by underlying dynamics if initialized and regularized appropriately; the predictor's near-identity bias plays a critical role in selecting this interpretable solution from a broad equivalence class (Ruiz-Morales et al., 12 Nov 2025).

Furthermore, JEPA's predictive coding is proven to extract "slow features," preferentially encoding static, easily predicted background signals; this becomes a limitation when task relevance depends on the dynamics of the subject rather than the background (Sobal et al., 2022).

7. Limitations, Modal Parameter Interactions, and Future Directions

The generality of JEPA belies strong modality-specific biases: masking strategies, duration, encoder design, auxiliary regularization, and predictor structure interact nontrivially with data statistics (Riou et al., 2024, Wan et al., 1 Oct 2025, Vo et al., 9 Mar 2025, Fei et al., 2023). Optimal choices for images often degrade audio or time-series performance; e.g., multi-block masking that boosts ImageNet accuracy is suboptimal for AudioSet and speech classification.

JEPA models are vulnerable to representational collapse unless carefully regularized, particularly in high-data or uniformly masked settings. Explicit injection of auxiliary tasks (reward, value, contrastive loss), structured sparsity, and spatial conditioning are effective but modality-sensitive remedies.

Prospective research directions include object-centric representations via sparse grouping, robust regime-disentanglement via Koopman-invariant predictors, and unified encoding pipelines for complex or multimodal sensory input. Theoretical understanding of latent prediction biases in nonlinear networks remains incomplete and is a target for ongoing work (Ruiz-Morales et al., 12 Nov 2025, Littwin et al., 2024, Hartman et al., 22 Apr 2025, Yu et al., 12 Sep 2025).


References:

(Riou et al., 2024; Tuncay et al., 25 Jun 2025; Fei et al., 2023; Hartman et al., 22 Apr 2025; Chen et al., 2024; Littwin et al., 2024; Skenderi et al., 2023; Vo et al., 9 Mar 2025; Mo et al., 2024; Wan et al., 1 Oct 2025; Ennadir et al., 29 Sep 2025; Yu et al., 12 Sep 2025; He et al., 21 Nov 2025; Ruiz-Morales et al., 12 Nov 2025; Piccoli et al., 22 Jun 2025; Sobal et al., 2022; Li et al., 2024)
