Proxy Embedding Geometries

Updated 12 February 2026
  • Proxy Embedding Geometries are methods that represent complex entities as weighted combinations of shared proxy elements, enabling compact and interpretable embedding spaces.
  • They enhance data efficiency and generalization by leveraging structured simplex, hierarchical, and probabilistic geometric formulations in various applications.
  • These techniques underpin advances in recommender systems, visual processing, metric learning, 3D synthesis, few-shot classification, and even algebraic geometry.

Proxy embedding geometries are geometric frameworks in which entity representations—whether items, classes, image parts, or nodes—are expressed not as isolated points but as convex, hierarchical, or probabilistically-weighted combinations of a smaller set of shared “proxy” elements. This paradigm enables compact, high-quality, and often more interpretable embedding spaces, improving data efficiency, generalization, and controllability across machine learning domains such as recommender systems, metric learning, representation learning on graphs, neural rendering, and geometric modeling.

1. Foundational Formulations and Simplex-based Proxy Geometries

At the core of the proxy embedding approach is the expression of complex embeddings via a weighted sum (or, more generally, a composition) of a fixed dictionary of proxy vectors. The proxy-based item representation (PIR) model for recommender systems exemplifies this construction: given $K$ proxy embeddings $\{p_k\}_{k=1}^K \subset \mathbb{R}^d$, each item $i$ is represented as $e_i = \sum_{k=1}^K \alpha_{ik} p_k$, where $\alpha_i = \mathrm{softmax}(\phi(f_i, c_i) + b_i^{\mathrm{freq}}) \in \Delta^{K-1}$ is a convex weight vector parametrized by the item's attributes and context through a small feedforward network $\phi$ (with an optional bias for frequent items). The entire set of item embeddings thus resides inside the convex hull (simplex) spanned by the proxy vertices (Seol et al., 2023).

Geometric Consequences:

  • All embeddings are constrained to a $(K-1)$-dimensional simplex in $\mathbb{R}^d$.
  • Proxy vectors are well-trained via strong gradient signals from frequent items.
  • Even infrequent/rare entities are embedded within the well-populated simplex, ensuring high embedding quality.
  • The low-rank structure implicitly regularizes the representation, often improving generalization for rare items.

Compared to a full embedding table of size $|\mathcal{I}| \times d$, where $\mathcal{I}$ is the item set, the number of parameters is drastically reduced ($K \ll |\mathcal{I}|$), while performance often improves due to better signal sharing and geometry (Seol et al., 2023).
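The construction above can be sketched in a few lines of numpy. The network $\phi$ is reduced to a single linear map and the frequency bias is omitted, so the names here (`W`, `features`) are illustrative stand-ins rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, n_items, f_dim = 4, 8, 100, 6      # proxies, embedding dim, items, feature dim

P = rng.normal(size=(K, d))              # shared proxy dictionary {p_k}
W = rng.normal(size=(f_dim, K))          # stand-in for the network phi: one linear map
features = rng.normal(size=(n_items, f_dim))   # item attribute/context features f_i

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

alpha = softmax(features @ W)            # convex weights alpha_i on the (K-1)-simplex
E = alpha @ P                            # item embeddings e_i = sum_k alpha_ik p_k
```

Because every row of `alpha` sums to one, every item embedding lies in the convex hull of the $K$ proxy vectors, and the table of $|\mathcal{I}| \times d$ free parameters is replaced by $K \times d$ proxies plus the small network.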

2. Hierarchical and Structured Proxy Embedding Geometries in Visual Domains

In neural representation learning for images and videos, proxy embedding geometries are constructed with explicit spatial, semantic, and hierarchical structuring. ProxyImg and its derivatives represent images as a layered composition of semantic masks, where each mask is bounded by an adaptively-fitted Bézier curve and filled with an internal proxy mesh via triangulation (Chen et al., 2 Feb 2026).

Key Elements:

  • Boundary proxies: Cubic Bézier control points with adaptive subdivision, tightly approximating semantic segment boundaries.
  • Internal proxies: Multi-scale mesh vertices, located by hierarchical region subdivision reflecting local image structure.
  • Texture embedding: Learnable feature codes are attached to proxies and interpolated to arbitrary coordinates for decoding via MLPs, supporting continuous and high-fidelity rendering.
  • Decoupling: Shape is parameterized by the geometry of proxies, appearance by the feature codes, enabling fine-grained, independent editing.
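A minimal numeric sketch of the texture-embedding step, assuming a single proxy triangle with per-vertex feature codes and a stand-in two-layer decoder (all shapes and weight names here are illustrative, not taken from the papers):

```python
import numpy as np

rng = np.random.default_rng(1)
d_feat = 16

# Three proxy vertices of one mesh triangle, each carrying a learnable feature code.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
codes = rng.normal(size=(3, d_feat))

def barycentric(p, tri):
    """Barycentric coordinates of point p w.r.t. a triangle given as a 3x2 array."""
    a, b, c = tri
    T = np.column_stack([b - a, c - a])
    l1, l2 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l1 - l2, l1, l2])

def decode(feat, W1, W2):
    """Tiny stand-in MLP decoder: relu hidden layer, sigmoid RGB output."""
    return 1.0 / (1.0 + np.exp(-(np.maximum(feat @ W1, 0.0) @ W2)))

W1 = rng.normal(size=(d_feat, 32))
W2 = rng.normal(size=(32, 3))

p = np.array([0.25, 0.25])       # arbitrary query coordinate inside the triangle
w = barycentric(p, tri)          # interpolation weights over the three proxies
rgb = decode(w @ codes, W1, W2)  # continuous decoding at any coordinate
```

Because the weights `w` vary continuously with `p`, the decoded appearance is defined at arbitrary coordinates, while editing the triangle's geometry leaves the feature codes (and hence the appearance channel) untouched.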

Similar formulations underpin spatio-temporally consistent video representation, where proxy nodes move over time (propagated via optical flow/tracking), and hierarchical layers organize proxies semantically and geometrically (Chen et al., 14 Oct 2025).

3. Proxy Embeddings for Probabilistic and Metric Learning

Proxy-based geometries extend beyond simplex constructions to probabilistic spaces. In deep metric learning, non-isotropic proxy-based approaches model samples and classes as distributions—e.g., images as von Mises–Fisher (vMF) distributions on the hypersphere and class proxies as non-isotropic vMFs (with diagonal concentration/covariances) (Kirchhof et al., 2022).

Technical Highlights:

  • Each sample embedding $z$ is parameterized by its norm $\|z\|$ (encoding per-sample certainty) and its direction $\mu_z$ (semantic content).
  • Each class proxy is modeled as a distribution with direction $\mu_p$ and diagonal concentration matrix $K_p$, allowing class anisotropy and intra-class variance modeling.
  • Loss functions and geometric distances are defined between distributions (e.g., Bhattacharyya, KL, expected-likelihood), not just points.
  • This construction enables uncertainty-aware training dynamics (confidence-weighted updates), models class substructures, and improves generalization in retrieval tasks.
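A minimal numeric sketch of the confidence-weighted geometry, assuming an isotropic vMF for the sample (concentration $\kappa = \|z\|$) and dropping the normalization constants and the proxies' diagonal concentrations for brevity — a simplification of the paper's setup, not a faithful reimplementation:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_classes = 8, 5

z = rng.normal(size=d)                 # raw embedding from the encoder
kappa = np.linalg.norm(z)              # norm ||z||: per-sample certainty
mu_z = z / kappa                       # direction: semantic content

proxies = rng.normal(size=(n_classes, d))
mu_p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)

# Unnormalized vMF log-likelihood of the sample direction under each class
# proxy: log p(mu_z) ∝ kappa * <mu_p, mu_z>. A confident sample (large kappa)
# yields sharper class posteriors and, during training, stronger updates.
logits = kappa * (mu_p @ mu_z)
posterior = np.exp(logits - logits.max())
posterior /= posterior.sum()
```

The key property visible here is that certainty and semantics are decoupled: rescaling `z` changes the sharpness of `posterior` without changing which class is preferred.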

4. Proxy-based Geometries in Graph and Network Embedding

In the context of attributed graphs and networks, proxy geometries often take the form of distance or similarity proxies derived from pairwise relations. GAGE constructs double-centered squared-distance matrices for both the adjacency and attribute spaces, serving as proxies for geometric distances. These are then jointly approximated via low-rank tensor factorization (using CP decomposition) to yield unique, permutation-invariant node embeddings that preserve both topological and attribute geometries (Kanatsoulis et al., 2020).
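The double-centering transform applied to the squared-distance proxies is the standard one from classical multidimensional scaling; a short sketch, with a planted 3-D point configuration to show that the centered matrix is the Gram matrix of the underlying geometry:

```python
import numpy as np

def double_center(D2):
    """B = -1/2 * J @ D2 @ J with J = I - (1/n) 11^T: the double-centered
    squared-distance matrix equals the Gram matrix of the mean-centered
    underlying point configuration."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ D2 @ J

# Planted 3-D configuration: its squared pairwise distances, once
# double-centered, recover a Gram matrix of rank 3.
rng = np.random.default_rng(3)
X = rng.normal(size=(10, 3))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
B = double_center(D2)
```

In GAGE's setting this transform is applied to both the adjacency-derived and attribute-derived distance matrices before the joint low-rank (CP) factorization.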

Model-free network embedding methods such as MDS also implicitly define proxy geometries by minimizing the total stress between network-theoretic (graph geodesic) distances and Euclidean embedding distances. The embedding coordinates can be interpreted as positions in a hidden geometric space, with radial and angular structure corresponding to centrality and communities (Zhang et al., 2020).
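The stress objective can be written down directly; in this sketch `D_graph` stands in for the matrix of graph geodesic distances:

```python
import numpy as np

def stress(D_graph, X):
    """Raw MDS stress: sum of squared gaps between target (graph geodesic)
    distances and Euclidean distances of the embedding X, over all pairs."""
    D_emb = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.triu_indices(len(X), k=1)
    return ((D_graph[i, j] - D_emb[i, j]) ** 2).sum()

# Sanity check: an embedding that reproduces its own distance matrix
# exactly has zero stress.
rng = np.random.default_rng(4)
X = rng.normal(size=(6, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
perfect = stress(D, X)
```

Minimizing this quantity over `X` (e.g. by gradient descent or SMACOF-style majorization) yields the hidden-space coordinates whose radial and angular structure the text describes.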

5. Proxy Embedding Geometries for Controllable 3D and Animation Representations

Recent visual synthesis frameworks use proxy embedding to decouple geometric control from appearance synthesis in 3D-aware settings. In 3DProxyImg, sparse 3D proxy vertices (obtained from fused, registered point clouds or meshes) carry per-vertex feature codes, and pixel colors are reconstructed via barycentric interpolation and neural decoding (Zhu et al., 17 Dec 2025). Animation and deformation are accomplished by manipulating the proxy mesh—via rigging, linear blend skinning, or position-based dynamics—while the appearance remains fixed to the proxy features. Score distillation sampling (SDS) is used for multi-view consistency during training.

Distinctive properties include:

  • Direct, low-rank geometric control over 3D structure and animation.
  • Separation of mesh-based geometry parameters and high-frequency textural appearance channels.
  • Robustness to viewpoint changes, efficient parameterization, and strong identity preservation compared to traditional pipelines.
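Of the deformation mechanisms listed above, linear blend skinning is the simplest to sketch: each proxy vertex moves as a convex blend of bone transforms, while any appearance codes attached to the vertex ride along unchanged. The bones, weights, and transforms below are illustrative placeholders:

```python
import numpy as np

def lbs(vertices, weights, transforms):
    """Linear blend skinning: v' = sum_j w_j * (T_j v) for homogeneous v,
    with convex per-vertex bone weights w and 4x4 bone transforms T_j."""
    homo = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)  # (n, 4)
    posed = np.einsum('nj,jkl,nl->nk', weights, transforms, homo)           # blend bones
    return posed[:, :3]

rng = np.random.default_rng(5)
V = rng.normal(size=(8, 3))                  # sparse 3D proxy vertices
W = rng.random(size=(8, 2))
W /= W.sum(axis=1, keepdims=True)            # convex bone weights per vertex

# Two bones: identity, and a unit translation along x.
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
V_posed = lbs(V, W, T)
```

Each vertex shifts along x by exactly its weight on the translated bone, illustrating the low-rank, directly controllable geometry channel: appearance features indexed by the vertices need no update at all.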

6. Proxy Embeddings in Meta-learning and Few-shot Learning

In few-shot classification (ProxyNet), proxies represent class centers on a per-episode basis. The embedding function maps samples to feature spaces, attention networks compute soft-weighted class proxies from support sets, and learned similarity metrics (via relation networks) drive classifier decisions. Cross-entropy loss enforces intra-class compactness (pulling queries to class proxies) and inter-class separation (pushing between proxies), thus sculpting a geometry where proxies function as dynamic centroids for each episode (Xiao et al., 2020).
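A sketch of the episode-level proxy construction, with a similarity-to-class-mean heuristic standing in for the learned attention network and cosine similarity standing in for the relation network (both substitutions are simplifications, not the paper's components):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(6)
n_way, k_shot, d = 3, 5, 16

support = rng.normal(size=(n_way, k_shot, d))   # embedded support samples per class
query = rng.normal(size=d)                      # one embedded query sample

# Attention over support samples (heuristic stand-in for the attention net):
# attend by similarity to the class mean.
mean = support.mean(axis=1, keepdims=True)
attn = softmax((support * mean).sum(axis=-1), axis=-1)   # (n_way, k_shot)
proxies = (attn[..., None] * support).sum(axis=1)        # soft-weighted class proxies

# Cosine similarity stands in for the learned relation network.
sims = proxies @ query / (np.linalg.norm(proxies, axis=1) * np.linalg.norm(query))
pred = int(np.argmax(sims))
```

Because the proxies are recomputed from each episode's support set, they act as dynamic centroids: the cross-entropy loss over `sims` pulls queries toward their class proxy and pushes the proxies apart.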

7. Proxy Geometries in Mathematical Physics and Algebraic Geometry

In high-energy physics, "proxy embedding geometries" can refer to the embedding of complicated geometric objects (Calabi–Yau hypersurfaces) associated with Feynman integrals into weighted projective spaces. Here, the geometry and embedding structure is explicitly determined by the algebraic constraints of the underlying physical integral, with geometric invariants (e.g., Hodge numbers, intersection rings) and mirror symmetry providing the structural context (Bourjaily et al., 2019).

These embeddings serve as "proxy" structures for the computation of periods, Picard–Fuchs equations, and mirror symmetry relations in enumerative geometry.


Proxy embedding geometries provide a unified abstraction spanning neural representations, probabilistic modeling, geometric processing, and mathematical embedding theory. The central unifying principle is the decomposition or parameterization of complex objects—across discrete, continuous, or hierarchical domains—through a small number of shared, interpretable, and optimizable proxies, linked through convexity, hierarchical interpolation, probabilistic weighting, or algebraic embedding constructs. This paradigm underpins improvements in data efficiency, interpretability, controllability, parameter compactness, and, crucially, generalization across sparse, compositional, and underobserved data regimes (Seol et al., 2023, Chen et al., 2 Feb 2026, Chen et al., 14 Oct 2025, Kirchhof et al., 2022, Zhu et al., 17 Dec 2025, Xiao et al., 2020, Kanatsoulis et al., 2020, Zhang et al., 2020, Bourjaily et al., 2019).
