
One-Point Contraction (OPC)

Updated 7 April 2026
  • OPC is a family of techniques that contracts structured objects (tensors, domains, feature spaces) to a single point, simplifying analysis.
  • In tensor integrals, OPC avoids inverse Gram determinants by using analytic contraction formulas for efficient and stable computation.
  • In optimization and machine unlearning, OPC contracts domains and feature spaces, leading to improved convergence rates and robust data forgetting.

One-Point-Contraction (OPC) refers to a family of mathematical and algorithmic techniques arising independently in several domains, notably multiloop Feynman integral reduction, affine-invariant optimization, and deep learning unlearning. In each context, the core conceptual motif is the contraction or restriction of a structured object—such as a tensor, optimization domain, or feature distribution—to a single distinguished "point," yielding simplification, invariance, or erasure of information. This entry surveys the principal instantiations of OPC in contemporary research, with detailed formulations, algorithmic implementations, and application benchmarks.

1. OPC in Tensor Integral Reduction

In the context of one-loop $n$-point tensor integrals, One-Point-Contraction (OPC) is a contraction technique that yields compact analytic expressions for contracted tensor integrals, efficiently bypassing prohibitive Gram determinant divisions. Given the standard tensor integral

$$ I_n^{\mu_1 \dots \mu_R}(\{q_i, m_i\}) \;=\; \int d^D k\; k^{\mu_1} \cdots k^{\mu_R} \prod_{i=1}^{n} \frac{1}{(k - q_i)^2 - m_i^2 + i\epsilon}, \qquad D = 4 - 2\epsilon, $$

external momenta $p_j$ are related to "chord" vectors $q_i$ by $p_j = q_j - q_{j+1}$, with $q_n = 0$. Contraction of $I_n^{\mu_1 \dots \mu_R}$ with a single external momentum $p_{j\,\mu_1}$ yields a rank-$(R-1)$ object that, via the OPC formula, is free of inverse Gram determinants of the full $n$-point topology.

The OPC mechanism introduces auxiliary vectors $Q_s^\mu$, given by

$$ Q_s^\mu \;=\; \sum_{i=1}^{n} q_i^\mu\, \frac{\binom{s}{i}_n}{\binom{}{}_n}, $$

where the $\binom{s}{i}_n$ are signed minors of the $n$-point Gram matrix and $\binom{}{}_n$ is its determinant. Cayley–Jacobi identities guarantee that all explicit $1/\binom{}{}_n$ factors cancel upon full contraction. For example, for vector ($R = 1$) or higher-rank tensor contractions, the contracted integrals reduce to sums over Kronecker deltas and kinematic invariants $q_i \cdot q_j$, multiplied by scalar integrals of reduced dimension or rank. This approach is implemented algorithmically via the OLEC package, which precomputes minors and orchestrates the index summations, with numerical stability guaranteed by the absence of large inverse Gram determinant denominators (Fleischer et al., 2012).
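
The cancellation mechanism rests on classical determinant identities. As a numerical illustration (a generic random matrix standing in for a kinematic Gram matrix, not a configuration from the paper), the Desnanot–Jacobi identity, a Cayley–Jacobi-type relation, ties the determinant and a second-order minor to products of first-order minors:

```python
import numpy as np

def minor(A, rows, cols):
    """Determinant of A with the listed rows and columns deleted."""
    keep_r = [i for i in range(A.shape[0]) if i not in rows]
    keep_c = [j for j in range(A.shape[1]) if j not in cols]
    return np.linalg.det(A[np.ix_(keep_r, keep_c)])

# Desnanot-Jacobi identity:
#   det(A) * minor(A, {first,last}, {first,last})
#     = minor(A,{first},{first}) * minor(A,{last},{last})
#     - minor(A,{first},{last})  * minor(A,{last},{first})
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))   # generic stand-in for a Gram matrix
n = A.shape[0]
lhs = np.linalg.det(A) * minor(A, [0, n - 1], [0, n - 1])
rhs = (minor(A, [0], [0]) * minor(A, [n - 1], [n - 1])
       - minor(A, [0], [n - 1]) * minor(A, [n - 1], [0]))
assert np.isclose(lhs, rhs)       # the determinant factor is exactly absorbed
```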

2. OPC in Affine-Invariant Convex Optimization

In convex optimization, the One-Point-Contraction operator serves as a domain-shrinking mechanism. Given a compact convex feasible set $Q$ and a reference point $\bar{x} \in Q$, the OPC operator with contraction parameter $\gamma \in (0, 1]$ defines the contracted domain

$$ S_{\gamma}(\bar{x}) \;=\; (1 - \gamma)\,\bar{x} + \gamma\, Q \;=\; \{\, (1 - \gamma)\,\bar{x} + \gamma\, y \;:\; y \in Q \,\}. $$

This contraction is used iteratively: at each step $k$, the algorithm forms $S_{\gamma_k}(x_k)$, gradually pulling the domain toward the current iterate $x_k$. Within this contracted domain, a $p$th-degree Taylor approximation model of the objective is minimized inexactly, producing iterates with provable affine-invariant convergence rate $O(1/k^p)$. The method generalizes the Frank–Wolfe algorithm ($p = 1$) and admits a trust-region interpretation for $p = 2$, with the subproblem solved as an inexact (contracting) Newton step. All smoothness constants and complexity bounds are invariant under affine transformations, rendering the method robust to domain geometry. Benchmark experiments on softmax minimization over the simplex demonstrate that higher-order OPC achieves substantial reductions in iteration count and wall-clock time compared to first-order methods (Doikov et al., 2020).
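
To make the contraction concrete, here is a minimal Python sketch of the first-order ($p = 1$) instance over the probability simplex: minimizing the linear model over $S_{\gamma_k}(x_k)$ collapses to the familiar Frank–Wolfe update. The simplex oracle, step schedule $\gamma_k = 2/(k+2)$, and the synthetic log-sum-exp objective are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lmo_simplex(grad):
    """Linear minimization oracle over the probability simplex:
    argmin_{y in Q} <grad, y> is the vertex at the smallest gradient entry."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def opc_first_order(grad_f, x0, steps=500):
    """Contracting-domain method with a first-order Taylor model.

    Minimizing the linear model over S_k = (1 - gamma_k) x_k + gamma_k Q gives
    x_{k+1} = (1 - gamma_k) x_k + gamma_k v with v = LMO(grad f(x_k)),
    i.e., exactly a Frank-Wolfe step.
    """
    x = x0.copy()
    for k in range(steps):
        gamma = 2.0 / (k + 2)              # contracting schedule (assumed)
        v = lmo_simplex(grad_f(x))         # minimize linear model over Q
        x = (1.0 - gamma) * x + gamma * v  # point in the contracted domain
    return x

# Example: smoothed softmax (log-sum-exp) objective over the simplex,
# mirroring the benchmark described above (the matrix A is synthetic).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
mu = 0.1

def grad_f(x):
    z = A @ x / mu
    z -= z.max()                           # stabilize the exponentials
    w = np.exp(z); w /= w.sum()
    return A.T @ w                         # gradient of mu * logsumexp(Ax/mu)

x_star = opc_first_order(grad_f, np.ones(20) / 20)
```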

3. OPC in Machine Unlearning

In deep learning unlearning, One-Point-Contraction refers to a transformation of the internal feature space to ensure "deep forgetting" of targeted samples. For a model decomposed as $f = h \circ g$, with $g$ the encoder and $h$ the prediction head, OPC enforces that for all $x$ in the forget set $\mathcal{D}_f$, the feature representations $g(x)$ are norm-contracted to a ball of radius $\epsilon$ about the origin:

$$ \| g(x) \| \;\le\; \epsilon \qquad \text{for all } x \in \mathcal{D}_f. $$

This restriction, when enforced at the logits level with $h$ Lipschitz, yields high-entropy softmax outputs, implying genuine erasure of discriminative information. The Deep Forgetting criterion is formalized by showing that if $\|g(x)\| \le \epsilon$, then the entropy $H\big(\mathrm{softmax}(h(g(x)))\big)$ is bounded below by a function tending to $\log K$ as $\epsilon \to 0$, where $K$ is the class count.
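
A short sanity check of this entropy bound (a sketch using the sup-norm on logits; the paper's exact constants may differ): if the logits $z = h(g(x))$ satisfy $\|z\|_\infty \le \delta$, then each softmax probability obeys $p_i = e^{z_i}/\sum_j e^{z_j} \le e^{\delta}/(K e^{-\delta}) = e^{2\delta}/K$, hence

$$ H(p) \;\ge\; -\log \max_i p_i \;\ge\; \log K - 2\delta, $$

which approaches the maximal entropy $\log K$ as $\delta \to 0$.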

OPC is practically approximated by optimizing a joint loss:

$$ \mathcal{L}(\theta) \;=\; \mathbb{E}_{(x,y)\sim \mathcal{D}_r}\big[\mathrm{CE}(f_\theta(x),\, y)\big] \;+\; \lambda\, \mathbb{E}_{x \sim \mathcal{D}_f}\big[\, \| f_\theta(x) \|^2 \,\big], $$

where $\mathcal{D}_r$ is the retain set, CE denotes cross-entropy, and $\lambda > 0$ controls the retention–forgetting tradeoff. Training (typically via SGD) proceeds from the pretrained parameters $\theta_{\mathrm{pre}}$ to the unlearned parameters $\theta_{\mathrm{unlearn}}$, with all layers updated.
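
A minimal PyTorch-style sketch of one optimization step on this objective (the function name, batch interface, and default $\lambda$ are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def opc_unlearn_step(model, retain_batch, forget_batch, optimizer, lam=1.0):
    """One OPC unlearning step: cross-entropy on the retain set plus
    a squared-norm contraction of the logits on the forget set."""
    x_r, y_r = retain_batch
    x_f, _ = forget_batch

    logits_r = model(x_r)                  # keep predictive accuracy here
    loss_retain = F.cross_entropy(logits_r, y_r)

    logits_f = model(x_f)                  # pull these toward the origin
    loss_forget = logits_f.pow(2).sum(dim=1).mean()

    loss = loss_retain + lam * loss_forget # lam trades retention vs. forgetting
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```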

Empirical benchmarks confirm that OPC achieves superior robustness to recovery and inversion attacks (e.g., feature-map alignment, gradient-inversion), resisting reconstruction beyond chance levels for class-unlearning in CIFAR-10 and SVHN. Feature similarity (CKA) between pre- and unlearned models on the forget set collapses nearly to zero only under OPC, supporting the claim of "deep feature forgetting" (Jung et al., 10 Jul 2025).

4. Algorithmic Implementations and Pseudocode

4.1 Tensor Integrals

OLEC implements OPC for tensor integrals with the following generic steps:

  • Precompute all necessary signed minors up to the required order.
  • Store minors in a hash-table keyed by sorted row/column indices.
  • For each contraction degree $R$ and each multi-index, sum over all relevant subset minors to compute the corresponding coefficient.
  • Multiply by the corresponding scalar integral fetched from a library (e.g., LoopTools/OneLoop).
  • Accumulate terms to assemble the contracted tensor.

A representative algorithm in C++-style pseudocode is provided in (Fleischer et al., 2012).
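
The bookkeeping in the first two steps can be sketched in Python as follows (the signed-minor convention shown is the ordinary one for a Gram matrix; OLEC's actual conventions and data layout may differ):

```python
import numpy as np
from itertools import combinations

def signed_minor(G, rows, cols):
    """Signed minor of G: delete `rows` and `cols`, take the determinant,
    and attach the sign (-1)^(sum of deleted indices)."""
    n = G.shape[0]
    keep_r = [i for i in range(n) if i not in rows]
    keep_c = [j for j in range(n) if j not in cols]
    sign = (-1) ** (sum(rows) + sum(cols))
    return sign * np.linalg.det(G[np.ix_(keep_r, keep_c)])

def precompute_minors(G, max_order):
    """Hash table of signed minors keyed by sorted (rows, cols) tuples,
    mirroring the bookkeeping steps listed above."""
    n = G.shape[0]
    table = {}
    for k in range(max_order + 1):
        for rows in combinations(range(n), k):   # tuples come out sorted
            for cols in combinations(range(n), k):
                table[(rows, cols)] = signed_minor(G, rows, cols)
    return table

# Example on a synthetic symmetric matrix standing in for a Gram matrix.
rng = np.random.default_rng(2)
q = rng.standard_normal((4, 4))
G = q @ q.T
table = precompute_minors(G, max_order=2)
```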

4.2 Convex Optimization

Algorithm OPC for order-$p$ minimization iteratively selects a contraction parameter $\gamma_k$, contracts the domain, forms the $p$th-order Taylor model, and solves the auxiliary subproblem. Special cases recover classical methods: $p = 1$ coincides with Frank–Wolfe, while $p = 2$ implements a contracting Newton step. Pseudocode is provided in (Doikov et al., 2020).

4.3 Machine Unlearning

The OPC unlearning routine initializes from the pretrained model $\theta_{\mathrm{pre}}$ and uses minibatch SGD to minimize the sum of cross-entropy on the retain set and squared-$\ell_2$-norm contraction of logits on the forget set, requiring no architectural modification. This algorithm effectively ensures no linear or nonlinear structure remains for forgotten data in the feature manifold (Jung et al., 10 Jul 2025).

5. Quantitative and Qualitative Evaluation

The respective OPC frameworks are evaluated as follows:

  • Tensor integrals: Compactness and efficiency of contraction formulas, numerical stability, and cross-section calculations demonstrate the practical utility, with no explicit inverse Gram determinants required (Fleischer et al., 2012).
  • Optimization: Experiments minimizing smoothed softmax over the simplex show that OPC-Newton ($p = 2$) dramatically outperforms first-order Frank–Wolfe ($p = 1$) in both iteration count and computation time, confirming the theoretical $O(1/k^p)$ rates (Doikov et al., 2020).
  • Unlearning: On standard benchmarks (CIFAR-10, SVHN), OPC uniquely achieves high unlearning accuracy (UA ≈ 100%), minimal attack recovery (UA drops to chance under strong attacks), and feature similarity near zero (CKA), outperforming a suite of 12 baseline unlearning methods (Jung et al., 10 Jul 2025).
Domain | Key Metric / Outcome | OPC Characteristic
Tensor Integrals | Gram determinant avoidance | Analytic delta-cancellation
Convex Optimization | Affine-invariant rate $O(1/k^p)$ | Shrinking contracted domains
Machine Unlearning | Deep feature contraction, robust UA | Feature erasure, attack resistance

6. Theoretical Significance and Limitations

OPC frameworks in all domains share the property of enforcing structural collapse or contraction—whether of tensor indices, feasible sets, or representation manifolds—enabling analytical tractability, geometric invariance, or information erasure. In optimization, OPC achieves rates matching the best possible for tensor methods, with total complexity described in precise oracle counts. In machine unlearning, OPC achieves deep forgetting as characterized by entropy lower bounds and operator norm arguments.

Limitations noted in the unlearning literature include the computational cost of retraining all model parameters for large architectures, difficulty of extending the method to non-classification or generative settings, and the potentially undesirable complete collapse of within-class variance in some applications. Extensions to attention-based heads, Fisher–information-based approximations, and partial unlearning remain active topics of research (Jung et al., 10 Jul 2025). In optimization, OPC’s trust-region design accommodates domain geometry but may introduce complexity in nonsmooth regimes (Doikov et al., 2020).

7. Connections and Perspectives

The unifying conceptual theme of One-Point-Contraction—enforcing collapse around a point to simplify structure, guarantee invariance, or erase information—has enabled advances in Feynman diagram techniques, optimization, and privacy-enforcing machine learning. Despite disparate mathematical settings, the motif of OPC as analytic contraction or domain reduction persists. In each field, ongoing research seeks to refine, generalize, and apply OPC-inspired methodologies, both theoretically and in applied algorithmics (Fleischer et al., 2012, Doikov et al., 2020, Jung et al., 10 Jul 2025).
