One-Point-Contraction (OPC)
- OPC is a family of techniques that contracts structured objects (tensors, domains, feature spaces) to a single point, simplifying analysis.
- In tensor integrals, OPC avoids inverse Gram determinants by using analytic contraction formulas for efficient and stable computation.
- In optimization and machine unlearning, OPC contracts domains and feature spaces, leading to improved convergence rates and robust data forgetting.
One-Point-Contraction (OPC) refers to a family of mathematical and algorithmic techniques arising independently in several domains, notably one-loop Feynman tensor-integral reduction, affine-invariant optimization, and machine unlearning in deep networks. In each context, the core conceptual motif is the contraction or restriction of a structured object—such as a tensor, optimization domain, or feature distribution—to a single distinguished "point," yielding simplification, invariance, or erasure of information. This entry surveys the principal instantiations of OPC in contemporary research, with detailed formulations, algorithmic implementations, and application benchmarks.
1. OPC in Tensor Integral Reduction
In the context of one-loop $n$-point tensor integrals, One-Point-Contraction (OPC) is a contraction technique that yields compact analytic expressions for contracted tensor integrals, efficiently bypassing prohibitive Gram determinant divisions. Given the standard tensor integral

$$I_n^{\mu_1\cdots\mu_R} = \frac{1}{i\pi^{d/2}}\int d^dk\; \frac{k^{\mu_1}\cdots k^{\mu_R}}{\prod_{j=1}^{n}\left[(k-q_j)^2 - m_j^2\right]},$$

the external momenta $p_i$ are related to "chord" vectors $q_j$ by $q_j = \sum_{i=1}^{j} p_i$, with $q_n = 0$. Contraction of $I_n^{\mu_1\cdots\mu_R}$ with a single chord (external momentum combination) yields a rank-$(R-1)$ object that, via the OPC formula, is free of inverse Gram determinants of the full $n$-point topology.
The OPC mechanism introduces auxiliary vectors $Q_s^{\mu}$, given by

$$Q_s^{\mu} = \sum_{i=1}^{n-1} q_i^{\mu}\, \frac{\binom{s}{i}_n}{()_n},$$

where $\binom{s}{i}_n$ are signed minors of the $n$-point Gram matrix and $()_n$ is its determinant. Cayley–Jacobi identities guarantee that all explicit $1/()_n$ factors cancel upon full contraction. For example, for vector ($R=1$) or tensor ($R=2,3$) contractions, the contracted integrals reduce to sums over Kronecker deltas and kinematic invariants $q_i \cdot q_j$, multiplied by scalar integrals of reduced dimension or rank. This approach is implemented algorithmically via the OLEC package, which precomputes minors and orchestrates the multi-index summations, with numerical stability guaranteed by the absence of large Gram determinant denominators (Fleischer et al., 2012).
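The delta-cancellation can be checked numerically. The following minimal NumPy sketch (illustrative only, not the OLEC implementation; it uses a Euclidean metric and generic chord vectors) builds auxiliary vectors from signed cofactors of the chord Gram matrix and verifies that contractions with chords return exact Kronecker deltas, with no $1/\det G$ surviving:

```python
import numpy as np

def signed_minor(M, rows, cols):
    """Signed minor of a symmetric matrix M: delete the listed rows and
    columns, take the determinant, and attach the cofactor sign."""
    sub = np.delete(np.delete(M, rows, axis=0), cols, axis=1)
    return (-1) ** (sum(rows) + sum(cols)) * np.linalg.det(sub)

def auxiliary_vectors(q):
    """Q_s = (1/det G) * sum_i q_i * minor(s, i), with G_ij = 2 q_i.q_j."""
    G = 2.0 * q @ q.T                       # chord Gram matrix
    n1 = q.shape[0]
    Q = np.zeros_like(q)
    for s in range(n1):
        for i in range(n1):
            Q[s] += q[i] * signed_minor(G, [s], [i])
    return Q / np.linalg.det(G)

# Contraction check: 2 q_j . Q_s = delta_{js}, so no 1/det(G) survives.
q = np.array([[1.0, 0.2, 0.0],
              [0.3, 1.5, 0.1],
              [0.0, 0.4, 2.0]])
Q = auxiliary_vectors(q)
assert np.allclose(2.0 * q @ Q.T, np.eye(3))
```

The identity $2\,q_j \cdot Q_s = \delta_{js}$ follows from the cofactor expansion $\operatorname{adj}(G)\,G = \det(G)\,I$, which is the finite-dimensional core of the Cayley–Jacobi cancellations noted above.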
2. OPC in Affine-Invariant Convex Optimization
In convex optimization, the One-Point-Contraction operator serves as a domain-shrinking mechanism. Given a compact convex feasible set $Q$ and a reference point $\bar{x} \in Q$, the OPC operator with contraction parameter $\gamma \in (0,1]$ defines the contracted domain

$$Q_{\gamma}(\bar{x}) = (1-\gamma)\,\bar{x} + \gamma\, Q = \left\{ (1-\gamma)\bar{x} + \gamma y \;:\; y \in Q \right\}.$$

This contraction is used iteratively: at each step $k$, the algorithm forms $S_k = Q_{\gamma_k}(x_k)$, gradually pulling the domain toward the current iterate $x_k$. Within this contracted domain, a $p$-th-degree Taylor approximation model of the objective is minimized inexactly, producing iterates with provable affine-invariant convergence rate $O(1/k^p)$. The method generalizes the Frank–Wolfe algorithm ($p=1$) and admits a trust-region interpretation for $p=2$, with the subproblem solved as an inexact (contracting) Newton step. All smoothness constants and complexity bounds are invariant under affine transformations, rendering the method robust to domain geometry. Benchmark experiments on softmax minimization over the simplex demonstrate that higher-order OPC achieves substantial reductions in iteration count and wall-clock time compared to first-order methods (Doikov et al., 2020).
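For concreteness, here is a minimal sketch of the $p=1$ case on the probability simplex, applied to a smoothed softmax objective of the kind used in the benchmarks. The objective, the schedule $\gamma_k = 2/(k+2)$, and all function names are illustrative assumptions, not the paper's code; minimizing the linear Taylor model over the contracted simplex reduces exactly to a Frank–Wolfe step:

```python
import numpy as np

def softmax_obj(A, x, mu):
    """Smoothed max over the rows of A: f(x) = mu * logsumexp(Ax / mu)."""
    z = A @ x / mu
    zmax = z.max()
    return mu * (zmax + np.log(np.exp(z - zmax).sum()))

def softmax_grad(A, x, mu):
    z = A @ x / mu
    p = np.exp(z - z.max())
    return A.T @ (p / p.sum())

def contracting_point_p1(A, mu=0.1, iters=500):
    """p = 1 contracting-point method over the simplex: minimizing the
    linear model on (1 - gamma) * x_k + gamma * simplex is a FW step."""
    n = A.shape[1]
    x = np.full(n, 1.0 / n)                   # start at the barycenter
    for k in range(iters):
        gamma = 2.0 / (k + 2)                 # contraction schedule
        g = softmax_grad(A, x, mu)
        v = np.zeros(n); v[np.argmin(g)] = 1  # best vertex of the simplex
        x = (1 - gamma) * x + gamma * v       # point in the contracted domain
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x = contracting_point_p1(A)
print(softmax_obj(A, x, 0.1))
```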
3. OPC in Machine Unlearning
In machine unlearning for deep networks, One-Point-Contraction refers to a transformation of the internal feature space to ensure "deep forgetting" of targeted samples. For a model decomposed as $f = g \circ \phi$, with $\phi$ the encoder and $g$ the prediction head, OPC enforces that for all $x$ in the forget set $D_f$, the feature representations $\phi(x)$ are contracted to a ball of radius $\epsilon$ about the origin:

$$\|\phi(x)\| \le \epsilon \quad \text{for all } x \in D_f.$$

This restriction, when enforced at the logit level with $g$ Lipschitz-continuous, yields high-entropy softmax outputs, implying genuine erasure of discriminative information. The Deep Forgetting criterion is formalized by showing that if $\|\phi(x)\| \le \epsilon$, then the entropy $H(\mathrm{softmax}(f(x)))$ is bounded below by a function tending to $\log C$ as $\epsilon \to 0$, where $C$ is the class count.
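A standard softmax estimate makes the entropy bound concrete; the following is an illustrative derivation consistent with the criterion above, not necessarily the paper's exact argument:

```latex
\text{If the logits } z = f(x) \text{ satisfy } |z_i| \le \epsilon \text{ for all } i,
\text{ each softmax probability is pinched around the uniform value } 1/C:
\qquad
p_i = \frac{e^{z_i}}{\sum_{j=1}^{C} e^{z_j}}
    \in \left[\frac{e^{-2\epsilon}}{C},\; \frac{e^{2\epsilon}}{C}\right],
\qquad\text{hence}\qquad
H(p) = -\sum_{i=1}^{C} p_i \log p_i
     \;\ge\; -\sum_{i=1}^{C} p_i \log\frac{e^{2\epsilon}}{C}
     \;=\; \log C - 2\epsilon
     \;\xrightarrow{\;\epsilon \to 0\;}\; \log C .
```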
OPC is practically approximated by optimizing a joint loss:

$$\mathcal{L}(\theta) = \mathbb{E}_{(x,y)\sim D_r}\big[\mathrm{CE}(f_\theta(x), y)\big] + \lambda\, \mathbb{E}_{x\sim D_f}\big[\|f_\theta(x)\|_2\big],$$

where $D_r$ is the retain set, CE denotes cross-entropy, and $\lambda$ controls the retention–forgetting tradeoff. Training (typically via SGD) proceeds from the pretrained parameters $\theta_0$ to the unlearned parameters $\theta^{*}$, with all layers updated.
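A minimal PyTorch sketch of one pass over this joint objective follows, assuming an off-the-shelf classifier `model`, dataloaders `retain_loader` and `forget_loader`, and an optimizer `opt` (all hypothetical names; the choice of norm and batching follows the description above, not the paper's released code):

```python
import torch.nn.functional as F

def opc_unlearn_epoch(model, retain_loader, forget_loader, opt, lam=1.0):
    """One epoch of OPC-style unlearning: cross-entropy on the retain set
    plus an l2-norm penalty contracting forget-set logits toward the origin."""
    model.train()
    for (xr, yr), (xf, _) in zip(retain_loader, forget_loader):
        loss_retain = F.cross_entropy(model(xr), yr)  # keep retain accuracy
        loss_forget = model(xf).norm(dim=1).mean()    # contract forget logits
        loss = loss_retain + lam * loss_forget
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Raising `lam` trades retained accuracy against the tightness of the contraction: in the large-penalty limit the forget-set logits approach the origin, approximating the $\epsilon$-ball constraint.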
Empirical benchmarks confirm that OPC achieves superior robustness to recovery and inversion attacks (e.g., feature-map alignment, gradient-inversion), resisting reconstruction beyond chance levels for class-unlearning in CIFAR-10 and SVHN. Feature similarity (CKA) between pre- and unlearned models on the forget set collapses nearly to zero only under OPC, supporting the claim of "deep feature forgetting" (Jung et al., 10 Jul 2025).
4. Algorithmic Implementations and Pseudocode
4.1 Tensor Integrals
OLEC implements OPC for tensor integrals with the following generic steps:
- Precompute all necessary signed minors up to the order required by the tensor rank.
- Store minors in a hash-table keyed by sorted row/column indices.
- For each contraction degree and each multi-index, sum over all relevant signed minors to compute the corresponding coefficient.
- Multiply by the corresponding scalar integral fetched from a library (e.g., LoopTools/OneLoop).
- Accumulate terms to assemble the contracted tensor.
A representative algorithm in C++-style pseudocode is provided in (Fleischer et al., 2012).
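The hash-table step in the list above can be sketched as a memoized minor lookup (Python for illustration; OLEC itself is a C++ package, and these names are hypothetical):

```python
from functools import lru_cache
import numpy as np

# Example chord Gram matrix; in OLEC this comes from the kinematics.
G = np.array([[2.0, 0.4, 0.0],
              [0.4, 4.5, 1.0],
              [0.0, 1.0, 8.0]])

@lru_cache(maxsize=None)
def cached_minor(rows: tuple, cols: tuple) -> float:
    """Signed minor keyed by sorted row/column index tuples, so each value
    is computed once and reused across contraction degrees and multi-indices."""
    sub = np.delete(np.delete(G, rows, axis=0), cols, axis=1)
    return (-1) ** (sum(rows) + sum(cols)) * float(np.linalg.det(sub))

# Lookup with sorted index keys, as in the hash-table step above:
m = cached_minor((0,), (1,))
```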
4.2 Convex Optimization
Algorithm OPC for order-$p$ minimization iteratively selects a contraction parameter $\gamma_k$, contracts the domain, forms the $p$-th-order Taylor model, and solves the auxiliary subproblem. Special cases recover classical methods: $p=1$ coincides with Frank–Wolfe, while $p=2$ implements a contracting Newton step. Pseudocode is provided in (Doikov et al., 2020).
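A sketch of the $p=2$ (contracting Newton) step on the simplex, under the same illustrative assumptions as the earlier sketch: the quadratic Taylor model is minimized inexactly over the contracted simplex by a few inner Frank–Wolfe iterations. The schedule $\gamma_k = 3/(k+3)$ and the inner iteration count are arbitrary choices for illustration; `f_grad` could be the `softmax_grad` from the earlier sketch, with `f_hess` its Hessian:

```python
import numpy as np

def contracting_newton_step(g, H, x, gamma, inner=20):
    """Inexactly minimize the model <g, y-x> + 0.5 (y-x)^T H (y-x) over the
    contracted simplex S = (1-gamma)*x + gamma*simplex via inner Frank-Wolfe."""
    y = x.copy()
    for t in range(inner):
        mg = g + H @ (y - x)               # gradient of the quadratic model at y
        v = (1 - gamma) * x                # best vertex of the contracted simplex:
        v[np.argmin(mg)] += gamma          # gamma of the mass on one coordinate
        y = y + (2.0 / (t + 2)) * (v - y)  # inner FW step (stays inside S)
    return y

def opc_order2(f_grad, f_hess, x0, iters=100):
    """Outer contracting-point loop (p = 2): contract, model, inexact solve."""
    x = x0.copy()
    for k in range(iters):
        gamma = 3.0 / (k + 3)              # illustrative contraction schedule
        x = contracting_newton_step(f_grad(x), f_hess(x), x, gamma)
    return x
```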
4.3 Machine Unlearning
The OPC unlearning routine initializes from the pretrained parameters $\theta_0$ and uses minibatch SGD to minimize the sum of cross-entropy on the retain set and $\ell_2$-norm contraction of logits on the forget set, requiring no architectural modification. This algorithm effectively ensures that no linear or nonlinear structure remains for forgotten data in the feature manifold (Jung et al., 10 Jul 2025).
5. Quantitative and Qualitative Evaluation
The respective OPC frameworks are evaluated as follows:
- Tensor integrals: Compactness and efficiency of contraction formulas, numerical stability, and cross-section calculations demonstrate the practical utility, with no explicit inverse Gram determinants required (Fleischer et al., 2012).
- Optimization: Experiments minimizing smoothed softmax over the simplex show that OPC-Newton ($p=2$) dramatically outperforms first-order Frank–Wolfe ($p=1$) in both iteration count and computation time, confirming the theoretical $O(1/k^p)$ rates (Doikov et al., 2020).
- Unlearning: On standard benchmarks (CIFAR-10, SVHN), OPC uniquely achieves high unlearning accuracy (UA ≈ 100%), minimal attack recovery (accuracy restored by strong recovery attacks remains near chance), and feature similarity near zero (CKA), outperforming a suite of 12 baseline unlearning methods (Jung et al., 10 Jul 2025).
| Domain | Key Metric / Outcome | OPC Characteristic |
|---|---|---|
| Tensor Integrals | Gram determinant avoidance | Analytic delta-cancellation |
| Convex Optimization | Affine-invariant rate $O(1/k^p)$ | Shrinking contracted domains |
| Machine Unlearning | Deep feature contraction, robust UA | Feature erasure, attack resistance |
6. Theoretical Significance and Limitations
OPC frameworks in all domains share the property of enforcing structural collapse or contraction—whether of tensor indices, feasible sets, or representation manifolds—enabling analytical tractability, geometric invariance, or information erasure. In optimization, OPC achieves rates matching the best possible for tensor methods, with total complexity described in precise oracle counts. In machine unlearning, OPC achieves deep forgetting as characterized by entropy lower bounds and operator norm arguments.
Limitations noted in the unlearning literature include the computational cost of retraining all model parameters for large architectures, difficulty of extending the method to non-classification or generative settings, and the potentially undesirable complete collapse of within-class variance in some applications. Extensions to attention-based heads, Fisher–information-based approximations, and partial unlearning remain active topics of research (Jung et al., 10 Jul 2025). In optimization, OPC’s trust-region design accommodates domain geometry but may introduce complexity in nonsmooth regimes (Doikov et al., 2020).
7. Connections and Perspectives
The unifying conceptual theme of One-Point-Contraction—enforcing collapse around a point to simplify structure, guarantee invariance, or erase information—has enabled advances in Feynman diagram techniques, optimization, and privacy-enforcing machine learning. Despite disparate mathematical settings, the motif of OPC as analytic contraction or domain reduction persists. In each field, ongoing research seeks to refine, generalize, and apply OPC-inspired methodologies, both theoretically and in applied algorithmics (Fleischer et al., 2012, Doikov et al., 2020, Jung et al., 10 Jul 2025).