
IEOT: Interaction-Enhanced Optimal Transport

Updated 11 August 2025
  • Interaction-enhanced Optimal Transport (IEOT) is a framework that integrates sample interactions into classical OT to respect semantic relationships and improve cluster alignment.
  • The method incorporates a semantic similarity regularization term and generates robust pseudo-labels and pseudo-centers for enhanced data clustering.
  • IEOT, when coupled with center-aware contrastive learning, achieves significant performance gains, with up to a 7.34% accuracy improvement in few-shot clustering tasks.

Interaction-enhanced Optimal Transport (IEOT) is a class of methods that extend classical optimal transport (OT) by explicitly incorporating interactions—such as semantic or structural relationships—between samples into the OT problem. Originally motivated by the challenge of aligning data structures where simple pairwise transport fails to reflect latent semantics or cluster coherence, IEOT enforces that the OT assignments respect sample-wise similarities, thereby improving both alignment quality and downstream clustering or categorization performance. In the context of few-shot short text clustering, IEOT serves as a foundational component in frameworks such as IOCC (Yin et al., 8 Aug 2025), enabling the generation of robust pseudo-labels that facilitate effective contrastive learning and cluster refinement.

1. Mathematical Structure of Interaction-Enhanced OT

The IEOT module augments the traditional OT formulation by introducing a semantic similarity regularization term directly into the transport objective. Given a batch of unlabeled data with classifier-derived assignment probabilities collected in a matrix $P^{(u^{(0)})} \in \mathbb{R}^{\mu B \times K}$ (one row per unlabeled sample in the batch of size $\mu B$, one column per each of the $K$ clusters), the IEOT objective is formulated as:

$$\min_{Q,\, b} \ \langle Q, M \rangle - \varepsilon_1 H(Q) + \varepsilon_2 \Theta(b) - \varepsilon_3 \langle S, Q Q^\top \rangle$$

subject to

  • $Q \cdot 1_K = a$
  • $Q^\top \cdot 1_{\mu B} = b$
  • $Q \geq 0$
  • $b^\top \cdot 1_K = 1$

where:

  • $M = -\log(P^{(u^{(0)})})$ is the cost matrix of negative log-probabilities,
  • $H(Q) = -\langle Q, \log Q - 1 \rangle$ is the entropy regularization on $Q$,
  • $\Theta(b) = -\sum_{j=1}^K b_j \log b_j$ is the entropy over cluster weights,
  • $S$ is a cosine similarity matrix among sample assignment vectors,
  • $\langle S, Q Q^\top \rangle$ is the semantic regularization term,
  • $\varepsilon_1, \varepsilon_2, \varepsilon_3$ are regularization hyperparameters.

This semantic regularization ensures that assignments for semantically similar samples remain close, effectively propagating similarity information into the transport plan.
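
For concreteness, the following minimal NumPy sketch evaluates this objective for a candidate plan $Q$; the function name, interface, and small stabilizing constants are illustrative assumptions rather than code from the IOCC paper.

```python
import numpy as np

def ieot_objective(Q, P, S, b, eps1, eps2, eps3):
    """Evaluate the interaction-enhanced OT objective for a candidate plan Q.

    Q : (muB, K) transport plan (rows sum to a, columns sum to b)
    P : (muB, K) classifier assignment probabilities P^{(u^{(0)})}
    S : (muB, muB) cosine-similarity matrix among assignment vectors
    b : (K,) cluster-weight vector (sums to 1)
    """
    M = -np.log(P + 1e-12)                                # cost matrix of negative log-probabilities
    transport_cost = np.sum(Q * M)                        # <Q, M>
    entropy_Q = -np.sum(Q * (np.log(Q + 1e-12) - 1.0))    # H(Q) = -<Q, log Q - 1>
    entropy_b = -np.sum(b * np.log(b + 1e-12))            # Theta(b)
    semantic = np.sum(S * (Q @ Q.T))                      # <S, Q Q^T>
    return transport_cost - eps1 * entropy_Q + eps2 * entropy_b - eps3 * semantic
```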

2. Semantic Interaction Regularization

The semantic matrix $S$ encodes pairwise sample similarities:

$$S_{ij} = \frac{\langle P^{(u^{(0)})}_{i:},\, P^{(u^{(0)})}_{j:} \rangle}{\| P^{(u^{(0)})}_{i:} \|_2 \, \| P^{(u^{(0)})}_{j:} \|_2}$$

This encourages $Q$ to produce similar assignment distributions for samples with high cosine similarity, instantiating the "interaction enhancement" that is the hallmark of IEOT. The penalty $-\varepsilon_3 \langle S, Q Q^\top \rangle$ increases when semantically similar samples are transported to different clusters, and is minimized when assignments for similar pairs are concordant.
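
Since $S$ is just the row-wise cosine similarity of the probability matrix, it can be formed in a few vectorized lines; the sketch below (NumPy, hypothetical function name) shows one straightforward way to do so.

```python
import numpy as np

def semantic_similarity(P, eps=1e-12):
    """Cosine similarity between the assignment vectors of every pair of samples.

    P : (muB, K) classifier assignment probabilities P^{(u^{(0)})}
    Returns S : (muB, muB) with S[i, j] = cos(P[i, :], P[j, :]).
    """
    norms = np.linalg.norm(P, axis=1, keepdims=True)   # row-wise L2 norms
    P_unit = P / np.maximum(norms, eps)                # unit-length assignment vectors
    return P_unit @ P_unit.T                           # pairwise cosine similarities
```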

3. Pseudo-Label Construction and Pseudo-Center Formation

Following IEOT optimization (typically via an iterative Majorization–Minimization and Lagrange multiplier method), each unlabeled sample $i$ is assigned a pseudo-label:

$$\hat{y}_u^{(i)} = \operatorname*{arg\,max}_j Q_{ij}$$

High-confidence pseudo-labeled samples are aggregated to define category-wise "pseudo-centers" that act as proxies for the latent semantic centers in the embedding space. The pseudo-center for cluster $k$ is:

$$\bar{c}_k = \frac{1}{|I_k|}\sum_{i \in I_k} z_i, \qquad c_k = \frac{\bar{c}_k}{\|\bar{c}_k\|_2}$$

where $I_k$ indexes high-confidence members of cluster $k$ and $z_i$ are embedding vectors. The aggregation ensures that, due to improved pseudo-label quality, pseudo-centers concentrate near true semantic centers, even when initial features are noisy.
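
A compact NumPy sketch of both steps follows; the row-normalized confidence score and the 0.9 threshold used to pick the high-confidence set $I_k$ are illustrative assumptions, not the paper's exact selection rule.

```python
import numpy as np

def pseudo_labels_and_centers(Q, Z, conf_threshold=0.9):
    """Derive pseudo-labels from the transport plan and unit-norm pseudo-centers.

    Q : (muB, K) optimized transport plan
    Z : (muB, d) sample embeddings z_i
    conf_threshold : confidence cut-off for membership in I_k (illustrative)
    """
    labels = Q.argmax(axis=1)                        # hat{y}_u^{(i)} = argmax_j Q_ij
    # Row-normalize Q so its maxima can be read as assignment confidences.
    conf = Q.max(axis=1) / (Q.sum(axis=1) + 1e-12)
    K = Q.shape[1]
    centers = np.zeros((K, Z.shape[1]))
    for k in range(K):
        members = (labels == k) & (conf >= conf_threshold)   # high-confidence index set I_k
        if members.any():
            c_bar = Z[members].mean(axis=0)                       # unnormalized pseudo-center
            centers[k] = c_bar / (np.linalg.norm(c_bar) + 1e-12)  # unit-norm center c_k
    return labels, centers
```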

4. Interaction with Center-aware Contrastive Learning (CACL)

IEOT and CACL operate in synergistic cycles:

  • IEOT yields accurate, semantically informed pseudo-labels by merging global OT with local similarity constraints.
  • CACL uses these pseudo-labels and pseudo-centers to apply contrastive losses, drawing embeddings close to their respective centers and pushing them away from others:

$$\mathcal{L}_P = -\frac{1}{\mu B}\sum_{i=1}^{\mu B} \log \frac{\exp\!\big(\cos(z_{u,i}^{(1)}, c_{\hat{y}_u^{(i)}})/T_P\big)}{\sum_{k=1}^K \exp\!\big(\cos(z_{u,i}^{(1)}, c_k)/T_P\big)} + \text{(augmentation term)}$$

With each iteration, improved centers yield better representation alignment, resulting in superior cluster-to-semantic center correspondence.
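
A sketch of the center-aware term of $\mathcal{L}_P$ is given below (the augmentation term is omitted); it assumes embeddings and pseudo-centers are already unit-normalized so that dot products equal cosine similarities, and the function name is illustrative.

```python
import numpy as np

def center_contrastive_loss(Z, centers, labels, temperature=0.5):
    """Center-aware contrastive term of L_P (augmentation term omitted).

    Z : (muB, d) unit-normalized embeddings z_u^{(1)}
    centers : (K, d) unit-normalized pseudo-centers c_k
    labels : (muB,) pseudo-labels hat{y}_u
    temperature : T_P
    """
    logits = (Z @ centers.T) / temperature           # cos(z, c_k) / T_P for every cluster k
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability for the softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```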

5. Clustering Performance and Experimental Outcomes

IEOT–CACL synergy in IOCC demonstrates pronounced improvements in real-world clustering tasks, particularly for short-text data with weak initial representations. On eight benchmark datasets, IOCC achieves up to 7.34% accuracy improvement on Biomedical (an imbalanced, semantically nuanced dataset) and maintains high normalized mutual information and stability across varied data distributions. The IEOT module is observed to:

  • Accelerate convergence by regularizing representations early in the process.
  • Enhance cluster stability, mitigating degeneracy and sensitivity to imbalance.
  • Improve robustness and efficiency, enabling effective clustering with only few-shot annotated samples and pseudo-labels.

6. Significance and Applicability

The IEOT framework as instantiated in IOCC addresses a key limitation of traditional OT: the neglect of inter-sample semantic relationships when forming assignments and cluster centers. By integrating sample-to-sample semantic affinities into the transport plan, IEOT enables the propagation of fine-grained structure throughout the clustering pipeline. This results in feature distributions that are both compact within clusters and well separated across clusters, with empirical evidence supporting effectiveness across both balanced and imbalanced, synthetic and real-world datasets.

The evidence, including improved accuracy on challenging text clustering benchmarks and high clustering stability, supports the conclusion that interaction-enhanced optimal transport is a key methodological advance for aligning cluster and semantic centers in few-shot, weak-supervision settings.
