
Deep Expansion IoU for Robust Multi-Object Tracking

Updated 7 February 2026
  • Deep-EIoU is a hybrid metric that combines expansion IoU with deep visual re-identification to improve object matching under geometric distortions.
  • It tackles limitations of traditional IoU by accommodating spatial misalignments and capturing appearance features for reliable multi-object tracking.
  • Empirical evaluations in GTATrack show significant gains in HOTA and reduced identity switches, underscoring its effectiveness in dynamic sports analytics.

Deep Expansion IoU (Deep-EIoU) is a hybrid online association metric designed for robust multi-object tracking (MOT) in settings where standard motion models and purely geometric metrics fail, notably sports analytics with strong geometric distortions and highly dynamic motion. Deep-EIoU was first introduced as the core local tracker in the GTATrack system, the winning solution to SoccerTrack 2025. It combines the geometric tolerance of Expansion IoU (EIoU) with the discriminative power of deep visual re-identification, yielding a largely motion-agnostic association cost for bounding-box matching in challenging video streams (Jian et al., 31 Jan 2026).

1. Motivation and Context

Conventional online trackers frequently use Intersection-over-Union (IoU) to estimate the geometric similarity between predicted and detected bounding boxes from frame to frame. In sports MOT scenarios characterized by static fisheye cameras, small and distant targets, and rapid, nonlinear motion, the strictness of standard IoU makes associations fragile: even minor bounding-box misalignments cause drastic IoU drops, undermining association reliability, especially for small targets, where a one-pixel shift significantly degrades geometry-based confidence.

Deep-EIoU is motivated by two observations:

1. Raw IoU is too brittle under geometric distortion and fast, erratic motion; targets are commonly missed or fragmented by conventional Kalman/IoU trackers.

2. Deep appearance embeddings can capture identity across moderate visual changes, but may fail when appearance cues are temporally inconsistent due to partial occlusion or rapid pose changes.

A combination of an "expansion-tolerant" spatial metric and a deep feature comparator yields a robust, motion-agnostic matching process suited to the unique pathologies of broadcast sports videos (Jian et al., 31 Jan 2026).

2. Mathematical Formulation of Deep-EIoU

2.1. Expansion IoU (EIoU) Definition

Given two bounding boxes $b$ and $b'$, Expansion IoU (EIoU) considers several expanded versions of $b$:

  • $\{b^{(k)} = \text{Expand}(b, s_k) \mid s_k \in S\}$, where $S$ is a discrete set of scale factors, e.g. $\{1.00, 1.05, 1.10\}$.
  • Compute $\operatorname{IoU}^{(k)} = \operatorname{IoU}(b^{(k)}, b')$ for each expansion.
  • Define $\operatorname{EIoU}(b, b') = \max_k \operatorname{IoU}^{(k)}$.

This operation allows spatial tolerance to small geometric shifts between bounding boxes.
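The expand-and-maximize step above can be sketched in a few lines. This is a minimal illustration, assuming axis-aligned `(x1, y1, x2, y2)` boxes expanded about their centers; the function names are illustrative, not taken from the paper's codebase:

```python
def expand(box, s):
    """Scale a box (x1, y1, x2, y2) about its center by factor s."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * s, (y2 - y1) * s
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def iou(a, b):
    """Standard intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def eiou(b, b_prime, scales=(1.00, 1.05, 1.10)):
    """Expansion IoU: max IoU over expanded versions of b."""
    return max(iou(expand(b, s), b_prime) for s in scales)
```

Because the scale set includes 1.00, EIoU is never smaller than plain IoU; for nearby but non-overlapping boxes (a common result of fast motion on small targets), the expanded boxes can recover a nonzero overlap where raw IoU returns exactly zero.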

2.2. Complete Deep-EIoU Cost

The final association cost $C_{ij}$ between detection $i$ and candidate tracklet $j$ is defined as:

$$C_{\mathrm{spatial}}(i, j) = 1 - \operatorname{EIoU}(b_i, b_{\tau_j})$$

$$d_{\mathrm{app}}(i, j) = 1 - f_i \cdot f_{\tau_j}$$

$$C_{ij} = \lambda\,C_{\mathrm{spatial}}(i,j) + (1-\lambda)\,d_{\mathrm{app}}(i,j)$$

where $f_i$ and $f_{\tau_j}$ are $L^2$-normalized deep visual embeddings (typically from OSNet, dimension 512), and $\lambda$ is a weighting scalar, defaulting to 0.5.

For assignment, the cost matrix $C$ feeds the Hungarian (Kuhn-Munkres) algorithm to produce frame-wise optimal associations. Candidate pairs with $C_{ij}$ above a proximity threshold $\theta_{\text{thresh}}$ (typically $\theta \in [0.8, 0.9]$) are rejected (Jian et al., 31 Jan 2026).
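The cost blend and assignment step can be sketched as follows. This is a minimal sketch, assuming a precomputed EIoU matrix and $L^2$-normalized embeddings (so the dot product is cosine similarity), and using SciPy's `linear_sum_assignment` for the Hungarian step; the function names are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def deep_eiou_cost(eiou_mat, det_feats, trk_feats, lam=0.5):
    """Blend spatial (1 - EIoU) and appearance (1 - cosine) costs.

    eiou_mat:  (N_det, N_trk) precomputed EIoU values
    det_feats: (N_det, D) L2-normalized detection embeddings
    trk_feats: (N_trk, D) L2-normalized tracklet embeddings
    """
    c_spatial = 1.0 - eiou_mat
    d_app = 1.0 - det_feats @ trk_feats.T  # cosine distance for unit vectors
    return lam * c_spatial + (1.0 - lam) * d_app

def associate(cost, theta=0.85):
    """Hungarian assignment; reject pairs whose cost exceeds theta."""
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= theta]
```

The thresholded rejection after the optimal assignment is what leaves unmatched detections free to spawn new tracklets rather than forcing every detection onto some existing track.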

3. Integration and Workflow within MOT Systems

Deep-EIoU operates as the local association module in multi-stage MOT pipelines. In the GTATrack system, the complete workflow is:

  1. Detection/Feature Extraction: YOLOv11x (pseudo-label fine-tuned) detects objects, OSNet extracts appearance features per box.
  2. Framewise Matching: For each new frame, detections are matched to existing tracklets using Deep-EIoU costs.
  3. Tracklet Update/Spawn: Matched pairs update tracklets; unmatched detections start new ones; lost tracklets are discarded after $N$ frames without a successful match.
  4. Global Refinement: An offline Global Tracklet Association (GTA-Link) step refines initial trajectories by merging and splitting tracklets based on appearance/temporal clustering.

The two-stage architecture decouples local robustness (Deep-EIoU) from global identity enforcement (GTA-Link), improving both association granularity and long-term consistency, especially under severe distortion or occlusion (Jian et al., 31 Jan 2026, Sun et al., 2024).
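The framewise match/update/spawn/prune bookkeeping of steps 2–3 can be sketched as below. This is a minimal sketch under stated assumptions: the `Tracklet` class, `step` function, and `match_fn` callback are hypothetical names, and detection, feature extraction, and cost computation are assumed to happen elsewhere:

```python
class Tracklet:
    """Hypothetical minimal tracklet record: id, last box/feature, miss count."""
    def __init__(self, tid, box, feat):
        self.tid, self.box, self.feat = tid, box, feat
        self.misses = 0

def step(tracklets, detections, match_fn, next_id, max_misses=30):
    """One frame of bookkeeping: match, update, spawn, and prune tracklets.

    detections: list of (box, feature) pairs for the current frame
    match_fn:   returns a list of (det_idx, trk_idx) associations
    """
    matches = match_fn(detections, tracklets)
    matched_d = {d for d, _ in matches}
    matched_t = {t for _, t in matches}
    for d, t in matches:                          # update matched tracklets
        box, feat = detections[d]
        tracklets[t].box, tracklets[t].feat = box, feat
        tracklets[t].misses = 0
    for t, trk in enumerate(tracklets):           # age unmatched tracklets
        if t not in matched_t:
            trk.misses += 1
    for d, (box, feat) in enumerate(detections):  # spawn new tracklets
        if d not in matched_d:
            tracklets.append(Tracklet(next_id, box, feat))
            next_id += 1
    # prune tracklets lost for more than max_misses frames
    tracklets[:] = [t for t in tracklets if t.misses <= max_misses]
    return next_id
```

In GTATrack, the `match_fn` role is played by the Deep-EIoU cost plus Hungarian assignment, and the resulting local tracklets are then handed to the offline GTA-Link refinement stage.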

4. Hyperparameters and Practical Considerations

Key hyperparameters for Deep-EIoU in practice include:

  • Expansion Scales ($s_k$): Typical values are $\{1.00, 1.05, 1.10\}$, admitting spatial jitter or mild distortion.
  • Appearance Weight ($\lambda$): Balances spatial and feature-based association; $\lambda = 0.5$ yields the best overall tracking accuracy as measured by HOTA.
  • Proximity Threshold ($\theta_{\text{thresh}}$): The optimal range is $0.8 \leq \theta \leq 0.9$, maximizing correct matches for fast-moving objects while filtering unrelated candidates.
  • ReID Extractor: OSNet, L2-normalized, embedding dimension 512.
  • Association Solver: Framewise Hungarian algorithm.

A sweep over proximity thresholds and expansion factors reveals trade-offs between recall on challenging targets and increased risk of false matches; the preferred configuration is empirically determined based on HOTA/IDSW analysis (Jian et al., 31 Jan 2026).
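Collected in one place, the defaults above might look like the following. The values are taken from the text, but the dictionary layout and key names are an assumption for illustration, not the authors' configuration format:

```python
# Illustrative Deep-EIoU defaults; key names are hypothetical.
DEEP_EIOU_CONFIG = {
    "expansion_scales": [1.00, 1.05, 1.10],  # s_k
    "lambda_appearance": 0.5,                # lambda: spatial/appearance blend
    "proximity_threshold": 0.85,             # theta: any value in [0.8, 0.9]
    "reid_embedding_dim": 512,               # OSNet, L2-normalized
}
```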

5. Comparative Performance and Ablation

In the SoccerTrack 2025 challenge, replacing ByteTrack’s traditional motion/geometric association with Deep-EIoU yielded a substantial HOTA gain (from 0.42 to 0.54) and decreased identity switches (from 630 to 325.5). Subsequent downstream refinement with GTA-Link and detector pseudo-labeling further increased HOTA to 0.60, with a competitive identity switch count and low false-positive rate.

Table: Key Results from GTATrack Leaderboard and Ablation (Jian et al., 31 Jan 2026)

| Configuration | HOTA ↑ | IDSW ↓ | FN ↓ | FP ↓ |
|---|---|---|---|---|
| ByteTrack (baseline) | 0.42 | 630 | 10115.0 | 961.0 |
| Deep-EIoU only | 0.54 | 325.5 | 5440.5 | 980.0 |
| Deep-EIoU + GTA-Link + pseudo-labels | 0.60 | 331.5 | 5454.5 | 982.0 |

These results demonstrate that Deep-EIoU is robust to small-target variations and global geometric distortion, outperforming conventional, motion-model-based association strategies in all studied sports MOT regimes.

6. Limitations and Opportunities

While Deep-EIoU greatly ameliorates strict geometric fragility and is resilient to erratic motion, it is limited by the quality of the underlying detection and the discriminability of visual embeddings. Extreme occlusions or visually ambiguous targets that mislead the feature extractor can still cause association errors. The method is also inherently local, requiring global post-processing (such as GTA-Link or other clustering-based refinement) to suppress long-range identity switches and tracklet fragmentation (Sun et al., 2024).

A plausible implication is that future directions may include end-to-end learned weighting of spatial vs. appearance metrics, dynamic expansion scheduling, and joint optimization within the global refinement stage. Integration with multi-camera or strong geometric priors may also further strengthen Deep-EIoU’s robustness.

7. Impact and Applications

Deep-EIoU has established itself as a state-of-the-art core in fisheye-based and small-target sports MOT, validated by its adoption as the principal tracker in GTATrack, the highest-performing system in SoccerTrack 2025. Its combination with pseudo-label-boosted detectors and global association modules addresses a wide spectrum of tracking challenges, from severe geometric distortion to frequent occlusions and scale variation. The open-source codebase and detailed ablations in (Jian et al., 31 Jan 2026) provide a practical reference for the integration and evaluation of Deep-EIoU in advanced MOT systems.
