Contrastive Learning for Multi-Object Tracking with Transformers (2311.08043v1)

Published 14 Nov 2023 in cs.CV

Abstract: The DEtection TRansformer (DETR) opened new possibilities for object detection by modeling it as a translation task: converting image features into object-level representations. Previous works typically add expensive modules to DETR to perform Multi-Object Tracking (MOT), resulting in more complicated architectures. We instead show how DETR can be turned into a MOT model by employing an instance-level contrastive loss, a revised sampling strategy and a lightweight assignment method. Our training scheme learns object appearances while preserving detection capabilities and with little overhead. Its performance surpasses the previous state-of-the-art by +2.6 mMOTA on the challenging BDD100K dataset and is comparable to existing transformer-based methods on the MOT17 dataset.
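The instance-level contrastive loss described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a standard InfoNCE-style formulation in which object embeddings for the same instance in two frames form positive pairs and all other cross-frame pairs serve as negatives (the function name, temperature value, and symmetric form are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(emb_a: torch.Tensor,
                              emb_b: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss over object embeddings.

    emb_a, emb_b: (N, D) tensors of per-object embeddings from two
    frames, where row i in both tensors corresponds to the same
    object instance (positive pair); every other cross-frame pair
    is treated as a negative.
    """
    # Cosine similarity via L2-normalized embeddings
    emb_a = F.normalize(emb_a, dim=1)
    emb_b = F.normalize(emb_b, dim=1)
    logits = emb_a @ emb_b.t() / temperature  # (N, N) similarity matrix
    targets = torch.arange(emb_a.size(0), device=emb_a.device)
    # Symmetric cross-entropy: each instance must retrieve itself
    # across frames in both matching directions
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

Under this sketch, the detector's object-level outputs double as appearance embeddings, so tracking reduces to a lightweight cross-frame assignment over the learned similarities rather than a dedicated tracking module.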

Authors (5)
  1. Pierre-François De Plaen (3 papers)
  2. Nicola Marinello (3 papers)
  3. Marc Proesmans (14 papers)
  4. Tinne Tuytelaars (150 papers)
  5. Luc Van Gool (569 papers)
Citations (1)