Re-identification = Retrieval + Verification: Back to Essence and Forward with a New Metric (2011.11506v1)

Published 23 Nov 2020 in cs.CV

Abstract: Re-identification (re-ID) is currently investigated as a closed-world image retrieval task, and evaluated by retrieval based metrics. The algorithms return ranking lists to users, but cannot tell which images are the true target. In essence, current re-ID overemphasizes the importance of retrieval but underemphasizes that of verification, i.e., all returned images are considered as the target. On the other hand, re-ID should also include the scenario that the query identity does not appear in the gallery. To this end, we go back to the essence of re-ID, i.e., a combination of retrieval and verification in an open-set setting, and put forward a new metric, namely, Genuine Open-set re-ID Metric (GOM). GOM explicitly balances the effect of performing retrieval and verification into a single unified metric. It can also be decomposed into a family of sub-metrics, enabling a clear analysis of re-ID performance. We evaluate the effectiveness of GOM on the re-ID benchmarks, showing its ability to capture important aspects of re-ID performance that have not been taken into account by established metrics so far. Furthermore, we show GOM scores excellent in aligning with human visual evaluation of re-ID performance. Related codes are available at https://github.com/YuanXinCherry/Person-reID-Evaluation

Citations (8)

Summary

  • The paper presents a new unified metric (GOM) that integrates retrieval precision and verification performance for both closed-world and open-set re-identification scenarios.
  • It introduces sub-metrics like mean re-ID Precision (mReP) and mean False Rate (mFR) to offer nuanced analysis compared to traditional evaluation methods.
  • Evaluations on benchmarks such as Market-1501 demonstrate that GOM enhances algorithm selection for practical applications in surveillance and forensic analysis.

Comprehensive Evaluation of Re-Identification Algorithms Using Genuine Open-set Metrics

The paper "Re-identification = Retrieval + Verification: Back to Essence and Forward with a New Metric" presents a critical exploration of re-identification (re-ID) evaluations, introducing a novel metric aimed at addressing current limitations in performance assessment. The authors argue that existing re-ID methodologies primarily function under a closed-world image retrieval framework, thereby overestimating retrieval success while neglecting verification and open-set scenarios. They address these deficiencies by proposing the Genuine Open-set re-ID Metric (GOM), which integrates both retrieval and verification into a unified evaluation framework.

The Core Proposition of GOM

GOM represents a fundamental reconceptualization of re-ID evaluation, synthesizing retrieval and verification into a single cohesive metric. It is structured to assess both closed-world and open-set situations, recognizing scenarios where the query identity may not appear in the gallery. This broad capability is particularly valuable in practical applications, such as surveillance and forensic analysis, where gallery representation is not exhaustive.
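To make the open-set setting concrete, the following minimal Python sketch (using hypothetical integer identity labels rather than the benchmarks' actual protocol files) separates probes whose identity appears in the gallery from those whose identity does not:

```python
import numpy as np

# Hypothetical identity labels; real benchmarks ship their own query/gallery splits.
query_ids = np.array([1, 2, 3, 4, 5])
gallery_ids = np.array([1, 2, 3, 7, 8, 9])

# Closed-world evaluation assumes every query has a match in the gallery.
# Open-set evaluation also scores queries whose identity is absent from it.
in_gallery = np.isin(query_ids, gallery_ids)   # queries with a true target
out_of_gallery = ~in_gallery                   # queries with no true target

print("in-gallery queries:    ", query_ids[in_gallery])       # [1 2 3]
print("out-of-gallery queries:", query_ids[out_of_gallery])   # [4 5]
```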

Key Technical Innovations

  1. Genuine Open-set re-ID Metric (GOM): This metric provides a comprehensive evaluation by balancing the aspects of retrieval and verification. It is further decomposable into sub-metrics, namely the mean re-ID Precision curve (mReP) and the mean False Rate curve (mFR), which allow for nuanced performance analysis.
  2. Sub-metrics Formulation: GOM includes sub-metrics like Retrieval Precision (RP) and Verification Precision (VP), combined through a geometric mean to form the principal Re-ID Precision (ReP). This facilitates the evaluation of both retrieval accuracy and the verification task across various decision thresholds (see the sketch after this list).
  3. Open-set Consideration: GOM explicitly evaluates open-set re-ID scenarios by incorporating false rates (FR), offering insights into how algorithms manage situations where the probe's ground truth is absent from the gallery.
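To make the decomposition concrete, the sketch below combines a retrieval term and a verification term via a geometric mean at each decision threshold, averages the result over thresholds (mReP), and averages false accepts for out-of-gallery probes (mFR). The specific RP, VP, and FR formulas used here are illustrative placeholders, not the paper's exact definitions; the official implementation is in the linked repository.

```python
import numpy as np

def rep_at_threshold(sims, is_match, t):
    """ReP-style score at one decision threshold t.

    sims     : similarity of each gallery image to the probe (1-D array)
    is_match : boolean array, True where the gallery image shares the probe's ID
    The RP/VP definitions below are illustrative placeholders, not the paper's
    exact formulas.
    """
    accepted = sims >= t                    # gallery images the system "verifies"
    if accepted.sum() == 0:
        return 0.0
    # Retrieval term: fraction of accepted images that are true matches.
    rp = is_match[accepted].mean()
    # Verification term: fraction of true matches that get accepted.
    vp = is_match[accepted].sum() / max(is_match.sum(), 1)
    # ReP combines retrieval and verification via a geometric mean.
    return float(np.sqrt(rp * vp))

def mean_rep(sims, is_match, thresholds):
    """mReP: average ReP over a sweep of decision thresholds."""
    return float(np.mean([rep_at_threshold(sims, is_match, t) for t in thresholds]))

def false_rate(sims_out_of_gallery, t):
    """FR for an out-of-gallery probe: fraction of gallery images wrongly accepted."""
    return float(np.mean(sims_out_of_gallery >= t))

# Toy example: one in-gallery probe and one out-of-gallery probe.
sims = np.array([0.9, 0.8, 0.4, 0.3])
is_match = np.array([True, False, True, False])
thresholds = np.linspace(0.0, 1.0, 11)
print("mReP:", mean_rep(sims, is_match, thresholds))
print("mFR :", np.mean([false_rate(np.array([0.7, 0.2]), t) for t in thresholds]))
```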

Evaluation and Implications

The authors evaluate GOM on prominent re-ID benchmarks, covering both person datasets such as Market-1501 and DukeMTMC-reID and vehicle datasets. The results demonstrate that GOM effectively captures performance nuances unrecognized by existing metrics like Cumulative Matching Characteristics (CMC) or mean Average Precision (mAP). Both supervised and unsupervised models were assessed, and their performance was mapped through the new metric framework.

Practical and Theoretical Implications

On the practical front, GOM enhances the selection of suitable re-ID algorithms for various applications, especially where open-world assumptions are necessary. Theoretically, it paves the way for developing algorithms that better balance and optimize both retrieval and verification tasks.

Future Directions

The research encourages a paradigm shift in re-ID evaluations, suggesting further study of metrics for dynamic gallery systems or privacy-preserving settings. By transcending the limitations of traditional closed-world assumptions, this paper lays the groundwork for re-ID systems that are more adaptable and reflective of real-world conditions.

In closing, the introduction of GOM offers an enriched toolset for researchers and practitioners within the re-ID community, facilitating more accurate, balanced, and comprehensive assessments of algorithmic performance. This promises to propel advancements in intelligent surveillance systems, enhancing capabilities in various real-world implementations.
