
A New Benchmark and Approach for Fine-grained Cross-media Retrieval (1907.04476v2)

Published 10 Jul 2019 in cs.IR, cs.CV, cs.LG, and cs.MM

Abstract: Cross-media retrieval returns results of various media types corresponding to a query of any media type. Existing research generally focuses on coarse-grained cross-media retrieval. When users submit an image of "Slaty-backed Gull" as a query, coarse-grained cross-media retrieval treats it as "Bird", so users only get results for "Bird", which may include other bird species with similar appearance (image and video), descriptions (text) or sounds (audio), such as "Herring Gull". Such coarse-grained retrieval does not match real user needs, which are generally fine-grained: returning exactly the relevant results for "Slaty-backed Gull" rather than "Herring Gull". However, little research addresses fine-grained cross-media retrieval, which is a highly challenging and practical task. Therefore, this paper first constructs a new benchmark for fine-grained cross-media retrieval, consisting of 200 fine-grained subcategories of "Bird" and covering 4 media types: image, text, video and audio. To the best of our knowledge, it is the first benchmark with 4 media types for fine-grained cross-media retrieval. The paper then proposes a uniform deep model, FGCrossNet, which simultaneously learns all 4 media types without treating them differently. Three constraints are jointly considered for better common representation learning: a classification constraint ensures the learning of discriminative features, a center constraint ensures compactness among features of the same subcategory, and a ranking constraint ensures sparsity among features of different subcategories. Extensive experiments verify the usefulness of the new benchmark and the effectiveness of FGCrossNet. Both will be made available at https://github.com/PKU-ICST-MIPL/FGCrossNet_ACMMM2019.

Authors (3)
  1. Xiangteng He (16 papers)
  2. Yuxin Peng (65 papers)
  3. Liu Xie (2 papers)
Citations (61)

Summary

Fine-Grained Cross-Media Retrieval: Enhancements and Benchmarks

The paper "A New Benchmark and Approach for Fine-grained Cross-media Retrieval" addresses the significant challenge in multimedia retrieval systems where the current paradigms predominantly focus on coarse-grained retrieval. The authors articulate the limitations of existing systems that return generalized results, such as retrieving various types of birds instead of a specific species like the "Slaty-backed Gull." This paper introduces an innovative benchmark and model, FGCrossNet, designed for fine-grained cross-media retrieval across four media types: image, text, video, and audio, marking a substantial enhancement over previous datasets and models.

Contributions to the Field

  1. Benchmark Construction: The paper highlights the deficiencies in current datasets, which are mainly oriented towards coarse-grained categorization with a focus on basic-level categories. To bridge this gap, the authors present a novel benchmark consisting of 200 fine-grained subcategories of birds. It involves diverse media formats sourced from multiple domains, enhancing the robustness of data representation and retrieval tasks. The benchmark is significant as it not only increases the complexity of retrieval through fine granularity but also includes a larger variety of media types than past datasets.
  2. FGCrossNet Model: FGCrossNet is introduced as a uniform deep model that processes all four heterogeneous media types through a single network, without routing them into separate media-specific sub-networks. The model adapts a ResNet50 backbone, fine-tuned to handle the input variations inherently present across the different media types.
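The uniform-model idea can be illustrated minimally: once each media type is preprocessed into a common input format, one shared set of weights extracts features for all of them. The sketch below is illustrative only; the 64-dimensional inputs and the single linear-plus-ReLU "backbone" are stand-ins for the paper's media preprocessing and modified ResNet50.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))  # one shared projection; stand-in for the shared backbone

def extract(x):
    # Shared-backbone feature extraction: the SAME weights W are applied to
    # every media type once it is preprocessed into a common 64-dim input.
    return np.maximum(0.0, x @ W)  # ReLU feature map

# Hypothetical preprocessed inputs: image pixels, sampled video frames,
# an audio spectrogram, and a text encoding -- all mapped to 64 dims upstream.
image, video, audio, text = (rng.standard_normal(64) for _ in range(4))
features = [extract(m) for m in (image, video, audio, text)]
```

Because every modality passes through the same weights, the resulting features live in one common space, which is what makes direct cross-media comparison possible.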
  3. Multi-Constraint Approach: Within FGCrossNet, the paper proposes a multi-constraint objective that enhances common representation learning. The three constraints are:
     - Classification Constraint: ensures discriminative feature learning within the fine-grained subcategories.
     - Center Constraint: enforces compactness among features of the same subcategory.
     - Ranking Constraint: enforces sparsity between the features of different subcategories, improving the ability to distinguish closely related classes.
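The three constraints can be sketched as a weighted sum of familiar loss terms. This is an illustrative NumPy sketch, not the paper's exact formulation: the loss weights `lam`, the triplet construction for the ranking term, and the function names are assumptions.

```python
import numpy as np

def classification_loss(logits, labels):
    # Classification constraint: softmax cross-entropy over subcategory logits.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def center_loss(features, labels, centers):
    # Center constraint: pull each feature toward its subcategory's center,
    # encouraging compactness within a subcategory.
    return 0.5 * np.mean(np.sum((features - centers[labels]) ** 2, axis=1))

def ranking_loss(anchor, positive, negative, margin=1.0):
    # Ranking constraint (triplet form): same-subcategory pairs must be closer
    # than different-subcategory pairs by at least `margin`.
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(0.0, d_pos - d_neg + margin))

def total_loss(logits, labels, features, centers, triplets, lam=(1.0, 0.1, 0.5)):
    # Joint objective: weighted sum of the three constraints
    # (the weights lam are illustrative, not the paper's values).
    a, p, n = triplets
    return (lam[0] * classification_loss(logits, labels)
            + lam[1] * center_loss(features, labels, centers)
            + lam[2] * ranking_loss(a, p, n))
```

The intuition is complementary: the classification term makes features predictive of the subcategory, the center term shrinks each subcategory's cluster, and the ranking term pushes different subcategories' clusters apart.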

Experimental Evaluation

The authors provide a detailed experimental evaluation demonstrating the effectiveness of FGCrossNet over several state-of-the-art models, such as MHTN and ACMR. Across 12 bi-modality retrieval tasks and the multi-modality task, FGCrossNet achieves higher MAP scores, evidencing its ability to retrieve fine-grained content effectively.
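MAP (mean average precision) summarizes ranking quality: for each query, precision is averaged at every rank where a relevant item appears, and these per-query scores are then averaged. A minimal reference implementation of the standard definition (not the paper's evaluation code) is:

```python
import numpy as np

def average_precision(ranked_relevance):
    # AP for one query: mean of precision@k taken at each rank k
    # where the retrieved item is relevant (1) rather than irrelevant (0).
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    cum_hits = np.cumsum(rel)
    precision_at_k = cum_hits / (np.arange(len(rel)) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

def mean_average_precision(relevance_lists):
    # MAP: mean of per-query AP over all queries.
    return float(np.mean([average_precision(r) for r in relevance_lists]))
```

For example, a ranking whose relevant items sit at positions 1 and 3 has AP = (1 + 2/3) / 2 = 5/6, so a higher MAP directly reflects relevant items appearing earlier in the returned list.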

Implications and Future Directions

The practical implications of this research are profound for systems that require precise identification and classification across media, especially in domains like biodiversity conservation, where distinguishing between species is not just academic but necessary for ecological management. On a theoretical plane, this work sets a precedent for further research into integrating heterogeneous media types into cohesive retrieval systems, laying groundwork to refine AI models for more nuanced tasks.

Looking ahead, the paper suggests avenues for future work, such as extending the task beyond retrieval to categorization and reasoning challenges, and deepening knowledge transfer between modalities to improve retrieval accuracy for under-represented media types like text and audio. Such enhancements could considerably expand the applicability and efficiency of cross-media retrieval systems in dynamic data environments.

This paper not only provides a pivotal dataset but also innovates in building a model that significantly advances the capabilities of fine-grained cross-media retrieval, offering valuable insights and tools for researchers and practitioners in the field.