
Exploiting Local Features from Deep Networks for Image Retrieval (1504.05133v2)

Published 20 Apr 2015 in cs.CV

Abstract: Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks.

Citations (408)

Summary

  • The paper demonstrates that intermediate CNN features significantly enhance instance-level image retrieval performance.
  • It uses VLAD encoding with OxfordNet and GoogLeNet, achieving superior results on Holidays, Oxford, and Paris datasets.
  • The study highlights how higher input resolutions improve local feature extraction, optimizing retrieval without retraining networks.

Exploiting Local Features from Deep Networks for Image Retrieval

The research paper titled "Exploiting Local Features from Deep Networks for Image Retrieval" proposes a novel approach to enhance instance-level image retrieval by leveraging the convolutional features extracted from various layers of deep convolutional neural networks (CNNs). The authors, Joe Yue-Hei Ng, Fan Yang, and Larry S. Davis, aim to address and refine the application of CNNs, traditionally successful in image classification, to the domain of image retrieval.

Key Insights and Methodology

Traditional image retrieval largely depends on manually crafted features such as SIFT descriptors, encoded using Bag-of-Words (BoW), Vector of Locally Aggregated Descriptors (VLAD), or Fisher Vectors. CNNs have recently demonstrated promising results on classification tasks, yet applying these models to image retrieval requires specific consideration.

The paper investigates the effectiveness of features drawn from varied depths of a CNN, challenging the prior assumption that the last or penultimate fully connected layers yield the best retrieval performance, as they do for classification. Instead, the paper finds that optimal performance for instance-level image retrieval is often achieved with features from intermediate layers. This holds especially for instance-specific retrieval tasks, where fine granularity and local patterns matter more than the broader semantic concepts captured at deeper network layers.

Two deep network architectures, OxfordNet and GoogLeNet, pre-trained on the ImageNet database, serve as the experimental foundation for extracting features at different layers and input-image scales. The features extracted from these layers are then encoded with VLAD, which aggregates the local convolutional responses into a single representative vector per image, enabling efficient retrieval.
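The aggregation step described above can be sketched in plain numpy. This is a minimal, illustrative VLAD encoder, not the authors' implementation: it assumes the local descriptors (e.g. convolutional responses from an intermediate layer, one per spatial location) and a k-means vocabulary have already been computed.

```python
import numpy as np

def vlad_encode(descriptors, centroids):
    """Encode a set of local descriptors into a single VLAD vector.

    descriptors: (N, D) array of local features, e.g. convolutional
                 responses from an intermediate CNN layer.
    centroids:   (K, D) visual vocabulary learned by k-means.
    Returns a (K * D,) L2-normalized VLAD descriptor.
    """
    K, D = centroids.shape
    # Assign each descriptor to its nearest centroid.
    dists = np.linalg.norm(
        descriptors[:, None, :] - centroids[None, :, :], axis=2
    )
    assignments = np.argmin(dists, axis=1)

    vlad = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assignments == k]
        if len(members):
            # Accumulate residuals between descriptors and their centroid.
            vlad[k] = (members - centroids[k]).sum(axis=0)

    vlad = vlad.ravel()
    # Global L2 normalization; the paper additionally compresses the
    # vector (e.g. to 128-D via PCA) for its final experiments.
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad
```

The residual accumulation is what distinguishes VLAD from a plain BoW histogram: each cluster contributes a D-dimensional direction, not just a count, so the final vector preserves first-order statistics of the local features.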

Findings

Extensive experimentation consistently indicates that features extracted from intermediate layers outperform those from the last layers. The numerical results show that this methodology delivers competitive retrieval performance relative to state-of-the-art approaches: using compressed 128-D VLAD descriptors, the method surpasses other VLAD- and CNN-based approaches on two of the three tested datasets (Holidays, Oxford, and Paris).

An additional intriguing finding concerns input image scale. When input images are resized to higher resolutions, filters even in the higher layers effectively capture local image characteristics, improving the features for retrieval. The analysis underscores the need to apply deep networks trained for classification flexibly to the demands of instance-level retrieval.
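The intuition behind the scale effect can be illustrated with a small arithmetic sketch: a unit's receptive field is fixed in pixels by the architecture, so enlarging the input shrinks the *fraction* of the image that unit sees, making its response behave like a more local feature. The receptive-field numbers below are illustrative only, not taken from the paper.

```python
def receptive_field_fraction(receptive_field_px, input_size_px):
    """Fraction of a (square) input image covered by one unit's
    receptive field. The receptive field size in pixels is fixed by
    the network architecture; only the input resolution changes."""
    return min(1.0, receptive_field_px / input_size_px)

# A hypothetical higher-layer unit with a 140-pixel receptive field:
# at larger input sizes it covers an ever smaller patch of the image.
for size in (224, 512, 1024):
    frac = receptive_field_fraction(140, size)
    print(f"input {size}px -> covers {frac:.1%} of the image")
```

At the native 224-pixel resolution the unit sees most of the image (near-global semantics); at 1024 pixels the same unit responds to a small local patch, which is closer to what instance-level retrieval needs.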

Implications and Future Directions

This research presents crucial implications for the field of image retrieval, highlighting the importance of leveraging intermediate CNN layers rather than relying solely on later layers designed for classification. It opens avenues for further research on how scales and various encoding schemes like BoW or Fisher Vectors could be applied following feature extraction to enhance the retrieval task.

For practical applications and real-world systems, recognizing which network layers and scales are optimal can guide the design of effective instance-level retrieval pipelines without retraining large networks, thus preserving computational resources. Future developments should explore how to dynamically select layer depth and input scales to optimize image retrieval performance further, possibly incorporating adaptive systems that choose these parameters based on retrieval context or content.

The paper substantially contributes to bridging the gap between the conceptual strengths of CNNs and practical retrieval efficiency, urging deeper exploration into the finer details preserved within the convolutions. This insight might serve as a foundational stepping stone for future advancements in both the fields of computer vision and machine learning.