Fast Supervised Hashing with Decision Trees for High-Dimensional Data (1404.1561v2)

Published 6 Apr 2014 in cs.CV and cs.LG

Abstract: Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the Hamming space. Non-linear hash functions have demonstrated the advantage over linear ones due to their powerful generalization capability. In the literature, kernel functions are typically used to achieve non-linearity in hashing, which achieve encouraging retrieval performance at the price of slow evaluation and training time. Here we propose to use boosted decision trees for achieving non-linearity in hashing, which are fast to train and evaluate, hence more suitable for hashing with high dimensional data. In our approach, we first propose sub-modular formulations for the hashing binary code inference problem and an efficient GraphCut based block search method for solving large-scale inference. Then we learn hash functions by training boosted decision trees to fit the binary codes. Experiments demonstrate that our proposed method significantly outperforms most state-of-the-art methods in retrieval precision and training time. Especially for high-dimensional data, our method is orders of magnitude faster than many methods in terms of training time.

Fast Supervised Hashing with Decision Trees for High-Dimensional Data

The paper "Fast Supervised Hashing with Decision Trees for High-Dimensional Data" by Guosheng Lin et al. presents an advanced approach in the domain of supervised hashing, focusing on efficient mapping of high-dimensional data into compact binary codes. This work addresses the challenges of speed and retrieval precision associated with traditional non-linear hashing methods, particularly those leveraging kernel functions.

Summary and Methodology

At its core, the paper advocates boosted decision trees as hash functions in place of the conventional kernel functions, which, although effective, are costly to both train and evaluate. The authors propose a two-step learning framework:

  1. Binary Code Inference: This involves a novel approach where sub-modular formulations for the hashing problem are utilized, followed by a GraphCut-based block search method for large-scale inference. This step is crucial for efficiently handling large datasets by optimizing binary codes.
  2. Learning Hash Functions: Decision trees are then trained to fit the inferred binary codes. The boosted tree ensembles provide the non-linearity needed to capture complex structure in the data without the computational burden of kernel methods (a minimal sketch of both steps follows this list).
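
The sketch below illustrates the two-step framework under loudly labeled assumptions: a simple greedy bit-flipping pass stands in for the paper's sub-modular GraphCut-based block search, and scikit-learn's GradientBoostingClassifier stands in for the paper's boosted decision trees. The function names are illustrative, not the authors' API.

    # Illustrative sketch of the two-step framework; not the paper's code.
    # Assumes a symmetric pairwise affinity matrix Y (+1 similar, -1 dissimilar).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def infer_codes(Y, n_bits, n_sweeps=3, seed=0):
        # Step 1 (simplified): per-bit greedy bit flipping on the pairwise loss
        # sum_ij -Y_ij * b_i * b_j, standing in for GraphCut block search.
        rng = np.random.default_rng(seed)
        n = Y.shape[0]
        B = rng.choice([-1.0, 1.0], size=(n, n_bits))
        for k in range(n_bits):
            b = B[:, k]
            for _ in range(n_sweeps):
                # flipping b_i lowers the loss when b_i * (Y @ b)_i < 0;
                # flipping all such bits at once is a heuristic, not exact
                flip = b * (Y @ b) < 0
                if not flip.any():
                    break
                b[flip] *= -1.0
            B[:, k] = b
        return B

    def fit_hash_functions(X, B, n_trees=200, depth=4):
        # Step 2: fit one boosted-tree classifier per bit to the inferred codes.
        return [GradientBoostingClassifier(n_estimators=n_trees, max_depth=depth)
                    .fit(X, (B[:, k] > 0).astype(int))
                for k in range(B.shape[1])]

    def encode(models, X):
        # Apply the learned hash functions: one predicted bit per tree ensemble.
        return np.stack([m.predict(X) for m in models], axis=1).astype(np.uint8)

Note that each bit is fit independently in this sketch, so per-bit training trivially parallelizes.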

Performance and Comparisons

This method outperforms several state-of-the-art alternatives in both retrieval precision and training time. Relative to kernel-based supervised hashing (KSH), the decision-tree-based approach shows significant improvement: on the CIFAR10 dataset, the paper reports precision rising from 0.453 with KSH to 0.763 with the proposed method. Moreover, the proposed method, dubbed FastHash, accelerates training by orders of magnitude, making it feasible for high-dimensional datasets with tens of thousands of features.
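
As a point of reference for the numbers above, retrieval precision in this setting is commonly measured as precision at K over a Hamming ranking; the following is a minimal sketch, and the paper's exact evaluation protocol may differ.

    # Hedged sketch of precision@K over a Hamming ranking; the paper's exact
    # protocol may differ. `ranking` is an index array ordered by Hamming
    # distance to the query (e.g. from hamming_rank above).
    import numpy as np

    def precision_at_k(ranking, db_labels, query_label, k=100):
        # fraction of the top-k retrieved items sharing the query's label
        top = ranking[:k]
        return float(np.mean(db_labels[top] == query_label))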

Implications and Future Work

The proposed decision-tree-based hashing method is not only computationally efficient but also scalable, and is positioned to considerably impact applications such as image retrieval and large-scale object detection. By addressing both the speed and capacity challenges, the approach sets a new standard for real-world applicability.

Looking ahead, the framework could be enhanced by exploring other tree structures or boosting strategies to further optimize hashing performance. Hybrid models that combine tree ensembles with kernel embeddings might also yield further advances.

Conclusion

The research provides a significant step forward in supervised hashing, particularly benefiting scenarios where large-scale, high-dimensional data must be processed efficiently without sacrificing accuracy. The use of decision trees in this domain demonstrates a practical balance between model complexity and performance, paving the way for subsequent innovations in efficient data retrieval systems.

Authors (5)
  1. Guosheng Lin (157 papers)
  2. Chunhua Shen (404 papers)
  3. Qinfeng Shi (42 papers)
  4. Anton van den Hengel (188 papers)
  5. David Suter (33 papers)
Citations (413)