Sparse-Inductive Generative Adversarial Hashing for Nearest Neighbor Search (2306.06928v1)

Published 12 Jun 2023 in cs.CV and cs.LG

Abstract: Unsupervised hashing has received extensive research attention over the past decade, typically aiming to preserve a predefined metric (e.g., the Euclidean metric) in the Hamming space. To this end, the encoding functions of existing hashing methods are typically quasi-isometric, devoted to reducing the quantization loss from the target metric space to the discrete Hamming space. However, directly minimizing this error is problematic: the two metric spaces are heterogeneous, and the quasi-isometric mapping is non-linear. The former leads to inconsistent feature distributions, while the latter leads to difficult optimization. In this paper, we propose a novel unsupervised hashing method, termed Sparsity-Induced Generative Adversarial Hashing (SiGAH), which encodes large-scale, high-dimensional features into binary codes and resolves both problems through a generative adversarial training framework. Instead of minimizing the quantization loss, our key innovation lies in enforcing the learned Hamming space to have a data distribution similar to that of the target metric space via a generative model. In particular, we formulate a ReLU-based neural network as a generator that outputs binary codes, and an MSE-loss-based auto-encoder network as a discriminator, upon which adversarial learning is carried out to train the hash functions. Furthermore, to generate synthetic features from the hash codes, a compressed-sensing procedure is introduced into the generative model, which enforces the reconstruction boundary of the binary codes to be consistent with that of the original features. Finally, the whole generative adversarial framework can be trained via the Adam optimizer. Experimental results on four benchmarks, i.e., Tiny100K, GIST1M, Deep1M, and MNIST, show that the proposed SiGAH outperforms state-of-the-art approaches.
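The adversarial recipe the abstract describes, a ReLU generator producing (relaxed) binary codes and an MSE auto-encoder acting as an energy-based discriminator, can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: the layer sizes, the tanh relaxation, the hinge margin, the random Gaussian measurement matrix `A` standing in for the paper's compressed-sensing decoder, and the synthetic training data are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """ReLU-based hash generator: maps features to relaxed binary codes."""
    def __init__(self, feat_dim, code_len):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 512),
            nn.ReLU(),
            nn.Linear(512, code_len),
        )

    def forward(self, x):
        # tanh relaxation during training; use sign() at inference
        # to obtain {-1, +1} binary codes.
        return torch.tanh(self.net(x))

class AEDiscriminator(nn.Module):
    """MSE auto-encoder discriminator (energy-based GAN style):
    low reconstruction error on 'real' features, high on 'fake' ones."""
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, feat_dim)

    def energy(self, x):
        # Per-sample MSE reconstruction error serves as the energy.
        return ((self.dec(self.enc(x)) - x) ** 2).mean(dim=1)

feat_dim, code_len, margin = 128, 32, 1.0
G, D = Generator(feat_dim, code_len), AEDiscriminator(feat_dim)

# Fixed random measurement matrix: a hypothetical stand-in for the
# paper's compressed-sensing procedure, mapping codes back to
# synthetic features in the original space.
A = torch.randn(code_len, feat_dim) / code_len ** 0.5

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

for step in range(100):
    x = torch.randn(64, feat_dim)   # stand-in for real feature vectors
    fake = G(x) @ A                 # synthetic features from hash codes

    # Discriminator: low energy on real features, push fake energy
    # above the margin (EBGAN-style hinge loss).
    d_loss = (D.energy(x).mean()
              + torch.relu(margin - D.energy(fake.detach())).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: make synthetic features indistinguishable (low energy).
    g_loss = D.energy(G(x) @ A).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The hinge on the discriminator energy follows the energy-based GAN formulation that the MSE auto-encoder design suggests; matching distributions through reconstruction energy, rather than minimizing quantization loss directly, is the core idea the abstract highlights.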

