Hy-Tracker: A Novel Framework for Enhancing Efficiency and Accuracy of Object Tracking in Hyperspectral Videos (2311.18199v1)

Published 30 Nov 2023 in cs.CV

Abstract: Hyperspectral object tracking has recently emerged as a topic of great interest in the remote sensing community. A hyperspectral image, with its many bands, provides a rich source of material information about an object that can be effectively used for object tracking. While most hyperspectral trackers are based on detection-based techniques, no one has yet attempted to employ YOLO for detecting and tracking objects. This is due to the presence of multiple spectral bands, the scarcity of annotated hyperspectral videos, and YOLO's performance limitations in managing occlusions and distinguishing objects in cluttered backgrounds. Therefore, in this paper, we propose a novel framework called Hy-Tracker, which aims to bridge the gap between hyperspectral data and state-of-the-art object detection methods in order to leverage the strengths of YOLOv7 for object tracking in hyperspectral videos. Hy-Tracker not only introduces YOLOv7 but also incorporates a refinement-based tracking module on top of it. The tracker refines the initial detections produced by YOLOv7, leading to improved object-tracking performance. Furthermore, we incorporate a Kalman filter into the tracker, which addresses the challenges posed by scale variation and occlusion. Experimental results on hyperspectral benchmark datasets demonstrate the effectiveness of Hy-Tracker in accurately tracking objects across frames.
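The abstract assigns the Kalman filter the role of stabilizing YOLOv7 detections under scale variation and occlusion. The paper's own state parameterization and noise settings are not given here, so the following is a minimal illustrative sketch only, not the authors' implementation: a constant-velocity Kalman filter over a SORT-style box state [cx, cy, area, aspect ratio] plus velocities. The class name and all constants are hypothetical assumptions.

```python
import numpy as np

class BoxKalmanFilter:
    """Illustrative constant-velocity Kalman filter over a bounding-box
    state [cx, cy, s, r, vcx, vcy, vs], where s is the box area and r the
    aspect ratio (held constant, as in SORT-style trackers). This is a
    sketch of the general technique, not Hy-Tracker's published code."""

    def __init__(self, box):
        cx, cy, w, h = box
        self.x = np.array([cx, cy, w * h, w / h, 0.0, 0.0, 0.0])
        self.P = np.eye(7) * 10.0        # state covariance (assumed init)
        self.F = np.eye(7)               # constant-velocity transition
        self.F[0, 4] = self.F[1, 5] = self.F[2, 6] = 1.0
        self.H = np.eye(4, 7)            # we observe [cx, cy, s, r] only
        self.Q = np.eye(7) * 1e-2        # process noise (assumed)
        self.R = np.eye(4) * 1e-1        # measurement noise (assumed)

    def predict(self):
        """Propagate the state one frame ahead; repeated calls let a
        track coast through frames where the detector loses the target,
        e.g. under short occlusions."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self._to_box()

    def update(self, box):
        """Correct the prediction with a confirmed detection."""
        cx, cy, w, h = box
        z = np.array([cx, cy, w * h, w / h])
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(7) - K @ self.H) @ self.P
        return self._to_box()

    def _to_box(self):
        cx, cy, s, r = self.x[:4]
        w = np.sqrt(max(s * r, 1e-6))              # s = w*h, r = w/h
        h = s / w
        return np.array([cx, cy, w, h])

if __name__ == "__main__":
    kf = BoxKalmanFilter((120.0, 80.0, 32.0, 64.0))  # first detection: cx, cy, w, h
    print(kf.predict())                              # predicted box for the next frame
    print(kf.update((123.0, 82.0, 33.0, 65.0)))      # corrected with a new detection
```

Tracking the area term in the state is what lets such a filter follow gradual scale changes, while predict-only coasting covers short occlusions; both match the roles the abstract attributes to the Kalman filter in Hy-Tracker.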

