Motion-aware Memory Network for Fast Video Salient Object Detection (2208.00946v2)

Published 1 Aug 2022 in cs.CV

Abstract: Previous methods based on 3D CNNs, ConvLSTM, or optical flow have achieved great success in video salient object detection (VSOD). However, they still suffer from high computational costs or poor quality of the generated saliency maps. To solve these problems, we design a space-time memory (STM)-based network whose temporal branch extracts useful temporal information for the current frame from adjacent frames. Furthermore, previous methods considered only single-frame prediction without temporal association, so the model may not attend to temporal information sufficiently. Thus, we introduce inter-frame object motion prediction into VSOD for the first time. Our model follows a standard encoder-decoder architecture. In the encoding stage, we generate high-level temporal features from the high-level features of the current frame and its adjacent frames, which is more efficient than optical flow-based methods. In the decoding stage, we propose an effective fusion strategy for the spatial and temporal branches: the semantic information of the high-level features is used to fuse the object details in the low-level features, and the spatiotemporal features are then obtained step by step to reconstruct the saliency maps. Moreover, inspired by the boundary supervision commonly used in image salient object detection (ISOD), we design a motion-aware loss that predicts object boundary motion and simultaneously performs multitask learning for VSOD and object motion prediction, which further helps the model extract spatiotemporal features accurately and maintain object integrity. Extensive experiments on several datasets demonstrate the effectiveness of our method, which achieves state-of-the-art metrics on some of them. The proposed model requires no optical flow or other preprocessing and reaches a speed of nearly 100 FPS during inference.
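The abstract describes the temporal branch as an STM-style read that retrieves information for the current frame from adjacent frames. No implementation details are given on this page, so the following is only a minimal illustrative sketch of such a space-time memory read (in the spirit of the space-time memory networks the paper builds on); every function name, tensor shape, and the attention scaling here is an assumption, not the authors' code.

```python
import torch
import torch.nn.functional as F

def stm_read(query_key: torch.Tensor,
             memory_key: torch.Tensor,
             memory_value: torch.Tensor) -> torch.Tensor:
    """Illustrative space-time memory read (hypothetical sketch).

    query_key:    (B, Ck, H, W)     key features of the current frame
    memory_key:   (B, T, Ck, H, W)  key features of T adjacent frames
    memory_value: (B, T, Cv, H, W)  value features of T adjacent frames
    returns:      (B, Cv, H, W)     temporal features for the current frame
    """
    B, Ck, H, W = query_key.shape
    Cv = memory_value.shape[2]

    q = query_key.flatten(2)                            # (B, Ck, H*W)
    k = memory_key.permute(0, 2, 1, 3, 4).flatten(2)    # (B, Ck, T*H*W)
    v = memory_value.permute(0, 2, 1, 3, 4).flatten(2)  # (B, Cv, T*H*W)

    # Similarity of every query location to every memory location,
    # scaled as in dot-product attention (the scaling is our assumption).
    attn = torch.einsum('bcq,bcm->bqm', q, k) / (Ck ** 0.5)
    attn = F.softmax(attn, dim=-1)                      # (B, H*W, T*H*W)

    out = torch.einsum('bqm,bcm->bcq', attn, v)         # (B, Cv, H*W)
    return out.reshape(B, Cv, H, W)

# Toy usage: two adjacent frames serve as memory for the current frame.
q  = torch.randn(1, 64, 16, 16)
mk = torch.randn(1, 2, 64, 16, 16)
mv = torch.randn(1, 2, 128, 16, 16)
temporal_features = stm_read(q, mk, mv)  # -> (1, 128, 16, 16)
```

In the paper's encoder-decoder pipeline, temporal features like these would presumably be produced from high-level encoder features and then fused with the spatial branch in the decoder; the actual fusion strategy and the motion-aware loss are not specified on this page.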
