SASA: Saliency-Aware Self-Adaptive Snapshot Compressive Imaging (2401.00875v1)
Abstract: The ability of snapshot compressive imaging (SCI) systems to efficiently capture high-dimensional (HD) data rests on novel optical designs that sample the HD data as two-dimensional (2D) compressed measurements. Nonetheless, the traditional SCI scheme is fundamentally limited because it entirely disregards high-level information during sampling. To tackle this issue, in this paper we take a first step toward the adaptive design of coding masks for SCI. Specifically, we propose an efficient and effective algorithm that generates coding masks with the assistance of saliency detection, in a low-cost and low-power fashion. Experiments demonstrate the effectiveness and efficiency of our approach. Code is available at: https://github.com/IndigoPurple/SASA
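To make the sampling process concrete, the standard video SCI forward model modulates each of K frames with a binary coding mask and sums the results into a single 2D measurement, y = Σ_k C_k ⊙ x_k. The sketch below illustrates this with random masks; the saliency-aware mask generation proposed in the paper is more involved, and the names and shapes here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of the video SCI forward model (illustrative, not the
# paper's saliency-aware algorithm): K frames x_k of size H x W are
# modulated element-wise by binary masks C_k and summed into one 2D
# compressed measurement y.
rng = np.random.default_rng(0)
H, W, K = 64, 64, 8

frames = rng.random((K, H, W))                  # HD data: K video frames
masks = (rng.random((K, H, W)) > 0.5).astype(float)  # binary coding masks

# Single-shot 2D compressed measurement: y = sum_k C_k * x_k
y = np.sum(masks * frames, axis=0)

assert y.shape == (H, W)
```

A saliency-aware variant would, roughly speaking, bias the per-pixel mask statistics using a saliency map so that salient regions are sampled more informatively; the exact rule used by the paper is not reproduced here.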