Audio-visual Saliency for Omnidirectional Videos (2311.05190v1)
Abstract: Visual saliency prediction for omnidirectional videos (ODVs) is of great significance and necessity, as it can aid ODV coding, ODV transmission, ODV rendering, etc. However, most studies consider only visual information for ODV saliency prediction, while audio is rarely considered despite its significant influence on viewing behavior. This is mainly due to the lack of large-scale audio-visual ODV datasets and corresponding analyses. Thus, in this paper, we first establish the largest audio-visual saliency dataset for omnidirectional videos (AVS-ODV), which comprises omnidirectional videos, audio, and corresponding captured eye-tracking data for three audio modalities: mute, mono, and ambisonics. We then analyze the visual attention behavior of observers under the various omnidirectional audio modalities and visual scenes based on the AVS-ODV dataset. Furthermore, we compare the performance of several state-of-the-art saliency prediction models on the AVS-ODV dataset and construct a new benchmark. Our AVS-ODV dataset and benchmark will be released to facilitate future research.
- Yuxin Zhu (11 papers)
- Xilei Zhu (6 papers)
- Huiyu Duan (38 papers)
- Jie Li (553 papers)
- Kaiwei Zhang (11 papers)
- Yucheng Zhu (20 papers)
- Li Chen (590 papers)
- Xiongkuo Min (139 papers)
- Guangtao Zhai (231 papers)
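The abstract does not specify which evaluation metrics the benchmark uses, but saliency benchmarks of this kind conventionally report measures such as Pearson's correlation coefficient (CC) against a ground-truth saliency map and Normalized Scanpath Saliency (NSS) against binary fixation points. Below is a minimal sketch of these two standard metrics, not the authors' released code; the array shapes, equirectangular resolution, and random stand-in data are illustrative assumptions.

```python
# Sketch of two standard saliency-evaluation metrics (CC and NSS),
# as commonly reported by saliency benchmarks; not the AVS-ODV code.
import numpy as np

def cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pearson correlation between predicted and ground-truth saliency maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def nss(pred: np.ndarray, fixations: np.ndarray) -> float:
    """Normalized Scanpath Saliency: mean normalized saliency at fixated pixels."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(p[fixations > 0].mean())

if __name__ == "__main__":
    h, w = 512, 1024  # assumed equirectangular frame resolution
    rng = np.random.default_rng(0)
    pred = rng.random((h, w))           # stand-in for a model's saliency map
    gt = rng.random((h, w))             # stand-in for a fixation-density map
    fix = (rng.random((h, w)) > 0.999)  # stand-in for binary fixation points
    print(f"CC:  {cc(pred, gt):.4f}")
    print(f"NSS: {nss(pred, fix):.4f}")
```

Note that for ODVs stored as equirectangular projections, evaluation pipelines often additionally weight pixels by the cosine of latitude to compensate for oversampling near the poles; whether AVS-ODV's benchmark applies such weighting is not stated in the abstract.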