Learning Audio-Visual embedding for Person Verification in the Wild (2209.04093v2)
Abstract: It has been observed that audio-visual embeddings are more robust than uni-modal embeddings for person verification. Here, we propose a novel audio-visual strategy that considers aggregators from a fusion perspective. First, we introduce weight-enhanced attentive statistics pooling, applied for the first time to face verification. We then find a strong correlation between modalities during pooling, so we propose joint attentive pooling, which uses cycle consistency to learn implicit inter-frame weights. Finally, the modalities are fused with a gated attention mechanism to obtain a robust audio-visual embedding. All proposed models are trained on the VoxCeleb2 dev dataset, and the best system obtains 0.18%, 0.27%, and 0.49% EER on the three official trial lists of VoxCeleb1, which are, to our knowledge, the best published results for person verification.
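To make the two components named in the abstract concrete, below is a minimal PyTorch sketch of (a) attentive statistics pooling over frame-level features and (b) a gated attention fusion of the resulting audio and visual embeddings. This is a generic illustration of the underlying techniques, not the paper's exact architecture: the layer sizes, the single-branch attention form, the shared projection layer, and all symbol names are assumptions for the sketch.

```python
# Sketch of attentive statistics pooling and gated fusion (illustrative only;
# dimensions and layer shapes are assumed, not taken from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveStatsPooling(nn.Module):
    """Pool frame-level features (B, T, D) into a fixed (B, 2*D) vector of
    attention-weighted mean and standard deviation."""

    def __init__(self, feat_dim: int, attn_dim: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, feat_dim)
        alpha = F.softmax(self.attention(x), dim=1)         # per-frame weights (B, T, 1)
        mean = torch.sum(alpha * x, dim=1)                   # weighted mean (B, D)
        var = torch.sum(alpha * x ** 2, dim=1) - mean ** 2   # weighted variance
        std = torch.sqrt(var.clamp(min=1e-8))                # weighted std (B, D)
        return torch.cat([mean, std], dim=1)                 # (B, 2*D)


class GatedFusion(nn.Module):
    """Fuse audio and visual embeddings with a learned sigmoid gate."""

    def __init__(self, emb_dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * emb_dim, emb_dim)

    def forward(self, audio_emb: torch.Tensor, visual_emb: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([audio_emb, visual_emb], dim=1)))
        return g * audio_emb + (1.0 - g) * visual_emb        # (B, emb_dim)


if __name__ == "__main__":
    B, T, D = 4, 50, 256
    audio_frames = torch.randn(B, T, D)                      # dummy frame-level features
    visual_frames = torch.randn(B, T, D)

    pool = AttentiveStatsPooling(D)
    project = nn.Linear(2 * D, D)                            # map pooled stats back to D
    fuse = GatedFusion(D)

    audio_emb = project(pool(audio_frames))
    visual_emb = project(pool(visual_frames))
    fused = fuse(audio_emb, visual_emb)
    print(fused.shape)                                       # torch.Size([4, 256])
```

The "weight-enhanced" pooling, the joint attentive pooling with cycle consistency, and the exact gating layout described in the paper would modify these blocks; the sketch only shows the standard attentive-statistics and gated-fusion building points they start from.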
- Peiwen Sun (11 papers)
- Shanshan Zhang (36 papers)
- Zishan Liu (2 papers)
- Yougen Yuan (4 papers)
- Taotao Zhang (2 papers)
- Honggang Zhang (108 papers)
- Pengfei Hu (54 papers)