A Review of Deep Learning for Video Captioning (2304.11431v1)
Abstract: Video captioning (VC) is a fast-moving, cross-disciplinary area of research that bridges work in the fields of computer vision, NLP, linguistics, and human-computer interaction. In essence, VC involves understanding a video and describing it with language. Captioning is used in a host of applications, from creating more accessible interfaces (e.g., low-vision navigation) to video question answering (V-QA), video retrieval, and content generation. This survey covers deep learning-based VC, including, but not limited to, attention-based architectures, graph networks, reinforcement learning, adversarial networks, and dense video captioning (DVC). We also discuss the datasets and evaluation metrics used in the field, as well as the limitations, applications, challenges, and future directions of VC.
Authors:
- Moloud Abdar
- Meenakshi Kollati
- Swaraja Kuraparthi
- Farhad Pourpanah
- Daniel McDuff
- Mohammad Ghavamzadeh
- Shuicheng Yan
- Abduallah Mohamed
- Abbas Khosravi
- Erik Cambria
- Fatih Porikli