Exploration of Visual Features and their weighted-additive fusion for Video Captioning (2101.05806v1)
Abstract: Video captioning is a popular task that challenges models to describe events in videos using natural language. In this work, we investigate the ability of various visual feature representations derived from state-of-the-art convolutional neural networks to capture high-level semantic context. We introduce the Weighted Additive Fusion Transformer with Memory Augmented Encoders (WAFTM), a captioning model that incorporates memory into a transformer encoder and fuses features with a novel method that ensures due importance is given to the more significant representations. We demonstrate further performance gains from applying WordPiece tokenization and the REINFORCE algorithm. Finally, we benchmark our model on two datasets, obtaining a CIDEr of 92.4 on MSVD and a METEOR of 0.091 on the ActivityNet Captions dataset.
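The abstract only names the fusion method, so here is a minimal PyTorch sketch of one plausible weighted-additive fusion scheme: each visual feature stream is projected to a common dimension and the streams are combined by learned, softmax-normalized scalar weights, so more informative representations can dominate the fused output. The class name, parameter choices, and feature dimensions (`WeightedAdditiveFusion`, `d_model=512`, the 2048/1024-dim streams) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class WeightedAdditiveFusion(nn.Module):
    """Illustrative sketch of weighted-additive feature fusion.

    Each feature stream (e.g., 2D appearance CNN, 3D motion CNN) is
    projected into a shared space, then summed with learned weights.
    Names and dimensions are assumptions, not the paper's code.
    """

    def __init__(self, input_dims, d_model):
        super().__init__()
        # One linear projection per feature stream into the shared space.
        self.projections = nn.ModuleList(
            nn.Linear(d, d_model) for d in input_dims
        )
        # One learnable scalar weight per stream.
        self.weights = nn.Parameter(torch.zeros(len(input_dims)))

    def forward(self, features):
        # features: list of tensors, each (batch, seq_len, input_dims[i])
        projected = [proj(f) for proj, f in zip(self.projections, features)]
        # Softmax keeps the fusion weights positive and summing to 1,
        # letting the model emphasize the more significant streams.
        alphas = torch.softmax(self.weights, dim=0)
        fused = sum(a * p for a, p in zip(alphas, projected))
        return fused  # (batch, seq_len, d_model)


# Usage with hypothetical appearance (e.g., ResNet) and motion (e.g., I3D)
# features for a batch of 4 videos, 20 timesteps each.
fusion = WeightedAdditiveFusion(input_dims=[2048, 1024], d_model=512)
appearance = torch.randn(4, 20, 2048)
motion = torch.randn(4, 20, 1024)
out = fusion([appearance, motion])
print(out.shape)  # torch.Size([4, 20, 512])
```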
- Praveen S V
- Akhilesh Bharadwaj
- Harsh Raj
- Janhavi Dadhania
- Ganesh Samarth C. A
- Nikhil Pareek
- S R M Prasanna