Bangla Image Caption Generation through CNN-Transformer based Encoder-Decoder Network (2110.12442v1)

Published 24 Oct 2021 in cs.CV and cs.AI

Abstract: Automatic Image Captioning is the ongoing effort of generating syntactically correct and contextually accurate textual descriptions of images in natural language. The encoder-decoder structures used throughout existing Bengali Image Captioning (BIC) research take abstract image feature vectors as the encoder's input. We propose a novel transformer-based architecture with an attention mechanism, combined with a pre-trained ResNet-101 image encoder for feature extraction. Experiments demonstrate that the language decoder in our technique captures fine-grained information in the caption and, when paired with image features, produces accurate and diverse captions on the BanglaLekhaImageCaptions dataset. Our approach outperforms all existing Bengali Image Captioning work and sets a new benchmark by scoring 0.694 on BLEU-1, 0.630 on BLEU-2, 0.582 on BLEU-3, and 0.337 on METEOR.
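
The abstract outlines a pre-trained ResNet-101 encoder feeding image features to a transformer decoder that attends to them while generating the caption. The sketch below illustrates that general CNN-Transformer captioning setup in PyTorch; it is not the authors' implementation, and the hyperparameters (model dimension, decoder depth, head count, vocabulary size, caption length) and the tokenization of Bangla captions are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CNNEncoder(nn.Module):
    """Extracts spatial feature maps from a pre-trained ResNet-101 backbone."""
    def __init__(self, d_model=512):
        super().__init__()
        resnet = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
        # Drop the average-pooling and classification head; keep conv feature maps.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep the pre-trained encoder frozen
        self.project = nn.Linear(2048, d_model)  # map ResNet channels to model dim

    def forward(self, images):                        # images: (B, 3, H, W)
        feats = self.backbone(images)                 # (B, 2048, h, w)
        feats = feats.flatten(2).transpose(1, 2)      # (B, h*w, 2048) patch sequence
        return self.project(feats)                    # (B, h*w, d_model)

class CaptionDecoder(nn.Module):
    """Transformer decoder that cross-attends to image features while generating tokens."""
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=4, max_len=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positions
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, memory):                # tokens: (B, T), memory: (B, S, d)
        T = tokens.size(1)
        x = self.embed(tokens) + self.pos[:, :T]
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        x = self.decoder(x, memory, tgt_mask=causal)  # cross-attention over image features
        return self.out(x)                            # (B, T, vocab_size)

# Usage sketch: one teacher-forced training step on dummy data.
encoder, decoder = CNNEncoder(), CaptionDecoder(vocab_size=8000)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 8000, (2, 20))            # Bangla token ids from some tokenizer
logits = decoder(captions[:, :-1], encoder(images))   # predict each next token
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 8000), captions[:, 1:].reshape(-1))
```

In this kind of setup the frozen ResNet feature map is flattened into a sequence of spatial positions and projected to the decoder's embedding dimension, so the decoder's cross-attention can weight image regions when producing each caption token; the specific layer sizes and training details of the paper are not reproduced here.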

Authors (6)
  1. Md Aminul Haque Palash (5 papers)
  2. MD Abdullah Al Nasim (27 papers)
  3. Sourav Saha (16 papers)
  4. Faria Afrin (3 papers)
  5. Raisa Mallik (1 paper)
  6. Sathishkumar Samiappan (1 paper)
Citations (9)
