
With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning (2308.12383v1)

Published 23 Aug 2023 in cs.CV, cs.AI, cs.CL, and cs.MM

Abstract: Image captioning, like many tasks involving vision and language, currently relies on Transformer-based architectures for extracting the semantics of an image and translating it into linguistically coherent descriptions. Although successful, the attention operator only considers a weighted summation of projections of the current input sample, thus ignoring relevant semantic information that can come from the joint observation of other samples. In this paper, we devise a network which can perform attention over activations obtained while processing other training samples, through a prototypical memory model. Our memory models the distribution of past keys and values through the definition of prototype vectors which are both discriminative and compact. Experimentally, we assess the performance of the proposed model on the COCO dataset, in comparison with carefully designed baselines and state-of-the-art approaches, and by investigating the role of each of the proposed components. We demonstrate that our proposal can increase the performance of an encoder-decoder Transformer by 3.7 CIDEr points, both when training with cross-entropy loss only and when fine-tuning with self-critical sequence training. Source code and trained models are available at: https://github.com/aimagelab/PMA-Net.
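The core idea of the abstract is that standard self-attention only mixes projections of the current sample, while the proposed prototypical memory lets attention also read from a compact summary of keys and values gathered across other training samples. The sketch below is a minimal illustration of that mechanism, not the authors' implementation: it stands in freely learned prototype parameters (`proto_k`, `proto_v`) for PMA-Net's prototypes fitted to the distribution of past activations, and all names (`PrototypeAugmentedAttention`, `n_prototypes`) are hypothetical.

```python
# Minimal sketch of attention augmented with a prototype memory.
# Not the PMA-Net implementation: prototypes here are ordinary learned
# parameters, whereas the paper derives them from past keys/values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeAugmentedAttention(nn.Module):
    def __init__(self, d_model: int, n_prototypes: int = 64):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Prototype bank: a compact stand-in for keys/values observed
        # while processing other training samples.
        self.proto_k = nn.Parameter(torch.randn(n_prototypes, d_model))
        self.proto_v = nn.Parameter(torch.randn(n_prototypes, d_model))
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q = self.q_proj(x)
        k = self.k_proj(x)
        v = self.v_proj(x)
        # Prepend the prototypes so attention can draw on semantics
        # aggregated beyond the current input sample.
        b = x.size(0)
        k = torch.cat([self.proto_k.expand(b, -1, -1), k], dim=1)
        v = torch.cat([self.proto_v.expand(b, -1, -1), v], dim=1)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v


# Usage: a (batch, seq_len, d_model) tensor attends over itself plus
# the 64 prototype slots.
layer = PrototypeAugmentedAttention(d_model=512)
out = layer(torch.randn(2, 10, 512))  # -> (2, 10, 512)
```

In the paper's formulation the prototypes are built to be discriminative and compact summaries of past activations; treating them as plain learned parameters, as above, keeps the example self-contained while preserving the essential mechanism of attending beyond the current sample.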

Authors (5)
  1. Manuele Barraco (3 papers)
  2. Sara Sarto (12 papers)
  3. Marcella Cornia (61 papers)
  4. Lorenzo Baraldi (68 papers)
  5. Rita Cucchiara (142 papers)
Citations (12)
