Transferable Decoding with Visual Entities for Zero-Shot Image Captioning (2307.16525v1)

Published 31 Jul 2023 in cs.CV and cs.CL

Abstract: Image-to-text generation aims to describe images using natural language. Recently, zero-shot image captioning based on pre-trained vision-language models (VLMs) and large language models (LLMs) has made significant progress. However, we have observed and empirically demonstrated that these methods are susceptible to modality bias induced by LLMs and tend to generate descriptions containing objects (entities) that do not actually exist in the image but frequently appear during training (i.e., object hallucination). In this paper, we propose ViECap, a transferable decoding model that leverages entity-aware decoding to generate descriptions in both seen and unseen scenarios. ViECap incorporates entity-aware hard prompts to guide LLMs' attention toward the visual entities present in the image, enabling coherent caption generation across diverse scenes. With entity-aware hard prompts, ViECap is capable of maintaining performance when transferring from in-domain to out-of-domain scenarios. Extensive experiments demonstrate that ViECap sets a new state-of-the-art in cross-domain (transferable) captioning and performs competitively in in-domain captioning compared to previous VLMs-based zero-shot methods. Our code is available at: https://github.com/FeiElysia/ViECap

Zero-Shot Image Captioning with Transferable Decoding and Visual Entities

The paper "Transferable Decoding with Visual Entities for Zero-Shot Image Captioning" addresses the increasingly prominent task of image-to-text generation, specifically focusing on zero-shot image captioning leveraging pre-trained Vision-LLMs (VLMs) and LLMs. The research underscores the limitations posed by modality bias and object hallucination when deploying these models in zero-shot settings and introduces ViECap, a novel approach that integrates entity-aware decoding to bolster captioning across both seen and unseen domains.

Summary

The authors recognize significant strides in zero-shot image captioning, primarily driven by VLMs like CLIP and ALIGN, which demonstrate robust transferability across various discriminative tasks. However, challenges persist when adapting VLMs and LLMs to zero-shot generative tasks such as image captioning, where modality bias often results in descriptions that are not pertinent to the provided images. The paper identifies that existing late-guidance methods, in which visual cues are introduced only after word prediction, contribute to this bias, while early-guidance approaches still struggle with object hallucination.

To overcome these challenges, the paper proposes ViECap, a transferable decoding framework built around entity-aware hard prompts. These prompts are designed to direct the LLM's focus toward the visual entities actually present in an image, enabling coherent and contextually relevant caption generation. ViECap combines entity-aware hard prompts with an early-guidance mechanism to maintain efficacy on traditional in-domain (ID) data as well as in out-of-domain (OOD) scenarios. This dual strategy not only addresses modality bias and hallucination but also enhances the cross-domain applicability of the model.
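
To make the early-guidance idea concrete, the following is a minimal sketch (not the official ViECap code) of how an entity-aware hard prompt could be assembled at inference time: candidate entities are scored against the image with CLIP, and the top matches are slotted into a fixed textual template that is then handed to the language decoder. The entity vocabulary, prompt template, image path, and top-k value here are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch (not the official ViECap code): retrieve likely entities with CLIP
# and build an entity-aware hard prompt for the language decoder.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical entity vocabulary; in practice this would be a large noun list.
vocab = ["dog", "cat", "surfboard", "beach", "bicycle", "pizza"]
image = Image.open("example.jpg")  # placeholder path

inputs = processor(
    text=[f"a photo of a {word}" for word in vocab],
    images=image,
    return_tensors="pt",
    padding=True,
)
with torch.no_grad():
    scores = model(**inputs).logits_per_image.squeeze(0)  # one score per vocab entry

top_k = scores.topk(k=2)
entities = [vocab[i] for i in top_k.indices.tolist()]

# Assumed hard-prompt template; the exact wording used in the paper may differ.
hard_prompt = "There are " + ", ".join(entities) + " in the image."
print(hard_prompt)  # e.g. "There are dog, beach in the image."
```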

Key Findings

Extensive experimentation validates the enhanced performance of ViECap over prior zero-shot methods. Notably, the model achieves state-of-the-art results in cross-domain captioning, with a significant increase in CIDEr scores, particularly in OOD contexts, indicating superior transferability. This improvement is driven by the entity-aware prompts, which are built from entities observed during training as well as entities inferred from the image at inference time, providing a robust mechanism for captioning novel visual instances.
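
As an illustration of the training-time side of this mechanism, entities can be harvested directly from ground-truth captions (for example, by keeping the nouns), so the decoder learns to condition on the same kind of prompt it will later receive at inference. The sketch below uses NLTK part-of-speech tagging purely as an assumed stand-in for whatever entity-extraction step the authors actually employ.

```python
# Minimal sketch (assumed, not the official pipeline): extract noun entities
# from a ground-truth caption to build the training-time hard prompt.
import nltk

# One-time setup: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
caption = "A dog is running on the beach with a frisbee."

tokens = nltk.word_tokenize(caption)
tagged = nltk.pos_tag(tokens)
entities = [word.lower() for word, tag in tagged if tag.startswith("NN")]
# entities -> ["dog", "beach", "frisbee"]

hard_prompt = "There are " + ", ".join(entities) + " in the image."
```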

The practical implications of the research are profound, offering a scalable solution to the data-hungry demands of traditional supervised image captioning models. By reducing reliance on paired image-text data and capitalizing on the extensive knowledge encapsulated within pre-trained VLMs and LLMs, ViECap presents a compelling case for adopting entity-aware decoding in zero-shot settings. Furthermore, the model's adaptability to low-data environments underscores its utility in resource-constrained applications, presenting opportunities for widespread deployment across diverse scenarios.

Implications and Future Directions

The integration of entity-aware prompts marks a notable advancement in the quest for more generalized and accurate image-to-text generation. The ability to seamlessly extend a model's capabilities from in-domain to unseen, out-of-domain contexts unlocks significant potential in applications ranging from content creation to accessibility tools.

Future research directions may explore optimizing the entity-aware prompting mechanism, possibly through adaptive learning strategies that dynamically select the most relevant entities for prompt construction. Additionally, further investigation into the balance between soft and hard prompts could provide deeper insights into the precise mechanisms underlying effective cross-domain transfer in generative models.

In conclusion, the paper presents a sophisticated approach to zero-shot image captioning, addressing critical issues of generalizability and accuracy. ViECap's methodological innovations and empirical results contribute substantially to the broader discussion on improving multi-modal AI systems' efficiency and effectiveness in diverse application domains.

Authors (6)
  1. Junjie Fei (6 papers)
  2. Teng Wang (92 papers)
  3. Jinrui Zhang (19 papers)
  4. Zhenyu He (57 papers)
  5. Chengjie Wang (178 papers)
  6. Feng Zheng (117 papers)
Citations (21)