Where to put the Image in an Image Caption Generator

Published 27 Mar 2017 in cs.NE, cs.CL, and cs.CV | arXiv:1703.09137v2

Abstract: When a recurrent neural network (RNN) language model is used for caption generation, the image information can be fed to the neural network either by directly incorporating it in the RNN -- conditioning the language model by "injecting" image features -- or in a layer following the RNN -- conditioning the language model by "merging" image features. While both options are attested in the literature, there is as yet no systematic comparison between the two. In this paper we empirically show that it makes little difference to performance which of the two architectures is used. The merge architecture does have practical advantages, however, as conditioning by merging allows the RNN's hidden state vector to shrink in size by up to four times. Our results suggest that the visual and linguistic modalities for caption generation need not be jointly encoded by the RNN, as that yields large, memory-intensive models with few tangible advantages in performance; rather, multimodal integration should be delayed to a subsequent stage.
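The inject/merge distinction described in the abstract can be illustrated with a minimal sketch. The following toy example (not the paper's actual implementation; all dimensions and weight initializations are hypothetical) uses a plain Elman RNN in NumPy: the inject variant concatenates image features to every word embedding before the RNN, so the hidden state must encode both modalities, while the merge variant runs the RNN over words only and combines its final state with the image features in a later output layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen for illustration only: word embedding,
# image feature vector, RNN hidden state, vocabulary, sequence length.
EMBED, IMG, HIDDEN, VOCAB, STEPS = 64, 128, 256, 1000, 5


def run_rnn(inputs, in_dim):
    """Run a simple Elman RNN over a sequence; return the final hidden state."""
    Wx = rng.normal(0, 0.1, (in_dim, HIDDEN))
    Wh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
    h = np.zeros(HIDDEN)
    for x in inputs:
        h = np.tanh(x @ Wx + h @ Wh)
    return h


words = [rng.normal(size=EMBED) for _ in range(STEPS)]
img = rng.normal(size=IMG)

# Inject: image features are concatenated to every word embedding, so the
# RNN jointly encodes the visual and linguistic information.
h_inject = run_rnn([np.concatenate([w, img]) for w in words], EMBED + IMG)

# Merge: the RNN sees only the words; image features are combined with its
# output in a subsequent layer, keeping the RNN purely linguistic.
h_lang = run_rnn(words, EMBED)
merged = np.concatenate([h_lang, img])
W_out = rng.normal(0, 0.1, (merged.size, VOCAB))
logits = merged @ W_out  # next-word scores, conditioned on the image

print(h_inject.shape, logits.shape)
```

Because the merge RNN never has to carry image information through time, its hidden state can be made smaller, which is the source of the memory savings the abstract reports.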
