Perspectives and Prospects on Transformer Architecture for Cross-Modal Tasks with Language and Vision (2103.04037v2)

Published 6 Mar 2021 in cs.CV and cs.CL

Abstract: Transformer architectures have brought about fundamental changes to the field of computational linguistics, which had been dominated by recurrent neural networks for many years. Their success also implies drastic changes for cross-modal tasks involving language and vision, and many researchers have already tackled the issue. In this paper, we review some of the most critical milestones in the field, as well as overall trends in how the transformer architecture has been incorporated into visuolinguistic cross-modal tasks. Furthermore, we discuss its current limitations and speculate upon some of the prospects that we find imminent.
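To make the abstract's central idea concrete, below is a minimal sketch of how transformer attention is typically adapted for cross-modal fusion: one modality's tokens query the other's. This is an illustrative example only, not the paper's method; the class name, dimensions, and token counts are assumptions chosen for demonstration.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Illustrative cross-attention block: text tokens attend to visual features.

    A hypothetical sketch of the general pattern surveyed in the paper,
    not an implementation of any specific model it discusses.
    """

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_tokens: torch.Tensor, visual_tokens: torch.Tensor) -> torch.Tensor:
        # Queries come from the language stream; keys and values come from the
        # vision stream, so each word representation is refined with visual context.
        attended, _ = self.attn(text_tokens, visual_tokens, visual_tokens)
        # Residual connection followed by layer normalization, as in a
        # standard transformer sublayer.
        return self.norm(text_tokens + attended)

# Example usage: a batch of 2 samples, 16 text tokens, 36 image-region features.
text = torch.randn(2, 16, 512)
vision = torch.randn(2, 36, 512)
fused = CrossModalAttention()(text, vision)
print(fused.shape)  # torch.Size([2, 16, 512])
```

Two-stream models typically stack such blocks in both directions (text-to-vision and vision-to-text), while single-stream models instead concatenate both token sequences and apply ordinary self-attention.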

Authors (3)
  1. Andrew Shin (12 papers)
  2. Masato Ishii (14 papers)
  3. Takuya Narihira (18 papers)
Citations (31)
