Towards More Unified In-context Visual Understanding (2312.02520v2)

Published 5 Dec 2023 in cs.CV

Abstract: The rapid advancement of LLMs has accelerated the emergence of in-context learning (ICL) as a cutting-edge approach in the natural language processing domain. Recently, ICL has been employed in visual understanding tasks, such as semantic segmentation and image captioning, yielding promising results. However, existing visual ICL frameworks cannot produce content across multiple modalities, which limits their potential usage scenarios. To address this issue, we present a new ICL framework for visual understanding with multimodal output enabled. First, we quantize and embed both text and visual prompts into a unified representational space, structured as interleaved in-context sequences. Then a decoder-only sparse transformer architecture is employed to perform generative modeling on them, facilitating in-context learning. Thanks to this design, the model is capable of handling in-context vision understanding tasks with multimodal output in a unified pipeline. Experimental results demonstrate that our model achieves competitive performance compared with specialized models and previous ICL baselines. Overall, our research takes a further step toward unified multimodal in-context learning.
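The abstract's core recipe is: map text tokens and quantized visual tokens into one shared id space, interleave them as a single in-context sequence, and model the result autoregressively with a decoder-only transformer. Below is a minimal sketch of that idea, not the paper's actual implementation: the visual tokenizer is assumed to be a VQ-style codebook, the sparse transformer is approximated with a dense one, and all names and sizes (TEXT_VOCAB, VISUAL_CODES, D_MODEL, UnifiedICLDecoder) are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the paper does not specify these here.
TEXT_VOCAB = 32000      # text token ids occupy [0, TEXT_VOCAB)
VISUAL_CODES = 8192     # VQ codebook ids, offset into a shared vocabulary
UNIFIED_VOCAB = TEXT_VOCAB + VISUAL_CODES
D_MODEL = 512

def to_unified_ids(text_ids: torch.Tensor, visual_codes: torch.Tensor) -> torch.Tensor:
    """Map text ids and quantized visual codes into one shared id space and
    concatenate them as an interleaved in-context sequence."""
    visual_ids = visual_codes + TEXT_VOCAB  # offset so the two id ranges don't collide
    return torch.cat([text_ids, visual_ids], dim=-1)

class UnifiedICLDecoder(nn.Module):
    """Decoder-only transformer over the unified token space (dense here;
    the paper employs a sparse variant)."""
    def __init__(self, n_layers: int = 4, n_heads: int = 8):
        super().__init__()
        self.embed = nn.Embedding(UNIFIED_VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(
            D_MODEL, n_heads, batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(D_MODEL, UNIFIED_VOCAB, bias=False)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        # Causal mask so each position attends only to earlier tokens,
        # enabling next-token prediction over text *and* visual codes.
        causal = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        h = self.blocks(self.embed(ids), mask=causal, is_causal=True)
        return self.lm_head(h)

# Usage: one in-context example (text instruction + image tokens) as a query.
text = torch.randint(0, TEXT_VOCAB, (1, 16))
image = torch.randint(0, VISUAL_CODES, (1, 256))  # e.g. a 16x16 VQ grid, flattened
seq = to_unified_ids(text, image)
logits = UnifiedICLDecoder()(seq)                 # (1, 272, UNIFIED_VOCAB)
```

Because the output distribution spans both id ranges, the same decoding loop can emit text tokens or visual codes, which is what makes multimodal output possible in a single unified pipeline.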

Authors (10)
  1. Dianmo Sheng (5 papers)
  2. Dongdong Chen (164 papers)
  3. Zhentao Tan (24 papers)
  4. Qiankun Liu (14 papers)
  5. Qi Chu (52 papers)
  6. Jianmin Bao (65 papers)
  7. Tao Gong (34 papers)
  8. Bin Liu (441 papers)
  9. Shengwei Xu (8 papers)
  10. Nenghai Yu (173 papers)
Citations (5)