Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models (2403.03003v1)

Published 5 Mar 2024 in cs.CV

Abstract: Despite remarkable progress, existing multimodal LLMs (MLLMs) are still inferior in granular visual recognition. Contrary to previous works, we study this problem from the perspective of image resolution, and reveal that a combination of low- and high-resolution visual features can effectively mitigate this shortcoming. Based on this observation, we propose a novel and efficient method for MLLMs, termed Mixture-of-Resolution Adaptation (MRA). In particular, MRA adopts two visual pathways for images with different resolutions, where high-resolution visual information is embedded into the low-resolution pathway via the novel mixture-of-resolution adapters (MR-Adapters). This design also greatly reduces the input sequence length of MLLMs. To validate MRA, we apply it to a recent MLLM called LLaVA, and term the new model LLaVA-HR. We conduct extensive experiments on 11 vision-language (VL) tasks, which show that LLaVA-HR outperforms existing MLLMs on 8 VL tasks, e.g., +9.4% on TextVQA. More importantly, both training and inference of LLaVA-HR remain efficient with MRA, e.g., 20 training hours and 3$\times$ inference speed than LLaVA-1.5. Source codes are released at: https://github.com/luogen1996/LLaVA-HR.

Mixture-of-Resolution Adaptation for Multimodal LLMs

The paper "Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal LLMs" presents a novel technique to enhance the capabilities of multimodal LLMs (MLLMs) in fine-grained visual recognition tasks. This approach, named Mixture-of-Resolution Adaptation (MRA), tackles the challenges in visual content comprehension by leveraging both high- and low-resolution image features.

Core Contributions

At its core, MRA introduces a dual-pathway design for image encoding, which simultaneously processes high-resolution and low-resolution visual information. This parallel processing is augmented by the Mixture-of-Resolution Adapters (MR-Adapters), which effectively embed high-resolution data into the low-resolution pathway, leading to reduced input sequence lengths and enhanced performance in granular visual recognition tasks.
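
To make the dual-pathway idea and the sequence-length reduction concrete, the following PyTorch sketch shows one way a large high-resolution feature map could be pooled onto the low-resolution token grid before being passed to the LLM. The resolutions, dimensions, and module choices (a patch embedder standing in for the ViT pathway, a strided convolution for the high-resolution pathway) are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualResolutionEncoder(nn.Module):
    """Illustrative dual-pathway encoder (not the authors' implementation).

    A 14x14 patch embedder stands in for the low-resolution ViT pathway and a
    strided convolution stands in for the high-resolution convolutional pathway.
    """
    def __init__(self, dim=1024, high_dim=1536, grid=24):
        super().__init__()
        self.grid = grid
        # Low-resolution pathway: 336x336 image -> 24x24 = 576 tokens.
        self.low_patch = nn.Conv2d(3, dim, kernel_size=14, stride=14)
        # High-resolution pathway: 1024x1024 image -> 64x64 feature map.
        self.high_conv = nn.Conv2d(3, high_dim, kernel_size=16, stride=16)
        # Projection that injects high-res features into the low-res token space.
        self.adapter = nn.Linear(high_dim, dim)

    def forward(self, img_low, img_high):
        tok_low = self.low_patch(img_low).flatten(2).transpose(1, 2)  # (B, 576, dim)
        feat_high = self.high_conv(img_high)                          # (B, high_dim, 64, 64)
        # Pool the high-res map down to the 24x24 token grid, so the LLM still
        # receives only 576 visual tokens despite the much larger input image.
        pooled = F.adaptive_avg_pool2d(feat_high, self.grid)          # (B, high_dim, 24, 24)
        pooled = pooled.flatten(2).transpose(1, 2)                    # (B, 576, high_dim)
        return tok_low + self.adapter(pooled)                         # fused (B, 576, dim)

enc = DualResolutionEncoder()
fused = enc(torch.randn(1, 3, 336, 336), torch.randn(1, 3, 1024, 1024))
print(fused.shape)  # torch.Size([1, 576, 1024])
```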

Methodological Insights

The dual visual pathways conceptually mirror the global and local processing mechanisms of the human visual system: the high-resolution pathway is dedicated to capturing fine-grained visual details, whereas the low-resolution pathway provides the broader semantic context of the image. This division of labor aligns with evidence for parallel processing streams in the primate visual system (Merigan & Maunsell, 1993).

The MR-Adapters serve as a bridge between these two pathways, facilitating efficient information exchange and integration of fine-grained features into a cohesive visual representation.
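
The sketch below shows one plausible form such an adapter could take: the high-resolution map is pooled to the low-resolution spatial grid, projected to the same channel width, and blended in through a learned gate. The specific layers and the sigmoid gating are assumptions made for illustration; the released code defines the actual MR-Adapter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MRAdapterSketch(nn.Module):
    """Hedged sketch of a mixture-of-resolution adapter.

    High-resolution features are pooled to the low-resolution spatial grid,
    projected, and blended into the low-resolution features via a learned gate.
    The layer choices and gating function are assumptions, not the paper's design.
    """
    def __init__(self, low_dim=1024, high_dim=1536):
        super().__init__()
        self.proj_high = nn.Conv2d(high_dim, low_dim, kernel_size=1)
        self.proj_low = nn.Conv2d(low_dim, low_dim, kernel_size=3, padding=1)
        # Channel-wise gate deciding how much high-res detail to mix in.
        self.gate = nn.Sequential(nn.Linear(2 * low_dim, low_dim), nn.Sigmoid())

    def forward(self, feat_low, feat_high):
        # feat_low:  (B, low_dim, h, w)   low-resolution pathway features
        # feat_high: (B, high_dim, H, W)  high-resolution pathway features
        h, w = feat_low.shape[-2:]
        high = self.proj_high(F.adaptive_avg_pool2d(feat_high, (h, w)))
        low = self.proj_low(feat_low)
        # Gate computed from globally pooled statistics of both branches.
        stats = torch.cat([low.mean(dim=(2, 3)), high.mean(dim=(2, 3))], dim=1)
        g = self.gate(stats).unsqueeze(-1).unsqueeze(-1)   # (B, low_dim, 1, 1)
        return feat_low + g * low + (1 - g) * high         # fused feature map

adapter = MRAdapterSketch()
out = adapter(torch.randn(1, 1024, 24, 24), torch.randn(1, 1536, 64, 64))
print(out.shape)  # torch.Size([1, 1024, 24, 24])
```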

Empirical Validation

The practical efficacy of MRA is demonstrated through its integration into the MLLM LLaVA, yielding the enhanced model LLaVA-HR. Empirical results across 11 vision-language tasks show that LLaVA-HR outperforms existing models on 8 of them, with notable gains on tasks such as TextVQA (+9.4%). Crucially, these improvements do not come at the expense of computational efficiency: the authors report roughly 20 hours of training and about three times faster inference than the non-adapted baseline, LLaVA-1.5, underscoring the approach's cost-effectiveness.

Practical Implications

The introduction of MRA bears significant implications for the deployment of MLLMs in applications requiring high-resolution image comprehension, such as autonomous driving, medical imaging, and augmented reality. By maintaining computational efficiency while benefiting from high-resolution image data, MRA broadens the scope of MLLM utility in practical, resource-constrained environments.

Future Directions

Given the promising outcomes associated with MRA, future research may explore further optimization of resolution pathways or integration with more complex visual recognition models to address evolving, high-dimensional visual tasks. Moreover, expanding upon the dual-pathway framework to incorporate additional modalities could amplify the adaptability and robustness of multimodal models in diverse application scenarios.

This paper contributes a compelling advancement in the methodological toolkit for MLLMs, balancing strong performance on resolution-intensive tasks with model efficiency. The released source code facilitates reproducibility and paves the way for further exploration and innovation in vision-language modeling.

References (44)
  1. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198, 2022.
  2. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966, 2023.
  3. Shikra: Unleashing multimodal llm’s referential dialogue magic. arXiv preprint arXiv:2306.15195, 2023a.
  4. Big self-supervised models are strong semi-supervised learners. Advances in neural information processing systems (NeurIPS), 33:22243–22255, 2020.
  5. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022.
  6. Pali-3 vision language models: Smaller, faster, stronger. arXiv preprint arXiv:2310.09199, 2023b.
  7. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
  8. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
  9. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023.
  10. Fuyu-8B. https://www.adept.ai/blog/fuyu-8b, 2023.
  11. Chatgpt outperforms crowd-workers for text-annotation tasks. arXiv preprint arXiv:2303.15056, 2023.
  12. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.  6904–6913, 2017.
  13. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.  3608–3617, 2018.
  14. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019.
  15. OpenCLIP. July 2021. doi: 10.5281/zenodo.5143773. URL https://doi.org/10.5281/zenodo.5143773.
  16. In defense of grid features for visual question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.  10267–10276, 2020.
  17. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  18. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125, 2023a.
  19. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023b.
  20. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023c.
  21. Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744, 2023a.
  22. Visual instruction tuning. In NeurIPS, 2023b.
  23. Llava-plus: Learning to use tools for creating multimodal agents, 2023c.
  24. A convnet for the 2020s. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.  11976–11986, 2022.
  25. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265, 2019.
  26. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 2022.
  27. Cheap and quick: Efficient vision-language instruction tuning for large language models. Advances in neural information processing systems (NeurIPS), 2023a.
  28. A survivor in the era of large-scale pretraining: An empirical study of one-stage referring expression comprehension. IEEE Transactions on Multimedia, 2023b.
  29. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  30. How parallel are the primate visual pathways? Annual review of neuroscience, 16(1):369–402, 1993.
  31. Ocr-vqa: Visual question answering by reading text in images. In 2019 international conference on document analysis and recognition (ICDAR), pp.  947–952. IEEE, 2019.
  32. OpenAI. Gpt-4v(ision) system card. https://cdn.openai.com/papers/GPTV_System_Card.pdf, 2023.
  33. Kosmos-2: Grounding multimodal large language models to the world. arXiv preprint arXiv:2306.14824, 2023.
  34. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.
  35. Grounded sam: Assembling open-world models for diverse visual tasks, 2024.
  36. Neuropsychological contributions to theories of part/whole organization. Cognitive psychology, 23(2):299–330, 1991.
  37. Visual chain of thought: Bridging logical gaps with multimodal infillings. arXiv preprint arXiv:2305.02317, 2023.
  38. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.  8317–8326, 2019.
  39. Eyes wide shut? exploring the visual shortcomings of multimodal llms. arXiv preprint arXiv:2401.06209, 2024.
  40. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
  41. Multimodal transformer with multi-view visual representation for image captioning. IEEE transactions on circuits and systems for video technology, 30(12):4467–4480, 2019.
  42. Mm-vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490, 2023.
  43. Vinvl: Revisiting visual representations in vision-language models. In CVPR, 2021.
  44. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
Authors (6)
  1. Gen Luo (32 papers)
  2. Yiyi Zhou (38 papers)
  3. Yuxin Zhang (91 papers)
  4. Xiawu Zheng (63 papers)
  5. Xiaoshuai Sun (91 papers)
  6. Rongrong Ji (315 papers)
Citations (34)