Effectiveness Assessment of Recent Large Vision-Language Models (2403.04306v5)

Published 7 Mar 2024 in cs.CV, cs.AI, and cs.LG

Abstract: The advent of large vision-language models (LVLMs) represents a remarkable advance in the quest for artificial general intelligence. However, these models' effectiveness in both specialized and general tasks warrants further investigation. This paper endeavors to evaluate the competency of popular LVLMs in specialized and general tasks, respectively, aiming to offer a comprehensive understanding of these novel models. To gauge their effectiveness in specialized tasks, we employ six challenging tasks in three different application scenarios: natural, healthcare, and industrial. These six tasks include salient/camouflaged/transparent object detection, as well as polyp detection, skin lesion detection, and industrial anomaly detection. We examine the performance of three recent open-source LVLMs, namely MiniGPT-v2, LLaVA-1.5, and Shikra, on both visual recognition and localization in these tasks. Moreover, we conduct empirical investigations utilizing the aforementioned LVLMs together with GPT-4V, assessing their multi-modal understanding capabilities in general tasks including object counting, absurd question answering, affordance reasoning, attribute recognition, and spatial relation reasoning. Our investigations reveal that these LVLMs demonstrate limited proficiency not only in specialized tasks but also in general tasks. We delve deep into this inadequacy and uncover several potential factors, including limited cognition in specialized tasks, object hallucination, text-to-image interference, and decreased robustness in complex problems. We hope that this study can provide useful insights for the future development of LVLMs, helping researchers improve LVLMs for both general and specialized applications.
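
The abstract does not spell out how the localization answers are scored, but evaluations of this kind typically parse a bounding box from the model's free-form text reply and compare it against the ground-truth box with an IoU threshold. The sketch below (Python) is a minimal illustration of that idea; the reply format, the 0.5 threshold, and the helper names (parse_bbox, iou) are illustrative assumptions rather than the paper's actual protocol.

    import re

    def parse_bbox(response):
        # Pull the first "x1, y1, x2, y2" pattern out of the model's text reply.
        # Returns None when the reply contains no box (e.g. a refusal or prose-only answer).
        m = re.search(r"(\d+\.?\d*)\s*,\s*(\d+\.?\d*)\s*,\s*(\d+\.?\d*)\s*,\s*(\d+\.?\d*)", response)
        return tuple(float(v) for v in m.groups()) if m else None

    def iou(a, b):
        # Intersection-over-union of two (x1, y1, x2, y2) boxes.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union > 0 else 0.0

    # Hypothetical model reply and ground-truth box for one camouflaged-object image.
    reply = "The camouflaged animal is located at [120, 45, 310, 260]."
    gt_box = (115.0, 50.0, 300.0, 255.0)

    pred = parse_bbox(reply)
    correct = pred is not None and iou(pred, gt_box) >= 0.5  # a common detection threshold
    print(pred, round(iou(pred, gt_box), 2), correct)

A scoring loop like this makes the failure modes discussed in the paper concrete: a hallucinated object yields a box with near-zero IoU, while a refusal or purely textual answer yields no parsable box at all.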

Authors (8)
  1. Yao Jiang (17 papers)
  2. Xinyu Yan (5 papers)
  3. Ge-Peng Ji (29 papers)
  4. Keren Fu (22 papers)
  5. Meijun Sun (4 papers)
  6. Huan Xiong (42 papers)
  7. Deng-Ping Fan (88 papers)
  8. Fahad Shahbaz Khan (225 papers)
Citations (9)