Unified Generative and Discriminative Training for Multi-modal Large Language Models (2411.00304v1)

Published 1 Nov 2024 in cs.CV and cs.MM

Abstract: In recent times, Vision-Language Models (VLMs) have been trained under two predominant paradigms. Generative training has enabled Multimodal Large Language Models (MLLMs) to tackle various complex tasks, yet issues such as hallucinations and weak object discrimination persist. Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval, yet struggles with complex scenarios requiring fine-grained semantic differentiation. This paper addresses these challenges by proposing a unified approach that integrates the strengths of both paradigms. Considering interleaved image-text sequences as the general format of input samples, we introduce a structure-induced training strategy that imposes semantic relationships between input samples and the MLLM's hidden state. This approach enhances the MLLM's ability to capture global semantics and distinguish fine-grained semantics. By leveraging dynamic sequence alignment within the Dynamic Time Warping framework and integrating a novel kernel for fine-grained semantic differentiation, our method effectively balances generative and discriminative tasks. Extensive experiments demonstrate the effectiveness of our approach, achieving state-of-the-art results in multiple generative tasks, especially those requiring cognitive and discrimination abilities. Additionally, our method surpasses discriminative benchmarks in interleaved and fine-grained retrieval tasks. By employing a retrieval-augmented generation strategy, our approach further enhances performance in some generative tasks within one model, offering a promising direction for future research in vision-language modeling.

Unified Generative and Discriminative Training for Multi-modal Large Language Models

The paper "Unified Generative and Discriminative Training for Multi-modal Large Language Models" proposes a hybrid approach that improves Vision-Language Models (VLMs) by unifying generative and discriminative training strategies. VLMs traditionally adopt one paradigm or the other. Generative models, typified by Multimodal Large Language Models (MLLMs), excel at complex tasks such as visual question answering and image captioning but suffer from hallucinations and weak object discrimination. Discriminative models, exemplified by CLIP, offer robust zero-shot classification and retrieval yet falter at fine-grained semantic differentiation.
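
To make the contrast between the two paradigms concrete, the sketch below is a minimal illustration (not the paper's implementation) of a CLIP-style contrastive objective alongside a standard autoregressive generative objective; the names `image_emb`, `text_emb`, `logits`, and `target_ids` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Discriminative (CLIP-style) objective: match each image with its paired text
    within the batch. `image_emb` and `text_emb` are (batch, dim) tensors from
    separate encoders (placeholder names, not the paper's code)."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy over image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

def generative_next_token_loss(logits, target_ids, pad_id=0):
    """Generative (MLLM-style) objective: predict the next token of an interleaved
    image-text sequence. `logits` is (batch, seq_len, vocab); `target_ids` is the
    input shifted by one position."""
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        target_ids.reshape(-1),
        ignore_index=pad_id,
    )
```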

This research attempts to bridge the gap between these two paradigms by integrating their strengths. Treating interleaved image-text sequences as the general input format, the authors propose a structure-induced training strategy intended to enhance an MLLM's grasp of global semantics and its ability to distinguish fine-grained ones. By combining dynamic sequence alignment within the Dynamic Time Warping (DTW) framework with a novel kernel for fine-grained semantic differentiation, the paper reports notable improvements in handling interleaved and fine-grained multimodal content.
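
As a rough illustration of the alignment machinery this builds on, the following sketch computes a classic DTW alignment cost between two sequences of hidden-state vectors and turns it into a simple similarity; it shows the general DTW idea only, not the authors' structure-induced strategy or their specific kernel, and the function and variable names are assumptions.

```python
import numpy as np

def cosine_distance(a, b):
    """Distance between two hidden-state vectors (assumed inputs)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def dtw_cost(seq_a, seq_b):
    """Classic DTW dynamic program: align two variable-length sequences of
    embeddings (e.g. per-segment hidden states of an interleaved image-text
    sample) and return the minimal cumulative alignment cost."""
    n, m = len(seq_a), len(seq_b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = cosine_distance(seq_a[i - 1], seq_b[j - 1])
            # At each cell, allow a match, an insertion, or a deletion.
            acc[i, j] = d + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

def dtw_similarity(seq_a, seq_b, gamma=1.0):
    """One simple way to turn an alignment cost into a kernel-like similarity;
    the paper's kernel for fine-grained differentiation is more refined."""
    return float(np.exp(-dtw_cost(seq_a, seq_b) / gamma))
```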

Both theoretical analysis and empirical results support the proposed dual capability, with state-of-the-art performance reported across generative and discriminative benchmarks. Extensive experiments show notable gains on complex multimodal generative tasks and on nuanced retrieval tasks, evidencing the practical value of the unified model's cognitive and discriminative abilities.

The research further explores the benefits of retrieval-augmented generation within the MLLM framework, circumventing the need for dedicated retrieval modules and improving performance on several generative tasks. This suggests a cohesive path forward for vision-language modeling that harmonizes generation and discrimination within a single framework.
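
The sketch below illustrates, under stated assumptions, how retrieval-augmented generation can reuse one model for both retrieval and generation: `encode` and `generate` are hypothetical callables standing in for the unified model, and the prompt format is an assumption rather than the paper's interface.

```python
import numpy as np

def retrieve_then_generate(query, corpus, encode, generate, top_k=3):
    """Minimal retrieval-augmented generation loop with a single model.
    `encode(x)` is assumed to return an L2-normalized embedding (e.g. pooled
    from the MLLM's hidden states); `generate(prompt)` returns an answer string."""
    corpus_emb = np.stack([encode(doc) for doc in corpus])  # (N, dim)
    query_emb = encode(query)                               # (dim,)
    scores = corpus_emb @ query_emb                         # cosine similarity (normalized inputs)
    best = np.argsort(-scores)[:top_k]
    context = "\n".join(corpus[i] for i in best)
    # Feed the retrieved context back into the same model for the final answer.
    return generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```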

Future research building on this work might explore further refinements in balancing the generative and discriminative paradigms, improved sequence alignment techniques, and optimized kernel functions for richer semantic modeling. Given the documented improvements and the capacity to offset prior limitations, this hybrid generative-discriminative strategy presents a promising avenue for future VLM development.

Authors (10)
  1. Wei Chow (11 papers)
  2. Juncheng Li (121 papers)
  3. Qifan Yu (14 papers)
  4. Kaihang Pan (17 papers)
  5. Hao Fei (105 papers)
  6. Zhiqi Ge (5 papers)
  7. Shuai Yang (140 papers)
  8. Siliang Tang (116 papers)
  9. Hanwang Zhang (161 papers)
  10. Qianru Sun (65 papers)