
Identifying Multi-modal Knowledge Neurons in Pretrained Transformers via Two-stage Filtering (2503.22941v1)

Published 29 Mar 2025 in cs.AI, cs.LG, and cs.MM

Abstract: Recent advances in LLMs have led to the development of multimodal LLMs (MLLMs) in the fields of NLP and computer vision. Although these models allow for integrated visual and language understanding, they present challenges such as opaque internal processing and the generation of hallucinations and misinformation. There is therefore a need for a method that clarifies where knowledge is located in MLLMs. In this study, we propose a method to identify neurons associated with specific knowledge using MiniGPT-4, a Transformer-based MLLM. Specifically, we extract knowledge neurons through two stages: activation-difference filtering using inpainting, followed by gradient-based filtering using GradCAM. In experiments on the image caption generation task with the MS COCO 2017 dataset, quantitative evaluation using BLEU, ROUGE, and BERTScore, together with qualitative evaluation using activation heatmaps, showed that our method locates knowledge more accurately than existing methods. This study contributes to the visualization and explainability of knowledge in MLLMs and shows the potential for future knowledge editing and control.
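To make the two-stage idea concrete, the sketch below illustrates the general pattern the abstract describes: first keep neurons whose activations change most between the original image and an inpainted image with the target object removed, then narrow that set using a gradient-based (GradCAM-style) score. This is a minimal illustration, not the authors' implementation; the function name, tensor shapes, and the `act_top_k`/`grad_top_k` thresholds are hypothetical placeholders.

```python
# Hypothetical sketch of two-stage knowledge-neuron filtering.
# Stage 1: activation-difference filtering (original vs. inpainted image).
# Stage 2: gradient-based filtering restricted to stage-1 survivors.
import torch

def select_knowledge_neurons(act_original, act_inpainted, grads,
                             act_top_k=100, grad_top_k=20):
    """act_original, act_inpainted, grads: 1-D tensors of per-neuron values
    taken from one MLP layer of the model (placeholder inputs)."""
    # Stage 1: neurons whose activation changes most when the object is inpainted out.
    act_diff = (act_original - act_inpainted).abs()
    stage1 = torch.topk(act_diff, act_top_k).indices

    # Stage 2: among stage-1 survivors, keep those with the largest
    # gradient magnitude with respect to the target token.
    stage2_scores = grads.abs()[stage1]
    stage2 = stage1[torch.topk(stage2_scores, grad_top_k).indices]
    return stage2

# Toy usage with random tensors standing in for real activations and gradients.
torch.manual_seed(0)
n_neurons = 4096
selected = select_knowledge_neurons(torch.randn(n_neurons),
                                    torch.randn(n_neurons),
                                    torch.randn(n_neurons))
print(selected[:10])
```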

Authors (2)
  1. Yugen Sato (2 papers)
  2. Tomohiro Takagi (8 papers)