Finding and Editing Multi-Modal Neurons in Pre-Trained Transformers (2311.07470v2)

Published 13 Nov 2023 in cs.CL

Abstract: Understanding the internal mechanisms by which multi-modal LLMs interpret different modalities and integrate cross-modal representations is becoming increasingly critical for continued progress in both academia and industry. In this paper, we propose a novel method to identify key neurons for interpretability -- how multi-modal LLMs bridge visual and textual concepts for captioning. Our method improves on prior work in efficiency and applicability by removing the need for costly gradient computation. Based on the identified neurons, we further design a multi-modal knowledge editing method that helps mitigate sensitive words and hallucinations. We provide a theoretical assumption to justify our design, and we conduct extensive quantitative and qualitative experiments for empirical evaluation. The results not only validate the effectiveness of our methods but also offer insightful findings that highlight three key properties of multi-modal neurons: sensitivity, specificity, and causal effect, shedding light on future research.
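As a rough illustration of what gradient-free neuron identification might look like, the sketch below scores FFN neurons by multiplying each neuron's activation with the projection of its output direction onto a target token's logit. The function name, tensor shapes, and scoring rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def neuron_contribution_scores(ffn_activation, ffn_down_proj, unembedding, token_id):
    """
    Hypothetical gradient-free scoring of FFN neurons for one target token.

    ffn_activation: (num_neurons,)  post-activation values of one FFN layer
                    at the token position of interest
    ffn_down_proj:  (num_neurons, hidden_dim)  rows mapping each neuron back
                    into the residual stream
    unembedding:    (hidden_dim, vocab_size)   output embedding matrix
    token_id:       target token whose logit the neurons contribute to
    """
    # Project each neuron's output direction onto the target token's logit.
    neuron_to_logit = ffn_down_proj @ unembedding[:, token_id]   # (num_neurons,)
    # Contribution = activation strength * alignment with the target token.
    return ffn_activation * neuron_to_logit

# Example usage (tensors are placeholders):
# scores = neuron_contribution_scores(act, W_down, W_U, tok)
# candidate_neurons = torch.topk(scores, k=20).indices
```

Because the score is a simple product of quantities already available from a forward pass, no backward pass is required, which is the kind of efficiency gain the abstract alludes to.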

Authors (5)
  1. Haowen Pan (2 papers)
  2. Yixin Cao (138 papers)
  3. Xiaozhi Wang (51 papers)
  4. Xun Yang (76 papers)
  5. Meng Wang (1063 papers)
Citations (17)