The Importance of Prompt Tuning for Automated Neuron Explanations (2310.06200v2)

Published 9 Oct 2023 in cs.CL and cs.LG

Abstract: Recent advances have greatly increased the capabilities of LLMs, but our understanding of the models and their safety has not progressed as fast. In this paper we aim to understand LLMs more deeply by studying their individual neurons. We build upon previous work showing that LLMs such as GPT-4 can be useful in explaining what each neuron in an LLM does. Specifically, we analyze the effect of the prompt used to generate explanations and show that reformatting the explanation prompt in a more natural way can significantly improve neuron explanation quality and greatly reduce computational cost. We demonstrate the effects of our new prompts in three different ways, incorporating both automated and human evaluations.
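
The core idea, reformatting the neuron-explanation prompt into a more natural form, can be sketched as follows. This is a hypothetical illustration only: the function names, the << >> marker syntax, and the 0.5 activation threshold are assumptions made for demonstration, not the paper's actual templates.

```python
# Hypothetical sketch of the prompt-reformatting idea; not the paper's exact templates.

def list_style_prompt(tokens, activations):
    """Baseline-style prompt: enumerate every (token, activation) pair on its own line."""
    pairs = "\n".join(f"{tok}\t{act:.2f}" for tok, act in zip(tokens, activations))
    return (
        "We're studying a neuron in a language model. Here are tokens and "
        "the neuron's activation on each:\n" + pairs +
        "\nExplain what this neuron activates for."
    )

def natural_style_prompt(tokens, activations, threshold=0.5):
    """Reformatted prompt: present the text as natural prose and mark only the
    strongly activating tokens (assumed marker syntax: <<token>>)."""
    marked = " ".join(
        f"<<{tok}>>" if act > threshold else tok
        for tok, act in zip(tokens, activations)
    )
    return (
        "In the text below, the neuron fires on the tokens wrapped in << >>.\n"
        + marked + "\nExplain what this neuron activates for."
    )

if __name__ == "__main__":
    tokens = ["The", "cat", "sat", "on", "the", "mat", "."]
    activations = [0.02, 0.91, 0.05, 0.01, 0.03, 0.84, 0.00]
    print(list_style_prompt(tokens, activations))
    print()
    print(natural_style_prompt(tokens, activations))
```

Note that the natural-style prompt is considerably shorter than the per-token pair list, which illustrates one way such a reformatting could yield the reduction in computational cost the abstract reports.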
