Probing via Prompting (2207.01736v1)

Published 4 Jul 2022 in cs.CL

Abstract: Probing is a popular method to discern what linguistic information is contained in the representations of pre-trained language models. However, the mechanism of selecting the probe model has recently been subject to intense debate, as it is not clear if the probes are merely extracting information or modeling the linguistic property themselves. To address this challenge, this paper introduces a novel model-free approach to probing, by formulating probing as a prompting task. We conduct experiments on five probing tasks and show that our approach is comparable or better at extracting information than diagnostic probes while learning much less on its own. We further combine the probing via prompting approach with attention head pruning to analyze where the model stores the linguistic information in its architecture. We then examine the usefulness of a specific linguistic property for pre-training by removing the heads that are essential to that property and evaluating the resulting model's performance on language modeling.
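The core idea, querying a frozen pre-trained model with a prompt so that no separate probe classifier is trained, can be illustrated with a minimal sketch. The prompt template, the label verbalizers, and the choice of GPT-2 below are illustrative assumptions, not the paper's exact setup:

```python
# A minimal sketch of model-free probing via prompting: a frozen LM is
# given a cloze-style prompt and the probe's "prediction" is read off
# the LM's own next-token distribution, so no probe model is trained.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()  # the LM stays frozen; nothing is learned by the probe

def probe_pos(sentence: str, word: str) -> str:
    """Score candidate part-of-speech labels for `word` by comparing the
    LM's next-token logits after a hand-written prompt (hypothetical
    template, not the paper's)."""
    prompt = f'In the sentence "{sentence}", the word "{word}" is a'
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    # Hypothetical single-token verbalizers for each label; multi-token
    # labels are approximated by scoring their first sub-token.
    verbalizers = {"noun": " noun", "verb": " verb", "adjective": " adjective"}
    scores = {
        label: logits[tokenizer.encode(token)[0]].item()
        for label, token in verbalizers.items()
    }
    return max(scores, key=scores.get)

print(probe_pos("The cat sat on the mat", "cat"))  # expected: "noun"
```

Because the probe itself has no trainable parameters, any correct predictions must come from information already encoded in the pre-trained model, which is what makes this formulation attractive compared to diagnostic probes.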

Authors (3)
  1. Jiaoda Li (8 papers)
  2. Ryan Cotterell (226 papers)
  3. Mrinmaya Sachan (124 papers)
Citations (13)