The paper "Make Them Spill the Beans! Coercive Knowledge Extraction from (Production) LLMs" examines vulnerabilities in LLMs concerning their alignment with human ethical standards. The authors address a significant security concern: the potential extraction of harmful or unwanted content from LLMs even when they are designed to reject such requests.
The paper's core contribution is a method the authors call "model interrogation." Unlike traditional jail-breaking techniques, which craft specific prompts to manipulate the model's responses, this method exploits access to the model's output logits. When an LLM refuses a toxic request, a harmful response is often still present but ranked below the refusal in those logits. Interrogation works by strategically selecting lower-ranked tokens at key points during auto-regressive generation, coercing the model into surfacing the concealed harmful output, as the sketch below illustrates.
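To make the mechanism concrete, here is a minimal sketch of the lower-ranked-token idea using the Hugging Face transformers library. This is not the paper's actual algorithm: the model choice ("gpt2"), the single fixed intervention step (FORCE_STEP), the forced rank (FORCED_RANK), and plain greedy decoding elsewhere are all illustrative assumptions.

```python
# Sketch: override the top-ranked next token at one decoding step to push
# generation onto a lower-ranked branch. Illustrative only; the paper's
# method chooses where to branch and which rank to take more systematically.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # assumption: any causal LM works for this sketch
FORCE_STEP = 0        # assumption: intervene at the first generated token
FORCED_RANK = 2       # assumption: take the 3rd-ranked token at that step

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

prompt = "Tell me how to ..."  # stand-in for a request the model refuses
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for step in range(30):
        # Logits over the vocabulary for the next token position.
        logits = model(input_ids).logits[0, -1]
        ranked = torch.argsort(logits, descending=True)
        # Coerce a lower-ranked token at the chosen step; decode
        # greedily (rank 0) everywhere else.
        rank = FORCED_RANK if step == FORCE_STEP else 0
        next_token = ranked[rank].reshape(1, 1)
        input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```

The sketch only demonstrates the core primitive, overriding the top-ranked token at a decision point; the paper builds a full interrogation procedure on top of this kind of intervention.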
The authors report that interrogation is both more effective and more efficient than jail-breaking: it achieves a 92% success rate in extracting toxic content, versus 62% for jail-breaking methods, and runs 10 to 20 times faster. The extracted harmful content is also reported to be more relevant and coherent, which compounds the threat to LLM alignment.
Moreover, the paper indicates that the method is not only effective on its own but can also be combined with existing jail-breaking approaches to further improve extraction performance. An intriguing finding is that even LLMs developed specifically for coding applications are not immune to this kind of coercive extraction.
Overall, the paper underscores a critical vulnerability in LLMs that demands attention, especially in applications that depend on the ethical alignment of these models. The findings argue for stronger security measures and more careful handling of LLM outputs, including access to raw logits, to mitigate the risks of coercive content extraction.