PaCE: Parsimonious Concept Engineering for Large Language Models (2406.04331v2)

Published 6 Jun 2024 in cs.CL, cs.AI, cs.IR, and cs.LG

Abstract: LLMs are being used for a wide variety of tasks. While they are capable of generating human-like responses, they can also produce undesirable output including potentially harmful information, racist or sexist language, and hallucinations. Alignment methods are designed to reduce such undesirable outputs via techniques such as fine-tuning, prompt engineering, and representation engineering. However, existing methods face several challenges: some require costly fine-tuning for every alignment task; some do not adequately remove undesirable concepts, failing alignment; some remove benign concepts, lowering the linguistic capabilities of LLMs. To address these issues, we propose Parsimonious Concept Engineering (PaCE), a novel activation engineering framework for alignment. First, to sufficiently model the concepts, we construct a large-scale concept dictionary in the activation space, in which each atom corresponds to a semantic concept. Given any alignment task, we instruct a concept partitioner to efficiently annotate the concepts as benign or undesirable. Then, at inference time, we decompose the LLM activations along the concept dictionary via sparse coding, to accurately represent the activations as linear combinations of benign and undesirable components. By removing the latter ones from the activations, we reorient the behavior of the LLM towards the alignment goal. We conduct experiments on tasks such as response detoxification, faithfulness enhancement, and sentiment revising, and show that PaCE achieves state-of-the-art alignment performance while maintaining linguistic capabilities.
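To make the decomposition step concrete, below is a minimal sketch of the activation editing described in the abstract: sparse coding of an activation over a concept dictionary, followed by removal of the undesirable components. The dictionary, the benign/undesirable labels, and the Lasso-based sparse coder are illustrative stand-ins, not the paper's exact data or solver.

```python
# Sketch of PaCE-style activation editing (illustrative, not the paper's implementation).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

d, n = 64, 200                      # activation dim, number of concept atoms
D = rng.normal(size=(d, n))         # concept dictionary: one column per semantic concept
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
undesirable = rng.random(n) < 0.1   # labels from a concept partitioner (assumed given)

h = rng.normal(size=d)              # stand-in for an LLM activation at some layer

# Sparse coding: represent h as a sparse linear combination of dictionary atoms.
coder = Lasso(alpha=0.05, fit_intercept=False, max_iter=5000)
coder.fit(D, h)
c = coder.coef_                     # sparse coefficients, one per concept

# Remove the undesirable components, keeping the benign part of the activation.
h_undesirable = D[:, undesirable] @ c[undesirable]
h_edited = h - h_undesirable        # reoriented activation passed back into the model

print("nonzero coefficients:", np.count_nonzero(c))
print("norm removed:", np.linalg.norm(h_undesirable))
```

The key design point the abstract emphasizes is that only the components attributed to undesirable concepts are subtracted, so benign directions in the activation are preserved and linguistic capability is retained.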

Authors (7)
  1. Jinqi Luo (13 papers)
  2. Tianjiao Ding (9 papers)
  3. Kwan Ho Ryan Chan (15 papers)
  4. Darshan Thaker (7 papers)
  5. Aditya Chattopadhyay (8 papers)
  6. Chris Callison-Burch (102 papers)
  7. René Vidal (154 papers)
Citations (5)