
How Far Can In-Context Alignment Go? Exploring the State of In-Context Alignment (2406.11474v1)

Published 17 Jun 2024 in cs.CL and cs.AI

Abstract: Recent studies have demonstrated that In-Context Learning (ICL), through the use of specific demonstrations, can align LLMs with human preferences, a capability known as In-Context Alignment (ICA); this indicates that models can comprehend human instructions without requiring parameter adjustments. However, exploration of the mechanism and applicability of ICA remains limited. In this paper, we begin by dividing the context text used in ICA into three categories: format, system prompt, and example. Through ablation experiments, we investigate how each part contributes to ICA's effectiveness. We then examine how variations in these parts impact the model's alignment performance. Our findings indicate that the example part is crucial for enhancing the model's alignment capabilities, with changes in examples significantly affecting alignment performance. We also conduct a comprehensive evaluation of ICA's zero-shot capabilities on various alignment tasks. The results indicate that, compared to parameter fine-tuning methods, ICA demonstrates superior performance on knowledge-based tasks and tool-use tasks. However, it still exhibits certain limitations in areas such as multi-turn dialogue and instruction following.
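The abstract's three-part decomposition of ICA context can be illustrated with a minimal sketch. The template strings, role markers, and demonstration pairs below are hypothetical placeholders, not taken from the paper; the sketch only shows how a format, a system prompt, and examples might be concatenated into one alignment prompt.

```python
# Illustrative sketch (not the paper's code): assembling the three context
# parts the abstract names -- format, system prompt, and examples -- into a
# single In-Context Alignment (ICA) prompt. All templates and demonstration
# pairs here are hypothetical.

def build_ica_prompt(system_prompt, examples, query,
                     turn_format="### Human: {q}\n### Assistant: {a}"):
    """Concatenate the system prompt, formatted demonstrations, and the query."""
    # "example" part: each demonstration rendered with the "format" part.
    demo_block = "\n\n".join(
        turn_format.format(q=q, a=a) for q, a in examples
    )
    # The final turn leaves the assistant slot empty for the model to fill.
    final_turn = turn_format.format(q=query, a="").rstrip()
    return f"{system_prompt}\n\n{demo_block}\n\n{final_turn}"

prompt = build_ica_prompt(
    system_prompt="You are a helpful, honest, and harmless assistant.",
    examples=[
        ("What is 2 + 2?", "2 + 2 equals 4."),
        ("Name a primary color.", "Red is a primary color."),
    ],
    query="What is the capital of France?",
)
print(prompt)
```

Under this framing, the paper's ablations correspond to varying one argument at a time (dropping the system prompt, changing the turn format, or swapping the example pairs) and measuring the effect on alignment performance.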

Authors (5)
  1. Heyan Huang (107 papers)
  2. Yinghao Li (27 papers)
  3. Huashan Sun (7 papers)
  4. Yu Bai (136 papers)
  5. Yang Gao (761 papers)
Citations (1)