
Human-Instruction-Free LLM Self-Alignment with Limited Samples (2401.06785v1)

Published 6 Jan 2024 in cs.CL and cs.AI

Abstract: Aligning LLMs with human values is a vital task for LLM practitioners. Current alignment techniques have several limitations: (1) requiring a large amount of annotated data; (2) demanding heavy human involvement; (3) lacking a systematic mechanism to continuously improve. In this work, we study aligning LLMs to a new domain with limited samples (e.g. < 100). We propose an algorithm that can self-align LLMs iteratively without active human involvement. Unlike existing works, our algorithm relies on neither human-crafted instructions nor labeled rewards, significantly reducing human involvement. In addition, our algorithm can self-improve the alignment continuously. The key idea is to first retrieve high-quality samples related to the target domain and use them as In-context Learning examples to generate more samples. Then we use the self-generated samples to finetune the LLM iteratively. We show that our method can unlock the LLMs' self-generalization ability to perform alignment with near-zero human supervision. We test our algorithm on three benchmarks in safety, truthfulness, and instruction-following, and show good performance in alignment, domain adaptability, and scalability.
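The loop described in the abstract (retrieve high-quality in-domain samples, use them as in-context learning examples to self-generate data, then finetune on that data iteratively) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the quality-score field, and the mock `llm_generate`/`llm_finetune` interfaces are all hypothetical stand-ins.

```python
def retrieve_seed_samples(pool, k):
    """Pick the top-k samples by quality as seeds for the target domain.
    (Assumes quality scores are available; the paper's retrieval step may differ.)"""
    return sorted(pool, key=lambda s: s["quality"], reverse=True)[:k]

def build_icl_prompt(examples, query):
    """Format retrieved samples as in-context learning demonstrations."""
    demos = "\n".join(f"Q: {e['prompt']}\nA: {e['response']}" for e in examples)
    return f"{demos}\nQ: {query}\nA:"

def self_align(llm_generate, llm_finetune, seed_pool, queries, k=4, iterations=3):
    """One possible reading of the iterative self-alignment loop:
    generate new samples via ICL, finetune on them, and repeat."""
    model_state = None
    for _ in range(iterations):
        examples = retrieve_seed_samples(seed_pool, k)
        generated = []
        for q in queries:
            prompt = build_icl_prompt(examples, q)
            response = llm_generate(model_state, prompt)
            generated.append({"prompt": q, "response": response, "quality": 1.0})
        # Finetune on the self-generated samples, then grow the pool with them
        model_state = llm_finetune(model_state, generated)
        seed_pool = seed_pool + generated
    return model_state, seed_pool
```

Each iteration both updates the model and enlarges the sample pool, which is one way the "continuous self-improvement" property could emerge with near-zero human supervision.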

Authors (7)
  1. Hongyi Guo (14 papers)
  2. Yuanshun Yao (28 papers)
  3. Wei Shen (181 papers)
  4. Jiaheng Wei (30 papers)
  5. Xiaoying Zhang (32 papers)
  6. Zhaoran Wang (164 papers)
  7. Yang Liu (2253 papers)
Citations (18)