
A Survey on Human Preference Learning for Large Language Models (2406.11191v2)

Published 17 Jun 2024 in cs.CL

Abstract: The recent surge of versatile LLMs depends largely on aligning increasingly capable foundation models with human intentions through preference learning, which gives LLMs broad applicability and effectiveness across a wide range of contexts. Despite the many related studies, a perspective on how human preferences are introduced into LLMs remains limited, which may hinder a deeper understanding of the relationship between human preferences and LLMs as well as a clear view of their limitations. In this survey, we review progress in human preference learning for LLMs from a preference-centered perspective, covering the sources and formats of preference feedback, the modeling and usage of preference signals, and the evaluation of aligned LLMs. We first categorize human feedback by data source and format. We then summarize techniques for modeling human preferences and compare the advantages and disadvantages of different schools of models. Moreover, we present preference usage methods sorted by the objectives for which human preference signals are utilized. Finally, we summarize prevailing approaches to evaluating how well LLMs align with human intentions and discuss our outlook on human intention alignment for LLMs.
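The abstract highlights the modeling of preference signals as a central step between collecting feedback and aligning the model. As a rough orientation only (not the paper's own formulation), pairwise preference feedback is commonly modeled with a Bradley-Terry-style reward objective; the sketch below is a minimal, assumed implementation, and all names and shapes are illustrative.

```python
# Illustrative sketch (not from the paper): a minimal Bradley-Terry-style
# reward-model loss, a standard way pairwise preference signals are modeled
# before being used to align an LLM. Names and tensor shapes are assumptions.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry negative log-likelihood for pairwise preferences.

    reward_chosen / reward_rejected: scalar rewards per example, shape (batch,).
    The loss pushes the reward of the preferred response above the rejected one.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Usage on dummy scores (in practice these come from a reward head on an LLM):
chosen = torch.tensor([1.2, 0.3, 0.8])
rejected = torch.tensor([0.4, 0.5, -0.1])
print(preference_loss(chosen, rejected).item())
```

A reward model trained with such an objective is one of the "schools of models" the survey compares against alternatives that use preference signals more directly.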

Authors (9)
  1. Ruili Jiang (2 papers)
  2. Kehai Chen (59 papers)
  3. Xuefeng Bai (34 papers)
  4. Zhixuan He (2 papers)
  5. Juntao Li (89 papers)
  6. Muyun Yang (21 papers)
  7. Tiejun Zhao (70 papers)
  8. Liqiang Nie (191 papers)
  9. Min Zhang (630 papers)
Citations (3)