
LLM-augmented Preference Learning from Natural Language (2310.08523v1)

Published 12 Oct 2023 in cs.CL

Abstract: Finding preferences expressed in natural language is an important but challenging task. State-of-the-art (SotA) methods leverage transformer-based models such as BERT and RoBERTa, and graph neural architectures such as graph attention networks. Since LLMs handle larger context lengths and have much larger model sizes than these transformer-based models, we investigate their ability to classify comparative text directly. This work aims to serve as a first step towards using LLMs for the comparative preference classification (CPC) task. We design and conduct a set of experiments that format the classification task as an input prompt for the LLM, together with a methodology for obtaining a fixed-format response that can be evaluated automatically. Comparing performance with existing methods, we find that pre-trained LLMs are able to outperform the previous SotA models with no fine-tuning involved. Our results show that LLMs consistently outperform the SotA when the target text is long, i.e. composed of multiple sentences, and remain comparable to SotA performance on shorter text. We also find that few-shot learning yields better performance than zero-shot learning.
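To make the prompt-and-parse setup described in the abstract concrete, here is a minimal sketch in Python. The prompt wording, the label set (BETTER / WORSE / NONE), the few-shot demonstration, and the `call_llm` stub are all illustrative assumptions, not the authors' actual prompts or labels.

```python
import re
from typing import Callable, Optional

# Hypothetical label set for comparative preference classification (CPC);
# the paper's actual label inventory may differ.
LABELS = ("BETTER", "WORSE", "NONE")

PROMPT_TEMPLATE = """\
Decide whether entity A is preferred over entity B in the text.
Answer with exactly one word: BETTER, WORSE, or NONE.

{examples}Text: {text}
Entity A: {a}
Entity B: {b}
Answer:"""

# Illustrative few-shot demonstrations; zero-shot passes an empty tuple.
FEW_SHOT = [
    ("The X200 camera focuses much faster than the D70.", "X200", "D70", "BETTER"),
]

def build_prompt(text: str, a: str, b: str, shots=()) -> str:
    # Format the classification instance (and any demonstrations) into a prompt.
    examples = "".join(
        f"Text: {t}\nEntity A: {ea}\nEntity B: {eb}\nAnswer: {y}\n\n"
        for t, ea, eb, y in shots
    )
    return PROMPT_TEMPLATE.format(examples=examples, text=text, a=a, b=b)

def parse_label(response: str) -> Optional[str]:
    # Extract the first valid label so the free-text response can be
    # scored automatically against a gold label.
    m = re.search(r"\b(BETTER|WORSE|NONE)\b", response.upper())
    return m.group(1) if m else None

def classify(text: str, a: str, b: str,
             call_llm: Callable[[str], str], shots=()) -> Optional[str]:
    # `call_llm` is a user-supplied function wrapping whatever LLM is used.
    return parse_label(call_llm(build_prompt(text, a, b, shots)))
```

With this structure, evaluation reduces to comparing `parse_label` output against gold labels over a test set, which is what makes the fixed-format response requirement useful; the few-shot condition simply passes demonstrations like `FEW_SHOT` into `build_prompt`.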

Authors (7)
  1. Inwon Kang (7 papers)
  2. Sikai Ruan (3 papers)
  3. Tyler Ho (1 paper)
  4. Jui-Chien Lin (1 paper)
  5. Farhad Mohsin (2 papers)
  6. Oshani Seneviratne (38 papers)
  7. Lirong Xia (78 papers)