Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models (2311.08472v1)

Published 14 Nov 2023 in cs.CL

Abstract: Recently, work in NLP has shifted to few-shot (in-context) learning, with LLMs performing well across a range of tasks. However, while fairness evaluations have become standard for supervised methods, little is known about the fairness of LLMs as prediction systems. Further, standard fairness methods require access to model weights or are applied during finetuning, neither of which is applicable in few-shot learning. Do LLMs exhibit prediction biases when used for standard NLP tasks? In this work, we explore the effect of shots, which directly affect model performance, on the fairness of LLMs as NLP classification systems. We consider how different shot selection strategies, both existing methods and new demographically sensitive ones, affect model fairness across three standard fairness datasets. We discuss how future work can incorporate LLM fairness evaluations.
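The abstract mentions demographically sensitive shot selection without detailing it. One plausible baseline is to stratify the in-context shots across demographic groups so that no group dominates the prompt. The sketch below is an illustrative assumption, not the paper's actual method; the `pool` schema (`text`, `label`, `group` keys) is hypothetical.

```python
import random
from collections import defaultdict

def stratified_shot_selection(pool, k, seed=0):
    """Select k in-context shots, round-robin across demographic groups.

    `pool`: list of dicts with keys 'text', 'label', and 'group'
    (an assumed schema for illustration). Returns up to k examples,
    balanced across groups as evenly as the pool allows.
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for ex in pool:
        by_group[ex["group"]].append(ex)
    groups = sorted(by_group)  # fixed order for reproducibility
    shots = []
    i = 0
    while len(shots) < k and any(by_group.values()):
        g = groups[i % len(groups)]
        if by_group[g]:
            # Draw one example from this group at random, without replacement.
            shots.append(by_group[g].pop(rng.randrange(len(by_group[g]))))
        i += 1
    return shots
```

A selector like this would slot in where existing strategies (e.g. random or similarity-based selection) choose shots, changing only which examples reach the prompt, not the model itself.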

Authors (4)
  1. Carlos Aguirre
  2. Kuleen Sasse
  3. Isabel Cachola
  4. Mark Dredze