On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning (2212.08061v2)

Published 15 Dec 2022 in cs.CL

Abstract: Generating a Chain of Thought (CoT) has been shown to consistently improve LLM performance on a wide range of NLP tasks. However, prior work has mainly focused on logical reasoning tasks (e.g. arithmetic, commonsense QA); it remains unclear whether improvements hold for more diverse types of reasoning, especially in socially situated contexts. Concretely, we perform a controlled evaluation of zero-shot CoT across two socially sensitive domains: harmful questions and stereotype benchmarks. We find that zero-shot CoT reasoning in sensitive domains significantly increases a model's likelihood to produce harmful or undesirable output, with trends holding across different prompt formats and model variants. Furthermore, we show that harmful CoTs increase with model size, but decrease with improved instruction following. Our work suggests that zero-shot CoT should be used with caution on socially important tasks, especially when marginalized groups or sensitive topics are involved.
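For context, "zero-shot CoT" refers to appending a reasoning trigger phrase such as "Let's think step by step." to a prompt (Kojima et al., 2022), which the paper contrasts with direct prompting on sensitive benchmarks. Below is a minimal illustrative sketch of that two-condition prompt setup; the sample question and the query_model stub are hypothetical placeholders, not code from the paper.

```python
# Illustrative sketch of the zero-shot CoT prompt manipulation the paper
# evaluates: the same question is posed directly and with a CoT trigger.
# The trigger phrase is the standard zero-shot CoT prompt from Kojima et
# al. (2022); query_model() is a hypothetical stand-in for an LLM call.

COT_TRIGGER = "Let's think step by step."


def build_prompts(question: str) -> dict[str, str]:
    """Return the two prompt conditions compared in the evaluation:
    a direct (standard) prompt and a zero-shot CoT prompt."""
    return {
        "standard": f"Q: {question}\nA:",
        "zero_shot_cot": f"Q: {question}\nA: {COT_TRIGGER}",
    }


def query_model(prompt: str) -> str:
    """Hypothetical placeholder for an LLM API call; swap in a real
    client (e.g. a Hugging Face pipeline) to run the comparison."""
    raise NotImplementedError


if __name__ == "__main__":
    # Hypothetical example of a socially sensitive benchmark question.
    prompts = build_prompts(
        "Is it acceptable to reject a job candidate because of their accent?"
    )
    for condition, prompt in prompts.items():
        print(f"--- {condition} ---\n{prompt}\n")
```

In the paper's setup, model outputs from the two conditions are then scored for harmful or stereotyping content, which is how the reported gap between standard and CoT prompting is measured.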

Authors (5)
  1. Omar Shaikh (23 papers)
  2. Hongxin Zhang (47 papers)
  3. William Held (17 papers)
  4. Michael Bernstein (23 papers)
  5. Diyi Yang (151 papers)
Citations (151)