
Reducing Non-Normative Text Generation from Language Models (2001.08764v2)

Published 23 Jan 2020 in cs.CL

Abstract: Large-scale, transformer-based LLMs such as GPT-2 are pretrained on diverse corpora scraped from the internet. Consequently, they are prone to generating non-normative text (i.e., text in violation of social norms). We introduce a technique for fine-tuning GPT-2, using a policy gradient reinforcement learning technique and a normative text classifier to produce reward and punishment values. We evaluate our technique on five data sets using automated and human participant experiments. The normative text classifier is 81-90% accurate when compared to gold-standard human judgments of normative and non-normative generated text. Our normative fine-tuning technique is able to reduce non-normative text by 27-61%, depending on the data set.
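The fine-tuning loop the abstract describes can be sketched as a REINFORCE-style policy gradient update: sample a continuation from GPT-2, score it with a normative text classifier, and scale the log-likelihood of the sampled tokens by that reward. The sketch below is a minimal illustration, assuming the Hugging Face transformers API; the `normative_score` stub stands in for the paper's trained classifier and is a hypothetical placeholder, not the authors' code.

```python
# Minimal REINFORCE-style sketch of classifier-rewarded fine-tuning of GPT-2.
# `normative_score` is a placeholder; swap in a trained normative classifier.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def normative_score(text: str) -> float:
    """Stand-in for the normative classifier: returns P(normative) in [0, 1]."""
    return 0.5  # stub; replace with a real classifier

def reinforce_step(prompt: str, max_new_tokens: int = 40):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    prompt_len = input_ids.shape[1]

    # Sample a continuation from the current policy (the language model).
    generated = model.generate(
        input_ids, do_sample=True, max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    continuation = generated[0, prompt_len:]
    text = tokenizer.decode(continuation, skip_special_tokens=True)

    # Map the classifier score to a reward in [-1, 1]:
    # positive for normative text, negative (punishment) otherwise.
    reward = 2.0 * normative_score(text) - 1.0

    # Recompute log-probs of the sampled tokens with gradients enabled
    # (generate() runs without gradients). Token at position i is predicted
    # by the logits at position i - 1.
    logits = model(generated).logits[0, prompt_len - 1 : -1]
    log_probs = torch.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(1, continuation.unsqueeze(1)).squeeze(1)

    # REINFORCE: raise the likelihood of rewarded samples, lower punished ones.
    loss = -reward * token_log_probs.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return text, reward
```

In practice one would batch sampled continuations, subtract a reward baseline to reduce gradient variance, and mix in the original language-modeling objective so fluency is not traded away for normativity; the paper's exact training setup may differ from this sketch.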

Authors (4)
  1. Xiangyu Peng (33 papers)
  2. Siyan Li (15 papers)
  3. Spencer Frazier (11 papers)
  4. Mark Riedl (51 papers)
Citations (8)