
Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models (2305.17311v1)

Published 27 May 2023 in cs.CL, cs.AI, and cs.LG

Abstract: LLMs have been shown to exhibit positive scaling, where performance improves as models are scaled up in terms of size, compute, or data. In this work, we introduce NeQA, a dataset consisting of questions with negation in which LLMs do not exhibit straightforward positive scaling. We show that this task can exhibit inverse scaling, U-shaped scaling, or positive scaling, and the three scaling trends shift in this order as we use more powerful prompting methods or model families. We hypothesize that solving NeQA depends on two subtasks: question answering (task 1) and negation understanding (task 2). We find that task 1 has linear scaling, while task 2 has sigmoid-shaped scaling with an emergent transition point, and composing these two scaling trends yields the final scaling trend of NeQA. Our work reveals and provides a way to analyze the complex scaling trends of LLMs.
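The abstract's composition argument can be made concrete with a small numerical sketch. The following is an illustrative simulation, not the paper's actual model: the linear slope, sigmoid midpoint, and the binary-question composition rule are all assumptions chosen to reproduce the qualitative U-shaped trend the abstract describes.

```python
import numpy as np

# Model "scale" on a log axis (e.g., log10 of parameter count); values are illustrative.
log_scale = np.linspace(0, 4, 9)

# Task 1 (question answering): accuracy assumed to grow linearly with log scale,
# from chance (0.5) toward near-perfect; clipped to stay a valid probability.
p_qa = np.clip(0.5 + 0.12 * log_scale, 0.0, 1.0)

# Task 2 (negation understanding): sigmoid with an emergent transition point
# (here at log_scale = 3); the slope and midpoint are assumptions, not fitted values.
p_neg = 1.0 / (1.0 + np.exp(-4.0 * (log_scale - 3.0)))

# Composition for binary questions: a model that understands the negation answers
# the negated question as well as it answers the base question (p_qa); a model
# that ignores the negation answers the *base* question instead, which is wrong
# for the negated one, so it scores roughly 1 - p_qa.
p_neqa = p_neg * p_qa + (1.0 - p_neg) * (1.0 - p_qa)

for s, acc in zip(log_scale, p_neqa):
    print(f"log-scale {s:4.1f}: NeQA accuracy ~ {acc:.2f}")
```

Under these assumptions, accuracy starts at chance, drops below it as task 1 improves while negation understanding is still absent (inverse scaling), then recovers once the sigmoid transition kicks in, yielding the U-shaped curve; shifting the transition point earlier (as stronger prompting or model families might) turns the same composition into near-monotone positive scaling.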

Authors (7)
  1. Yuhui Zhang (52 papers)
  2. Michihiro Yasunaga (48 papers)
  3. Zhengping Zhou (6 papers)
  4. Jeff Z. HaoChen (12 papers)
  5. James Zou (232 papers)
  6. Percy Liang (239 papers)
  7. Serena Yeung (39 papers)
Citations (5)