ThoughtSource: A central hub for large language model reasoning data (2301.11596v5)

Published 27 Jan 2023 in cs.CL and cs.AI

Abstract: LLMs such as GPT-4 have recently demonstrated impressive results across a wide range of tasks. LLMs are still limited, however, in that they frequently fail at complex reasoning, their reasoning processes are opaque, they are prone to 'hallucinate' facts, and there are concerns about their underlying biases. Letting models verbalize reasoning steps as natural language, a technique known as chain-of-thought prompting, has recently been proposed as a way to address some of these issues. Here we present ThoughtSource, a meta-dataset and software library for chain-of-thought (CoT) reasoning. The goal of ThoughtSource is to improve future artificial intelligence systems by facilitating qualitative understanding of CoTs, enabling empirical evaluations, and providing training data. This first release of ThoughtSource integrates seven scientific/medical, three general-domain and five math word question answering datasets.
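
Since the abstract describes ThoughtSource as both a meta-dataset and a software library that unifies several question-answering datasets, a brief usage sketch may help illustrate the idea. Note that the package and class names (`cot`, `Collection`) and the dataset keys below are assumptions for illustration, not a confirmed API.

```python
# Hypothetical sketch: loading chain-of-thought QA data through a
# ThoughtSource-style collection interface. Names are assumed, not verified.
from cot import Collection  # assumed package/class names

# Load a couple of the integrated QA datasets into one unified schema
# (dataset keys are illustrative).
collection = Collection(["gsm8k", "med_qa"])

# Print a summary of the loaded datasets, splits, and sample counts.
print(collection)
```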

Authors (9)
  1. Simon Ott (12 papers)
  2. Konstantin Hebenstreit (4 papers)
  3. Valentin Liévin (8 papers)
  4. Christoffer Egeberg Hother (2 papers)
  5. Milad Moradi (23 papers)
  6. Maximilian Mayrhauser (1 paper)
  7. Robert Praas (3 papers)
  8. Ole Winther (66 papers)
  9. Matthias Samwald (36 papers)
Citations (35)