NLPBench: Evaluating Large Language Models on Solving NLP Problems (2309.15630v4)

Published 27 Sep 2023 in cs.CL

Abstract: Recent developments in LLMs have shown promise in enhancing the capabilities of natural language processing (NLP). Despite these successes, there remains a dearth of research dedicated to the NLP problem-solving abilities of LLMs. To fill this gap, we present a unique benchmarking dataset, NLPBench, comprising 378 college-level NLP questions spanning various NLP topics, sourced from Yale University's prior final exams. NLPBench includes questions with context, in which multiple sub-questions share the same public information, and diverse question types, including multiple choice, short answer, and math. Our evaluation, centered on LLMs such as GPT-3.5/4, PaLM-2, and LLAMA-2, incorporates advanced prompting strategies like chain-of-thought (CoT) and tree-of-thought (ToT). Our study reveals that the effectiveness of these advanced prompting strategies can be inconsistent, occasionally damaging LLM performance, especially in smaller models like LLAMA-2 (13b). Furthermore, our manual assessment illuminated specific shortcomings in LLMs' scientific problem-solving skills, with weaknesses in logical decomposition and reasoning notably affecting results.
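To make the evaluation setup more concrete, the sketch below shows how a zero-shot chain-of-thought (CoT) prompt might be assembled for an NLPBench-style question that pairs shared context with one or more sub-questions. This is an illustrative assumption, not code from the NLPBench release: the `NLPBenchQuestion` fields, `build_cot_prompt`, and the placeholder `query_model` are hypothetical names introduced here for clarity.

```python
# Hypothetical sketch of zero-shot CoT prompting on an NLPBench-style question.
# `query_model` is a stand-in for any LLM API call and is NOT part of NLPBench.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class NLPBenchQuestion:
    context: Optional[str]      # shared "public information", if any
    sub_questions: List[str]    # one or more sub-questions sharing that context
    question_type: str          # "multiple choice", "short answer", or "math"


def build_cot_prompt(q: NLPBenchQuestion) -> str:
    """Assemble a zero-shot CoT prompt: context first, then the sub-questions,
    then the usual 'think step by step' trigger."""
    parts = []
    if q.context:
        parts.append(f"Context:\n{q.context}")
    for i, sub in enumerate(q.sub_questions, start=1):
        parts.append(f"Question {i}: {sub}")
    parts.append("Let's think step by step.")
    return "\n\n".join(parts)


def query_model(prompt: str) -> str:
    """Placeholder for a call to GPT-3.5/4, PaLM-2, LLAMA-2, or another LLM."""
    raise NotImplementedError("plug in your preferred LLM client here")


if __name__ == "__main__":
    example = NLPBenchQuestion(
        context="A bigram language model is trained on a small corpus ...",
        sub_questions=["Compute the perplexity of the sentence 'the cat sat'."],
        question_type="math",
    )
    print(build_cot_prompt(example))
```

Under this sketch, swapping CoT for another strategy (e.g., tree-of-thought) would only change how the prompt is built and how candidate reasoning paths are expanded, leaving the question loading and answer scoring unchanged.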

Authors (6)
  1. Linxin Song (18 papers)
  2. Jieyu Zhang (63 papers)
  3. Lechao Cheng (66 papers)
  4. Pengyuan Zhou (46 papers)
  5. Tianyi Zhou (172 papers)
  6. Irene Li (47 papers)
Citations (9)