MultiPragEval: Multilingual Pragmatic Evaluation of Large Language Models (2406.07736v3)

Published 11 Jun 2024 in cs.CL

Abstract: As the capabilities of LLMs expand, it becomes increasingly important to evaluate them beyond basic knowledge assessment, focusing on higher-level language understanding. This study introduces MultiPragEval, the first multilingual pragmatic evaluation of LLMs, designed for English, German, Korean, and Chinese. Comprising 1200 question units categorized according to Grice's Cooperative Principle and its four conversational maxims, MultiPragEval enables an in-depth assessment of LLMs' contextual awareness and their ability to infer implied meanings. Our findings demonstrate that Claude3-Opus significantly outperforms other models in all tested languages, establishing a state-of-the-art in the field. Among open-source models, Solar-10.7B and Qwen1.5-14B emerge as strong competitors. By analyzing pragmatic inference, we provide valuable insights into the capabilities essential for advanced language comprehension in AI systems.
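
The abstract describes the benchmark's organization (1200 question units spanning four languages, categorized by Grice's four conversational maxims) but not its data format or scoring procedure. The sketch below is only an illustrative Python representation of how such per-language, per-maxim evaluation might be organized; the `QuestionUnit` fields, the `judge` callback, and the aggregation scheme are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical schema for a MultiPragEval-style question unit.
# Field names are assumptions; the paper's actual dataset format may differ.
MAXIMS = ["quantity", "quality", "relation", "manner"]
LANGUAGES = ["en", "de", "ko", "zh"]

@dataclass
class QuestionUnit:
    language: str   # one of LANGUAGES
    maxim: str      # Gricean maxim the item targets, one of MAXIMS
    context: str    # short dialogue or situation carrying an implicature
    question: str   # asks for the implied (non-literal) meaning
    answer: str     # reference pragmatic interpretation

def score_by_maxim(units, model_answers, judge):
    """Aggregate accuracy per (language, maxim) cell.

    `judge(reference, prediction)` is a placeholder for whatever correctness
    check is used (e.g. exact match or an LLM-as-judge); it returns a bool.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for unit, pred in zip(units, model_answers):
        key = (unit.language, unit.maxim)
        total[key] += 1
        if judge(unit.answer, pred):
            correct[key] += 1
    return {key: correct[key] / total[key] for key in total}
```

Grouping scores by (language, maxim) pairs in this way would support the kind of analysis the abstract reports, e.g. comparing a model's contextual inference across English, German, Korean, and Chinese or across the four maxims.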

Authors (8)
  1. Dojun Park (8 papers)
  2. Jiwoo Lee (12 papers)
  3. Seohyun Park (6 papers)
  4. Hyeyun Jeong (2 papers)
  5. Youngeun Koo (2 papers)
  6. Soonha Hwang (1 paper)
  7. Seonwoo Park (1 paper)
  8. Sungeun Lee (3 papers)