
KOBEST: Korean Balanced Evaluation of Significant Tasks (2204.04541v1)

Published 9 Apr 2022 in cs.CL

Abstract: A well-formulated benchmark plays a critical role in spurring advancements in the NLP field, as it allows objective and precise evaluation of diverse models. As modern language models (LMs) have become more elaborate and sophisticated, more difficult benchmarks that require linguistic knowledge and reasoning have been proposed. However, most of these benchmarks only support English, and great effort is necessary to construct benchmarks for other low-resource languages. To this end, we propose a new benchmark named Korean balanced evaluation of significant tasks (KoBEST), which consists of five Korean-language downstream tasks. Professional Korean linguists designed the tasks, which require advanced Korean linguistic knowledge. Moreover, our data is purely annotated by humans and thoroughly reviewed to guarantee high data quality. We also provide baseline models and human performance results. Our dataset is available on the Hugging Face Hub.
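
Since the abstract notes that the dataset is distributed via the Hugging Face Hub, below is a minimal sketch of loading one of the five KoBEST tasks with the `datasets` library. The repository id (`skt/kobest_v1`) and the config name (`copa`) are assumptions for illustration; the actual identifiers should be verified on the Hub.

```python
# Hypothetical example: loading one KoBEST task via the Hugging Face `datasets` library.
# The repository id and config name below are assumptions; check the Hub for the real ones.
from datasets import load_dataset

# Other configs are expected to correspond to the remaining tasks
# (e.g. boolq, wic, hellaswag, sentineg), but names may differ.
copa = load_dataset("skt/kobest_v1", "copa")

print(copa)             # show available splits and their sizes
print(copa["train"][0]) # inspect a single training example
```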

Authors (4)
  1. Dohyeong Kim (62 papers)
  2. Myeongjun Jang (9 papers)
  3. Deuk Sin Kwon (3 papers)
  4. Eric Davis (6 papers)
Citations (16)