
KoBigBird-large: Transformation of Transformer for Korean Language Understanding (2309.10339v1)

Published 19 Sep 2023 in cs.CL

Abstract: This work presents KoBigBird-large, a large-scale Korean BigBird model that achieves state-of-the-art performance and enables long-sequence processing for Korean language understanding. Without further pretraining, we only transform the architecture and extend the positional encoding with our proposed Tapered Absolute Positional Encoding Representations (TAPER). In experiments, KoBigBird-large shows state-of-the-art overall performance on Korean language understanding benchmarks and the best performance on document classification and question answering tasks for longer sequences against competitive baseline models. We publicly release our model here.
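The abstract does not spell out TAPER's exact formulation, only that a pretrained model's absolute positional encoding is extended to longer sequences without further pretraining. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's actual method: it stretches a pretrained learned position-embedding table to a longer context by reusing the pretrained rows and linearly tapering the tail embedding over the new positions. The function name `extend_positional_embeddings` and all shapes are assumptions for illustration.

```python
# Hypothetical sketch only; the paper's real TAPER scheme is not reproduced here.
import torch
import torch.nn as nn

def extend_positional_embeddings(pretrained: torch.Tensor,
                                 new_max_len: int) -> nn.Embedding:
    """Extend a pretrained (old_len, dim) position table to new_max_len rows.

    Positions beyond old_len reuse the last pretrained embedding, scaled by
    a weight that tapers linearly toward zero at the new maximum length.
    """
    old_len, dim = pretrained.shape
    assert new_max_len > old_len, "only extension is illustrated here"

    extended = torch.empty(new_max_len, dim)
    extended[:old_len] = pretrained          # keep pretrained positions as-is
    tail = pretrained[-1]
    for pos in range(old_len, new_max_len):
        # Linear taper: weight starts near 1.0 just past old_len and
        # decays toward 0 at new_max_len.
        weight = 1.0 - (pos - old_len + 1) / (new_max_len - old_len + 1)
        extended[pos] = weight * tail

    emb = nn.Embedding(new_max_len, dim)
    with torch.no_grad():
        emb.weight.copy_(extended)
    return emb

# Usage: stretch a 512-position table (e.g., BERT-style) to 4096 positions.
pretrained = torch.randn(512, 768)           # stand-in for a pretrained table
pos_emb = extend_positional_embeddings(pretrained, 4096)
print(pos_emb.weight.shape)                  # torch.Size([4096, 768])
```

The appeal of any such scheme is that it requires no additional pretraining: the original embeddings are preserved for in-range positions, and only the extrapolated tail is synthesized.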

Authors (7)
  1. Kisu Yang
  2. Yoonna Jang
  3. Taewoo Lee
  4. Jinwoo Seong
  5. Hyungjin Lee
  6. Hwanseok Jang
  7. Heuiseok Lim