llm-japanese-dataset v0: Construction of Japanese Chat Dataset for Large Language Models and its Methodology (2305.12720v1)

Published 22 May 2023 in cs.CL and cs.AI

Abstract: This study constructed a Japanese chat dataset of about 8.4 million records for tuning LLMs. Recently, LLMs have been developed and are gaining popularity; however, high-performing LLMs are mainly built for English. There are two ways to support languages other than English with such LLMs: constructing LLMs from scratch or tuning existing models. In both cases, datasets are a necessary component. In this study, we focused on supporting Japanese in LLMs and constructed a dataset for training or tuning LLMs in Japanese. The dataset we constructed consists of various tasks, such as translation and knowledge tasks. In our experiment, we tuned an existing LLM on our dataset and evaluated its performance qualitatively. The results suggest that our dataset is potentially beneficial for LLMs; however, they also revealed some difficulties in constructing LLMs in languages other than English.
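As a rough illustration of what one record in such a chat/instruction-tuning dataset might look like (the exact schema is an assumption on our part, not stated in this abstract; Alpaca-style `instruction`/`input`/`output` fields are a common convention), here is a minimal sketch of assembling and serializing a translation-task record:

```python
import json

def make_record(instruction: str, input_text: str, output: str) -> dict:
    """Assemble one instruction-tuning record.

    Uses an Alpaca-style schema (instruction/input/output),
    which is assumed here, not confirmed by the paper.
    """
    return {"instruction": instruction, "input": input_text, "output": output}

# Hypothetical translation-task record (illustrative content only)
record = make_record(
    instruction="Translate the following English sentence into Japanese.",
    input_text="Good morning.",
    output="おはようございます。",
)

# Serialize as one JSON Lines row, keeping Japanese characters readable
line = json.dumps(record, ensure_ascii=False)
print(line)
```

Storing records this way (one JSON object per line) keeps a multi-million-record dataset streamable, and `ensure_ascii=False` avoids escaping the Japanese text into `\uXXXX` sequences.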

Authors (3)
  1. Masanori Hirano (18 papers)
  2. Masahiro Suzuki (55 papers)
  3. Hiroki Sakaji (21 papers)
Citations (6)