Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation (2304.07854v1)

Published 16 Apr 2023 in cs.CL

Abstract: Recently, significant public efforts have been directed towards developing low-cost models with capabilities akin to ChatGPT, thereby fostering the growth of open-source conversational models. However, there remains a scarcity of comprehensive and in-depth evaluations of these models' performance. In this study, we examine the influence of training data factors, including quantity, quality, and linguistic distribution, on model performance. Our analysis is grounded in several publicly accessible, high-quality instruction datasets, as well as our own Chinese multi-turn conversations. We assess various models using an evaluation set of 1,000 samples, encompassing nine real-world scenarios. Our goal is to supplement manual evaluations with quantitative analyses, offering valuable insights for the continued advancement of open-source chat models. Furthermore, to enhance the performance and the training and inference efficiency of models in the Chinese domain, we extend the vocabulary of LLaMA - the model with the closest open-source performance to proprietary LLMs like GPT-3 - and conduct secondary pre-training on 3.4B Chinese words. We make our model, data, and code publicly available.
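To illustrate the vocabulary-extension step described in the abstract, here is a minimal sketch (not the authors' released code) of adding Chinese tokens to a LLaMA tokenizer and resizing the embedding matrix before continuing pre-training; the base checkpoint name and the token list are placeholder assumptions.

```python
# Minimal sketch: extend a LLaMA tokenizer with extra Chinese tokens and
# resize the model's embeddings so the new tokens get trainable vectors.
from transformers import LlamaTokenizer, LlamaForCausalLM

base_model = "huggyllama/llama-7b"  # assumed base checkpoint, not the authors'
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(base_model)

# Hypothetical Chinese tokens (in practice learned from a large Chinese
# corpus, e.g. with SentencePiece); shown here as a tiny placeholder list.
new_chinese_tokens = ["中国", "模型", "训练"]
num_added = tokenizer.add_tokens(new_chinese_tokens)

# New embedding rows are randomly initialized and would be trained during
# the secondary pre-training pass on Chinese text.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocabulary size: {len(tokenizer)}")
```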

Authors (7)
  1. Yunjie Ji (14 papers)
  2. Yan Gong (118 papers)
  3. Yong Deng (93 papers)
  4. Yiping Peng (13 papers)
  5. Qiang Niu (11 papers)
  6. Baochang Ma (8 papers)
  7. Xiangang Li (46 papers)
Citations (21)