
CSL: A Large-scale Chinese Scientific Literature Dataset (2209.05034v1)

Published 12 Sep 2022 in cs.CL

Abstract: Scientific literature serves as a high-quality corpus, supporting a lot of NLP research. However, existing datasets are centered around the English language, which restricts the development of Chinese scientific NLP. In this work, we present CSL, a large-scale Chinese Scientific Literature dataset, which contains the titles, abstracts, keywords and academic fields of 396k papers. To our knowledge, CSL is the first scientific document dataset in Chinese. The CSL can serve as a Chinese corpus. Also, this semi-structured data is a natural annotation that can constitute many supervised NLP tasks. Based on CSL, we present a benchmark to evaluate the performance of models across scientific domain tasks, i.e., summarization, keyword generation and text classification. We analyze the behavior of existing text-to-text models on the evaluation tasks and reveal the challenges for Chinese scientific NLP tasks, which provides a valuable reference for future research. Data and code are available at https://github.com/ydli-ai/CSL

Overview of "CSL: A Large-scale Chinese Scientific Literature Dataset"

This paper introduces a novel dataset known as CSL, aimed at enhancing NLP research within Chinese scientific literature. Addressing a significant gap, CSL provides a corpus that is essential for developing NLP applications in non-English contexts, particularly in Chinese. This dataset comprises metadata from 396,209 papers, which includes titles, abstracts, keywords, and academic fields, making it a comprehensive resource for various NLP tasks.

Dataset Characteristics

CSL is distinguished by its focus on the Chinese language and its coverage of 67 second-level disciplines grouped under 13 first-level categories. Unlike existing resources that predominantly cater to English, CSL draws on peer-reviewed Chinese academic journals, ensuring high data reliability. The metadata are extracted directly from the journal database, so titles, abstracts, keywords, and field labels are preserved accurately rather than recovered from parsed PDFs.
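Each released record is semi-structured metadata. A minimal sketch of parsing one such record, assuming a tab-separated layout with title, abstract, keywords, discipline, and category columns (the column order and keyword separator here are illustrative assumptions, not the repository's exact schema):

```python
# Parse a CSL-style metadata line into a structured record.
# The TSV column order (title, abstract, keywords, discipline, category)
# and the "_" keyword separator are assumptions for illustration.

def parse_record(line: str) -> dict:
    title, abstract, keywords, discipline, category = line.rstrip("\n").split("\t")
    return {
        "title": title,
        "abstract": abstract,
        "keywords": keywords.split("_"),  # assumed keyword separator
        "discipline": discipline,         # one of 67 second-level disciplines
        "category": category,             # one of 13 first-level categories
    }

sample = "一种图像分类方法\t本文提出一种基于深度学习的图像分类方法。\t图像分类_深度学习\t计算机科学\t工学"
record = parse_record(sample)
print(record["keywords"])
```

Because every field is filled in by the journal database rather than annotated after the fact, each parsed record can serve directly as a labeled example for the tasks below.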

NLP Task Derivation and Benchmarking

The metadata inherent in CSL enables the creation of multiple NLP tasks such as text summarization, keyword generation, and text classification. The authors construct a benchmark from these tasks to evaluate model performance, facilitating advancements in NLP for Chinese scientific contexts. Specifically, they cast title generation from abstracts as summarization, generate keywords from abstracts, and classify papers into academic disciplines.
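All three benchmark tasks can be derived from a single record as text-to-text input/target pairs. A minimal sketch of that derivation, using task-prefix strings of our own choosing (the paper's exact prompt formats may differ):

```python
def to_task_examples(record: dict) -> list[tuple[str, str]]:
    """Derive (input, target) pairs for summarization, keyword
    generation, and classification from one CSL record.
    The prefix strings are illustrative, not the paper's exact prompts."""
    abstract = record["abstract"]
    return [
        ("summarize: " + abstract, record["title"]),            # abstract -> title
        ("keywords: " + abstract, ",".join(record["keywords"])),  # abstract -> keywords
        ("classify: " + abstract, record["category"]),          # 13-way label
    ]

record = {
    "title": "一种图像分类方法",
    "abstract": "本文提出一种基于深度学习的图像分类方法。",
    "keywords": ["图像分类", "深度学习"],
    "category": "工学",
}
pairs = to_task_examples(record)
print(len(pairs))  # 3 task examples from one record
```

Framing every task as text generation is what allows a single text-to-text model to be fine-tuned on all of them jointly, as in the multi-task setup described next.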

Methodology and Evaluation

The paper utilizes state-of-the-art text-to-text models, including T5, PEGASUS, and BART, to establish baselines. The authors unify the tasks into a text-generation format for multi-task learning and fine-tune the models on the CSL tasks. A T5 model further pre-trained on CSL (CSL-T5) improves over its general-domain counterpart, affirming the effectiveness of domain-adaptive training.

Experimental Outcomes

Empirical results suggest that while existing models achieve modest task performance, there remains substantial room for improvement. In particular, the domain-adapted CSL-T5 model performs best, highlighting the benefits of domain-specific training. The paper also underscores CSL's potential as a foundational resource for cross-task and few-shot learning research, given its versatile task construction capabilities.

Implications and Future Directions

The introduction of CSL sets a critical precedent for expanding research in non-English NLP, significantly enriching the resources available for Chinese NLP research. By providing a platform to develop and evaluate models across diverse scientific disciplines, CSL facilitates specialized research previously constrained by resource limitations.

Anticipated future developments involve extending the dataset to include multi-label annotations and exploring its application in few-shot learning scenarios. Additionally, the potential for CSL to contribute to broader cross-linguistic studies and comparisons is noteworthy.

In conclusion, CSL represents a significant contribution to the NLP field, especially for those focusing on non-English resources. Its comprehensive coverage and high-quality data pave the way for progress in Chinese scientific literature processing, influencing both theoretical and practical advancements in AI-driven language technology.

Authors (7)
  1. Yudong Li
  2. Yuqing Zhang
  3. Zhe Zhao
  4. Linlin Shen
  5. Weijie Liu
  6. Weiquan Mao
  7. Hui Zhang
Citations (45)