Building a Large Japanese Web Corpus for Large Language Models (2404.17733v1)

Published 27 Apr 2024 in cs.CL and cs.AI

Abstract: Open Japanese LLMs have been trained on the Japanese portions of corpora such as CC-100, mC4, and OSCAR. However, these corpora were not created for the quality of Japanese texts. This study builds a large Japanese web corpus by extracting and refining text from the Common Crawl archive (21 snapshots of approximately 63.4 billion pages crawled between 2020 and 2023). This corpus consists of approximately 312.1 billion characters (approximately 173 million pages), which is the largest of all available training corpora for Japanese LLMs, surpassing CC-100 (approximately 25.8 billion characters), mC4 (approximately 239.7 billion characters) and OSCAR 23.10 (approximately 74 billion characters). To confirm the quality of the corpus, we performed continual pre-training on Llama 2 7B, 13B, 70B, Mistral 7B v0.1, and Mixtral 8x7B Instruct as base LLMs and gained consistent (6.6-8.1 points) improvements on Japanese benchmark datasets. We also demonstrate that the improvement on Llama 2 13B brought from the presented corpus was the largest among those from other existing corpora.

Authors (10)
  1. Naoaki Okazaki (70 papers)
  2. Kakeru Hattori (5 papers)
  3. Hirai Shota (2 papers)
  4. Hiroki Iida (3 papers)
  5. Masanari Ohi (9 papers)
  6. Kazuki Fujii (14 papers)
  7. Taishi Nakamura (11 papers)
  8. Mengsay Loem (8 papers)
  9. Rio Yokota (64 papers)
  10. Sakae Mizuki (7 papers)
Citations (1)

Summary

Refining LLMs for Japanese Text: A Comprehensive Study

Enhancing Corpus Quality

The paper presents a comprehensive effort to construct a large, high-quality Japanese web corpus from 21 snapshots of the Common Crawl archive crawled between 2020 and 2023. The resulting corpus contains approximately 312.1 billion characters across roughly 173 million pages, making it the largest training corpus currently available for Japanese LLMs, surpassing CC-100 (approximately 25.8 billion characters), mC4 (approximately 239.7 billion characters), and OSCAR 23.10 (approximately 74 billion characters).
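To illustrate the extraction step, the sketch below pulls candidate pages out of a single Common Crawl WARC file. It relies on the off-the-shelf warcio and trafilatura libraries rather than the paper's own extraction tooling, and the file name is a placeholder; treat it as a rough sketch of the step under those assumptions, not the authors' method.

```python
# Minimal sketch: extract main text from a Common Crawl WARC file.
# warcio + trafilatura stand in for the paper's own extraction tooling;
# the path below is a placeholder, not a real snapshot name.
from warcio.archiveiterator import ArchiveIterator
import trafilatura

def iter_extracted_pages(warc_path):
    """Yield (url, extracted_text) pairs for HTML responses in a WARC file."""
    with open(warc_path, "rb") as stream:            # warcio handles .warc.gz transparently
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":        # skip request/metadata records
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            # Simplified decoding; a production pipeline would sniff the declared charset.
            html = record.content_stream().read().decode("utf-8", errors="replace")
            text = trafilatura.extract(html)         # boilerplate removal + main-text extraction
            if text:
                yield url, text

if __name__ == "__main__":
    for url, text in iter_extracted_pages("CC-MAIN-example.warc.gz"):
        print(url, len(text))
```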

Key Advancements in Corpus Development

  • Extraction and Refinement: Text was extracted from the Common Crawl archive and passed through a multi-stage refinement pipeline, so that only high-quality Japanese text enters the training data and the resulting models are more accurate and contextually aware.
  • Benchmark Improvements: Continual pre-training of Llama 2 (7B, 13B, and 70B), Mistral 7B v0.1, and Mixtral 8x7B Instruct on the refined corpus yielded consistent gains of 6.6 to 8.1 points on Japanese benchmark datasets, demonstrating the corpus's potential to improve model performance.
  • Notable Comparisons: For Llama 2 13B, the improvement from the presented corpus was the largest among those from existing corpora, underscoring the quality and relevance of the curated data.

The Importance of Deduplication and Cleaning

The corpus's quality was ensured through a series of cleaning and deduplication steps (a minimal end-to-end sketch follows this list):

  1. Rapid Japanese Detection: An initial filter assessed whether a page was likely to contain Japanese text, using lightweight checks on HTML attributes or page content, before more resource-intensive processing.
  2. Quality Filtering: Rule-based filters excluded low-quality pages, such as those with excessive repetition, a lack of proper punctuation, or a high proportion of non-Japanese characters.
  3. Advanced Deduplication: MinHash-based near-duplicate removal ensured that models are trained on unique, varied text, improving their ability to generalize from the training data.
  4. Host Filtering: URLs from hosts known for low-quality content or content conflicting with copyright and ethical standards were excluded.
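
To make these steps concrete, here is a minimal sketch of how Japanese detection, rule-based quality filtering, and MinHash deduplication might compose. It is not the paper's implementation: the thresholds, the character 5-gram shingling, and the use of the datasketch library are assumptions made for illustration.

```python
# Illustrative pipeline: Japanese detection, simple quality rules, and
# MinHash near-duplicate removal. Thresholds are invented for the example
# and do not reproduce the paper's actual filtering rules.
import re
from datasketch import MinHash, MinHashLSH

JAPANESE_CHARS = re.compile(r"[\u3040-\u30FF\u4E00-\u9FFF]")  # hiragana, katakana, kanji

def japanese_ratio(text):
    """Fraction of characters that are Japanese script."""
    if not text:
        return 0.0
    return len(JAPANESE_CHARS.findall(text)) / len(text)

def passes_quality_rules(text,
                         min_chars=400,
                         min_japanese_ratio=0.5,
                         min_unique_line_ratio=0.6):
    """Toy quality filter: length, Japanese-character ratio, line repetition."""
    lines = [line for line in text.splitlines() if line.strip()]
    if len(text) < min_chars or not lines:
        return False
    if japanese_ratio(text) < min_japanese_ratio:
        return False
    if len(set(lines)) / len(lines) < min_unique_line_ratio:  # penalize repeated lines
        return False
    return True

def minhash_of(text, num_perm=128):
    """MinHash over character 5-grams (a common choice for Japanese, which lacks spaces)."""
    m = MinHash(num_perm=num_perm)
    for i in range(len(text) - 4):
        m.update(text[i:i + 5].encode("utf-8"))
    return m

def deduplicate(pages, threshold=0.8):
    """Keep a page only if no previously kept page is a MinHash near-duplicate."""
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for idx, text in enumerate(pages):
        if not passes_quality_rules(text):
            continue
        m = minhash_of(text)
        if lsh.query(m):          # near-duplicate of something already kept
            continue
        lsh.insert(str(idx), m)
        kept.append(text)
    return kept
```

In the actual pipeline these stages run at Common Crawl scale with carefully designed rules and large-scale deduplication; the sketch only shows how the filters and the near-duplicate index fit together.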

Implications and Future Directions

The gains obtained from this large, carefully filtered corpus open up promising directions for Japanese-language AI, improving models' understanding and interaction capabilities. Some implications and potential future developments:

  1. Cultural Nuance Understanding: With better training data that is culturally relevant, AI models can attain a deeper understanding of context, idiomatic expressions, and cultural nuances vital for applications like translation, customer service, and content creation.
  2. Ethical AI Development: By intentionally filtering out low-quality, biased, or harmful content, the research sets a precedent for the importance of ethical considerations in AI training processes.
  3. Transfer to Other Languages: The methodology used to build this Japanese corpus can serve as a blueprint for similar efforts in other languages, particularly those with a smaller digital footprint.
  4. Commercial and Educational Applications: Improved LLMs can be pivotal in sectors such as automation, education, accessibility technology, and beyond, where language plays a crucial role in user interaction.

In conclusion, this paper not only charts a path toward more capable Japanese LLMs but also illustrates the impact of data quality on model capabilities. The detailed process of building and evaluating the corpus offers a blueprint that could inform LLM training pipelines for other languages.
