TongGu: Mastering Classical Chinese Understanding with Knowledge-Grounded Large Language Models (2407.03937v2)

Published 4 Jul 2024 in cs.CL

Abstract: Classical Chinese is a gateway to the rich heritage and wisdom of ancient China, yet its complexities pose formidable comprehension barriers for most modern people without specialized knowledge. While LLMs have shown remarkable capabilities in NLP, they struggle with Classical Chinese Understanding (CCU), especially in data-demanding and knowledge-intensive tasks. In response to this dilemma, we propose TongGu (meaning "understanding the ancient and the modern"), the first CCU-specific LLM, underpinned by three core contributions. First, we construct a two-stage instruction-tuning dataset, ACCN-INS, derived from rich Classical Chinese corpora, aiming to unlock the full CCU potential of LLMs. Second, we propose Redundancy-Aware Tuning (RAT) to prevent catastrophic forgetting, enabling TongGu to acquire new capabilities while preserving its foundational knowledge. Third, we present a CCU Retrieval-Augmented Generation (CCU-RAG) technique to reduce hallucinations through knowledge grounding. Extensive experiments across 24 diverse CCU tasks validate TongGu's superior ability, underscoring the effectiveness of RAT and CCU-RAG. The model and dataset are available at https://github.com/SCUT-DLVCLab/TongGu-LLM.
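The abstract's retrieval-augmented generation idea can be illustrated with a minimal sketch: retrieve the passages most relevant to a query, then prepend them to the prompt so the model's answer is grounded in retrieved knowledge. The scoring function, corpus, and prompt template below are illustrative assumptions, not the paper's CCU-RAG implementation.

```python
# Minimal sketch of retrieval-augmented prompt construction,
# in the spirit of the paper's CCU-RAG. The toy overlap scorer,
# corpus, and prompt template are assumptions for illustration.

def score(query: str, passage: str) -> int:
    """Toy relevance score: count of shared whitespace-delimited tokens."""
    return len(set(query.split()) & set(passage.split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved passages so the answer is knowledge-grounded."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return f"Reference passages:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Sima Qian wrote the Records of the Grand Historian.",
    "The Tang dynasty was founded in 618.",
    "Li Bai was a poet of the Tang dynasty.",
]
print(build_prompt("Which dynasty was the poet Li Bai from?", corpus))
```

A production system would replace the token-overlap scorer with dense embeddings over a Classical Chinese knowledge base, but the grounding mechanism is the same: the model answers conditioned on retrieved text rather than parametric memory alone.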

Authors (7)
  1. Jiahuan Cao (4 papers)
  2. Dezhi Peng (21 papers)
  3. Peirong Zhang (10 papers)
  4. Yongxin Shi (7 papers)
  5. Yang Liu (2253 papers)
  6. Kai Ding (29 papers)
  7. Lianwen Jin (116 papers)