Task Oriented In-Domain Data Augmentation (2406.16694v1)

Published 24 Jun 2024 in cs.CL

Abstract: LLMs have shown superior performance across a wide range of applications and fields. To achieve better performance in specialized domains such as law and advertising, LLMs are often continually pre-trained on in-domain data. However, existing approaches suffer from two major issues. First, in-domain data are scarce compared with general, domain-agnostic data. Second, the data used for continual pre-training are not task-aware, so they may not be helpful to downstream applications. We propose TRAIT, a task-oriented in-domain data augmentation framework. The framework consists of two parts: in-domain data selection and task-oriented synthetic passage generation. The data selection strategy identifies and selects a large amount of in-domain data from general corpora, significantly enriching the domain knowledge in the continual pre-training data. The synthetic passages contain guidance on how to use domain knowledge to answer questions in downstream tasks. By training on such passages, the model aligns with the needs of downstream applications. We adapt LLMs to two domains: advertisement and math. On average, TRAIT improves LLM performance by 8% in the advertisement domain and 7.5% in the math domain.
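
To make the first stage of the framework concrete, the sketch below illustrates one plausible way to select in-domain passages from a general corpus: score each passage with a small domain classifier trained on a handful of seed examples and keep the high-scoring ones for continual pre-training. The abstract does not specify TRAIT's actual selection mechanism, so the classifier choice, seed data, and 0.5 threshold here are illustrative assumptions, not the paper's method.

```python
# Minimal sketch (assumption-based) of in-domain data selection:
# train a tiny domain classifier on seed passages, then score a general
# corpus and keep passages that look in-domain (math, in this toy example).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Labeled seed passages: 1 = in-domain (math), 0 = general text.
seed_texts = [
    "Solve for x: 2x + 3 = 11, so x = 4.",
    "The derivative of x^2 with respect to x is 2x.",
    "The weather today is sunny with light winds.",
    "The museum opens at nine in the morning.",
]
seed_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(seed_texts), seed_labels)

# Score unlabeled passages from a general corpus and retain likely in-domain ones.
general_corpus = [
    "Integrate f(x) = 3x^2 over [0, 1] to obtain 1.",
    "The recipe calls for two cups of flour and one egg.",
]
scores = clf.predict_proba(vectorizer.transform(general_corpus))[:, 1]
selected = [t for t, s in zip(general_corpus, scores) if s > 0.5]  # assumed cutoff
print(selected)
```

The second stage described in the abstract, task-oriented synthetic passage generation, would then prompt an LLM to turn such selected passages into passages that demonstrate how the domain knowledge answers downstream-task questions; that step is not sketched here because the page gives no detail on the prompting setup.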

Authors (8)
  1. Xiao Liang (132 papers)
  2. Xinyu Hu (32 papers)
  3. Simiao Zuo (25 papers)
  4. Yeyun Gong (78 papers)
  5. Qiang Lou (4 papers)
  6. Yi Liu (543 papers)
  7. Shao-Lun Huang (48 papers)
  8. Jian Jiao (44 papers)
Citations (1)