
AnyTaskTune: Advanced Domain-Specific Solutions through Task-Fine-Tuning (2407.07094v1)

Published 9 Jul 2024 in cs.CL and cs.AI

Abstract: The pervasive deployment of Large Language Models (LLMs) in various sectors often neglects the nuanced requirements of individuals and small organizations, who benefit more from models precisely tailored to their specific business contexts than from those with broadly superior general capabilities. This work introduces AnyTaskTune, a novel fine-tuning methodology coined Task-Fine-Tune, developed specifically to elevate model performance on a diverse array of domain-specific tasks. The method involves a meticulous process of identifying and defining targeted sub-tasks within a domain, followed by the creation of specialized enhancement datasets for fine-tuning, thereby optimizing task-specific model performance. We conducted comprehensive fine-tuning experiments not only in the legal domain, for tasks such as keyword extraction and sentence prediction, but across over twenty different sub-tasks drawn from finance, healthcare, law, psychology, consumer services, and human resources. To substantiate our approach and facilitate community engagement, we will open-source these bilingual task datasets. Our findings demonstrate that models fine-tuned with the Task-Fine-Tune methodology not only achieve superior performance on these specific tasks but also significantly outperform models with higher general capabilities in their respective domains. Our work is publicly available at https://github.com/PandaVT/DataTager.
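The abstract describes a pipeline of defining domain sub-tasks, building instruction-style datasets for each, and fine-tuning a base model on them. The sketch below illustrates that general pattern with a standard Hugging Face supervised fine-tuning loop; it is not the authors' actual pipeline, and the base model name, dataset record, and hyperparameters are placeholders chosen only for illustration.

```python
# Minimal sketch of task-specific fine-tuning in the spirit of "Task-Fine-Tune":
# one small instruction dataset for a single domain sub-task (here, a made-up
# legal keyword-extraction example), trained as a causal-LM objective.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "Qwen/Qwen2-1.5B"  # placeholder base model, not the paper's choice

# One record per (instruction, input, output) example for the sub-task.
examples = [
    {"instruction": "Extract the legal keywords from the sentence.",
     "input": "The defendant breached the contract by failing to deliver the goods.",
     "output": "defendant; breach of contract; delivery of goods"},
]

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

def to_text(ex):
    # Concatenate prompt and target into a single training string.
    return {"text": f"{ex['instruction']}\n{ex['input']}\n### Answer:\n"
                    f"{ex['output']}{tokenizer.eos_token}"}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = (Dataset.from_list(examples)
           .map(to_text)
           .map(tokenize, batched=True,
                remove_columns=["instruction", "input", "output", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="task-finetuned", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=2e-5),
    train_dataset=dataset,
    # mlm=False makes the collator copy input_ids into labels (causal LM).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, one such dataset (and fine-tuned checkpoint or adapter) would be prepared per sub-task, which is what distinguishes this setup from general-purpose instruction tuning on a mixed corpus.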

Authors (9)
  1. Jiaxi Cui (13 papers)
  2. Wentao Zhang (261 papers)
  3. Jing Tang (108 papers)
  4. Xudong Tong (1 paper)
  5. Zhenwei Zhang (16 papers)
  6. Amie (1 paper)
  7. Jing Wen (18 papers)
  8. Rongsheng Wang (16 papers)
  9. Pengfei Wu (18 papers)