
Generative Large Language Models Are All-purpose Text Analytics Engines: Text-to-text Learning Is All Your Need (2312.06099v1)

Published 11 Dec 2023 in cs.CL

Abstract:

Objective: To solve major clinical NLP tasks using a unified text-to-text learning architecture based on a generative LLM via prompt tuning.

Methods: We formulated 7 key clinical NLP tasks as text-to-text learning and solved them with one unified generative clinical LLM, GatorTronGPT, built on the GPT-3 architecture and trained with up to 20 billion parameters. We adopted prompt tuning: soft prompts (i.e., trainable vectors) were added as a prefix to the input layer and optimized during training, while the LLM parameters remained frozen (i.e., were not updated). We evaluated the proposed method on the 7 clinical NLP tasks and compared it with previous task-specific solutions based on Transformer models.

Results and Conclusion: The proposed approach achieved state-of-the-art performance on 5 of the 7 major clinical NLP tasks using one unified generative LLM. It outperformed previous task-specific Transformer models by ~3% for concept extraction and 7% for relation extraction applied to social determinants of health, 3.4% for clinical concept normalization, 3.4-10% for clinical abbreviation disambiguation, and 5.5-9% for natural language inference. It also outperformed a previously developed prompt-based machine reading comprehension (MRC) model, GatorTron-MRC, for clinical concept and relation extraction. The proposed approach delivers on the "one model for all" promise from training to deployment using a single unified generative LLM.
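
The soft-prompt setup described in the Methods can be illustrated with a short sketch. The code below is a minimal, hypothetical example of prompt tuning with a frozen causal LM: trainable prefix vectors are concatenated with the token embeddings, and only those vectors receive gradients. It uses the public `gpt2` checkpoint from Hugging Face `transformers` as a stand-in for GatorTronGPT (which is not distributed in this form); the prompt length and the task prefix in the example input are illustrative, not taken from the paper.

```python
# Minimal sketch of soft-prompt (prefix) tuning with a frozen causal LM.
# Assumptions: "gpt2" stands in for GatorTronGPT; prompt length is illustrative.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

class SoftPromptLM(nn.Module):
    def __init__(self, model_name="gpt2", n_prompt_tokens=20):
        super().__init__()
        self.lm = AutoModelForCausalLM.from_pretrained(model_name)
        for p in self.lm.parameters():          # freeze all LLM weights
            p.requires_grad = False
        d_model = self.lm.get_input_embeddings().embedding_dim
        # Trainable soft-prompt vectors, prepended to every input sequence.
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, input_ids, attention_mask, labels=None):
        tok_embeds = self.lm.get_input_embeddings()(input_ids)          # (B, T, D)
        B = tok_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(B, -1, -1)        # (B, P, D)
        inputs_embeds = torch.cat([prompt, tok_embeds], dim=1)          # (B, P+T, D)
        prompt_mask = torch.ones(B, prompt.size(1), dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        attn = torch.cat([prompt_mask, attention_mask], dim=1)
        if labels is not None:
            # Ignore loss on the soft-prompt positions.
            pad = torch.full((B, prompt.size(1)), -100, dtype=labels.dtype,
                             device=labels.device)
            labels = torch.cat([pad, labels], dim=1)
        return self.lm(inputs_embeds=inputs_embeds, attention_mask=attn, labels=labels)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = SoftPromptLM()
batch = tokenizer(["extract concepts: Patient denies chest pain."],
                  return_tensors="pt")
out = model(batch["input_ids"], batch["attention_mask"], labels=batch["input_ids"])
out.loss.backward()   # gradients flow only into model.soft_prompt
```

Because only the small `n_prompt_tokens x d_model` prompt matrix is updated, each task can be served by storing a per-task prompt alongside one frozen LLM checkpoint, which is what makes the "one model for all" deployment story practical.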

Authors (9)
  1. Cheng Peng (177 papers)
  2. Xi Yang (160 papers)
  3. Aokun Chen (12 papers)
  4. Zehao Yu (41 papers)
  5. Kaleb E Smith (14 papers)
  6. Anthony B Costa (4 papers)
  7. Mona G Flores (6 papers)
  8. Jiang Bian (229 papers)
  9. Yonghui Wu (115 papers)
Citations (4)