
Exploring the Trade-Offs: Unified Large Language Models vs Local Fine-Tuned Models for Highly-Specific Radiology NLI Task (2304.09138v1)

Published 18 Apr 2023 in cs.CL

Abstract: Recently, ChatGPT and GPT-4 have emerged and gained immense global attention due to their unparalleled performance in language processing. Despite demonstrating impressive capability in various open-domain tasks, their adequacy in highly specific fields like radiology remains untested. Radiology presents unique linguistic phenomena distinct from open-domain data due to its specificity and complexity. Assessing the performance of LLMs in such specific domains is crucial not only for a thorough evaluation of their overall performance but also for providing valuable insights into future model design directions: whether model design should be generic or domain-specific. To this end, in this study, we evaluate the performance of ChatGPT/GPT-4 on a radiology NLI task and compare it to other models fine-tuned specifically on task-related data samples. We also conduct a comprehensive investigation of ChatGPT/GPT-4's reasoning ability by introducing varying levels of inference difficulty. Our results show that 1) GPT-4 outperforms ChatGPT on the radiology NLI task; and 2) specifically fine-tuned models require large amounts of task-related data to achieve performance comparable to ChatGPT/GPT-4. These findings suggest that constructing a generic model capable of solving various tasks across different domains is feasible.
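The evaluation described above frames radiology sentence pairs as a natural language inference (NLI) query to a chat model. A minimal sketch of how such a query might be framed and its answer mapped back to a label is shown below; the prompt template, label set, and parser are illustrative assumptions, not the paper's exact protocol.

```python
# Hypothetical sketch of framing a radiology NLI query for a chat model.
# The prompt wording, label set, and fallback behavior are assumptions.

LABELS = ("entailment", "contradiction", "neutral")

def build_nli_prompt(premise: str, hypothesis: str) -> str:
    """Format a premise/hypothesis pair as a single-word-answer instruction."""
    return (
        "Determine the logical relationship between the two radiology "
        f"statements below. Answer with exactly one word: {', '.join(LABELS)}.\n\n"
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Answer:"
    )

def parse_label(response: str) -> str:
    """Map a free-text model response onto one of the NLI labels."""
    lowered = response.lower()
    for label in LABELS:
        if label in lowered:
            return label
    return "neutral"  # fallback when the model answers off-format

prompt = build_nli_prompt(
    "No focal consolidation is seen in either lung.",
    "The lungs show signs of pneumonia.",
)
print(parse_label("That would be a contradiction."))  # → contradiction
```

In practice, the `prompt` string would be sent to the model under test (e.g. via the OpenAI chat API) and the returned text passed through `parse_label`; a fine-tuned baseline would instead predict the label directly from the sentence pair.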

Authors (15)
  1. Zihao Wu
  2. Lu Zhang
  3. Chao Cao
  4. Xiaowei Yu
  5. Haixing Dai
  6. Chong Ma
  7. Zhengliang Liu
  8. Lin Zhao
  9. Gang Li
  10. Wei Liu
  11. Quanzheng Li
  12. Dinggang Shen
  13. Xiang Li
  14. Dajiang Zhu
  15. Tianming Liu
Citations (38)