
Evaluating multiple large language models in pediatric ophthalmology (2311.04368v1)

Published 7 Nov 2023 in cs.CL

Abstract: IMPORTANCE The response effectiveness of different LLMs and of various individuals, including medical students, graduate students, and practicing physicians, in pediatric ophthalmology consultations has not yet been clearly established. OBJECTIVE To design a 100-question exam in pediatric ophthalmology, evaluate the performance of LLMs in this highly specialized scenario, and compare it with the performance of medical students and physicians at different levels. DESIGN, SETTING, AND PARTICIPANTS This survey study assessed three LLMs, namely ChatGPT (GPT-3.5), GPT-4, and PaLM2, alongside three human cohorts: medical students, postgraduate students, and attending physicians, in their ability to answer questions related to pediatric ophthalmology. The exam was administered as test papers through each LLM's network interface and completed by volunteer participants. MAIN OUTCOMES AND MEASURES Mean scores of the LLMs and humans on 100 multiple-choice questions, as well as the answer stability, correlation, and response confidence of each LLM. RESULTS GPT-4 performed comparably to attending physicians, while ChatGPT (GPT-3.5) and PaLM2 outperformed medical students but slightly trailed postgraduate students. Furthermore, GPT-4 exhibited greater stability and confidence when responding to inquiries than ChatGPT (GPT-3.5) and PaLM2. CONCLUSIONS AND RELEVANCE Our results underscore the potential for LLMs to provide medical assistance in pediatric ophthalmology and suggest a significant capacity to guide the education of medical students.
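The outcome measures above (mean score on a multiple-choice exam and answer stability across repeated runs) can be sketched in a few lines. This is a minimal illustration with hypothetical data, not the authors' analysis code; stability is computed here as the average per-question agreement with the modal answer across runs, which is one plausible reading of "answer stability":

```python
from collections import Counter

def mean_score(answers, key):
    """Fraction of questions answered correctly against an answer key."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def answer_stability(runs):
    """Average, over questions, of the fraction of repeated runs
    that agree with that question's most common (modal) answer."""
    per_question = list(zip(*runs))  # transpose: one tuple of answers per question
    return sum(
        Counter(q).most_common(1)[0][1] / len(runs) for q in per_question
    ) / len(per_question)

# Hypothetical 5-question exam, three repeated runs of one model
key = ["A", "C", "B", "D", "A"]
runs = [
    ["A", "C", "B", "B", "A"],
    ["A", "C", "D", "B", "A"],
    ["A", "C", "B", "B", "A"],
]
print(mean_score(runs[0], key))   # 0.8
print(answer_stability(runs))     # ~0.933
```

With 100 questions and several repeated administrations per model, the same two functions would yield the mean-score and stability comparisons reported in the abstract.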

Authors (13)
  1. Jason Holmes (19 papers)
  2. Rui Peng (79 papers)
  3. Yiwei Li (107 papers)
  4. Jinyu Hu (4 papers)
  5. Zhengliang Liu (91 papers)
  6. Zihao Wu (100 papers)
  7. Huan Zhao (109 papers)
  8. Xi Jiang (53 papers)
  9. Wei Liu (1135 papers)
  10. Hong Wei (10 papers)
  11. Jie Zou (32 papers)
  12. Tianming Liu (161 papers)
  13. Yi Shao (8 papers)