Assessing Large Language Models in Mechanical Engineering Education: A Study on Mechanics-Focused Conceptual Understanding (2401.12983v1)

Published 13 Jan 2024 in cs.CL, cs.AI, and physics.ed-ph

Abstract: This study is a pioneering endeavor to investigate the capabilities of LLMs in addressing conceptual questions within the domain of mechanical engineering, with a focus on mechanics. Our examination involves a manually crafted exam of 126 multiple-choice questions spanning various aspects of mechanics courses, including Fluid Mechanics, Mechanical Vibration, Engineering Statics and Dynamics, Mechanics of Materials, Theory of Elasticity, and Continuum Mechanics. Three LLMs, namely ChatGPT (GPT-3.5), ChatGPT (GPT-4), and Claude (Claude-2.1), were evaluated against engineering faculty and students with or without a mechanical engineering background. The findings reveal GPT-4's superior performance over the other two LLMs and the human cohorts in answering questions across various mechanics topics, except for Continuum Mechanics. This signals potential future improvements for GPT models in handling symbolic calculations and tensor analyses. The performance of all LLMs was significantly improved when explanations were prompted prior to direct responses, underscoring the crucial role of prompt engineering. Interestingly, GPT-3.5 demonstrates improved performance with prompts covering a broader domain, while GPT-4 excels with prompts focusing on specific subjects. Finally, GPT-4 exhibits notable advancements in mitigating input bias, in contrast to the guessing preferences exhibited by humans. This study unveils the substantial potential of LLMs as highly knowledgeable assistants in both mechanical pedagogy and scientific research.
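The explanation-first prompting the abstract credits with improving all three LLMs can be sketched as a prompt-construction step plus answer extraction. This is a minimal illustrative sketch, not the paper's actual code: `build_prompt` and `extract_choice` are hypothetical helper names, and the exact prompt wording is an assumption.

```python
import re

def build_prompt(question: str, choices: dict, explain_first: bool) -> str:
    """Assemble a multiple-choice prompt, optionally asking the model to
    explain its reasoning before committing to an answer (the strategy
    the study found improved accuracy). Wording here is illustrative."""
    lines = [question]
    lines += [f"({letter}) {text}" for letter, text in sorted(choices.items())]
    if explain_first:
        lines.append("First explain your reasoning step by step, "
                     "then state the final answer as 'Answer: <letter>'.")
    else:
        lines.append("State only the final answer as 'Answer: <letter>'.")
    return "\n".join(lines)

def extract_choice(response: str):
    """Pull the chosen letter from a response ending in 'Answer: B'."""
    match = re.search(r"Answer:\s*([A-E])", response)
    return match.group(1) if match else None

# Example usage with a statics-style question (content invented for illustration):
prompt = build_prompt(
    "A simply supported beam carries a central point load. Where is the "
    "bending moment maximum?",
    {"A": "At the supports", "B": "At midspan", "C": "At quarter span"},
    explain_first=True,
)
print(extract_choice("The moment diagram peaks at the center. Answer: B"))
```

Scoring the same question bank with `explain_first=True` versus `False` is then a direct way to measure the prompting effect the study reports.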

Authors (14)
  1. Jie Tian (28 papers)
  2. Jixin Hou (7 papers)
  3. Zihao Wu (100 papers)
  4. Peng Shu (34 papers)
  5. Zhengliang Liu (91 papers)
  6. Yujie Xiang (2 papers)
  7. Beikang Gu (2 papers)
  8. Nicholas Filla (3 papers)
  9. Yiwei Li (107 papers)
  10. Ning Liu (199 papers)
  11. Xianyan Chen (7 papers)
  12. Keke Tang (22 papers)
  13. Tianming Liu (161 papers)
  14. Xianqiao Wang (15 papers)
Citations (5)