
A Bibliometric Review of Large Language Models Research from 2017 to 2023 (2304.02020v1)

Published 3 Apr 2023 in cs.DL, cs.CL, cs.CY, and cs.SI

Abstract: Large language models (LLMs) are a class of language models that have demonstrated outstanding performance across a range of NLP tasks and have become a highly sought-after research area because of their ability to generate human-like language and their potential to revolutionize science and technology. In this study, we conduct bibliometric and discourse analyses of scholarly literature on LLMs. Synthesizing over 5,000 publications, this paper serves as a roadmap for researchers, practitioners, and policymakers to navigate the current landscape of LLMs research. We present the research trends from 2017 to early 2023, identifying patterns in research paradigms and collaborations. We start by analyzing the core algorithm developments and NLP tasks that are fundamental to LLMs research. We then investigate the applications of LLMs in various fields and domains, including medicine, engineering, social science, and humanities. Our review also reveals the dynamic, fast-paced evolution of LLMs research. Overall, this paper offers valuable insights into the current state, impact, and potential of LLMs research and its applications.

Authors (6)
  1. Lizhou Fan (23 papers)
  2. Lingyao Li (38 papers)
  3. Zihui Ma (9 papers)
  4. Sanggyu Lee (2 papers)
  5. Huizi Yu (6 papers)
  6. Libby Hemphill (33 papers)
Citations (111)