An Interdisciplinary Outlook on Large Language Models for Scientific Research (2311.04929v1)

Published 3 Nov 2023 in cs.CL, cs.AI, cs.DL, and cs.LG

Abstract: In this paper, we describe the capabilities and constraints of LLMs within disparate academic disciplines, aiming to delineate their strengths and limitations with precision. We examine how LLMs augment scientific inquiry, offering concrete examples such as accelerating literature review by summarizing vast numbers of publications, enhancing code development through automated syntax correction, and refining the scientific writing process. Simultaneously, we articulate the challenges LLMs face, including their reliance on extensive and sometimes biased datasets, and the potential ethical dilemmas stemming from their use. Our critical discussion extends to the varying impacts of LLMs across fields, from the natural sciences, where they help model complex biological sequences, to the social sciences, where they can parse large-scale qualitative data. We conclude by offering a nuanced perspective on how LLMs can be both a boon and a boundary to scientific progress.

Authors (12)
  1. James Boyko (1 paper)
  2. Joseph Cohen (5 papers)
  3. Nathan Fox (14 papers)
  4. Maria Han Veiga (17 papers)
  5. Jennifer I-Hsiu Li (30 papers)
  6. Jing Liu (525 papers)
  7. Bernardo Modenesi (5 papers)
  8. Andreas H. Rauch (2 papers)
  9. Kenneth N. Reid (4 papers)
  10. Soumi Tribedi (3 papers)
  11. Anastasia Visheratina (2 papers)
  12. Xin Xie (81 papers)
Citations (16)