
Can Large Language Models Write Parallel Code? (2401.12554v3)

Published 23 Jan 2024 in cs.DC and cs.AI

Abstract: LLMs are becoming an increasingly popular tool for software development. Their ability to model and generate source code has been demonstrated in a variety of contexts, including code completion, summarization, translation, and lookup. However, they often struggle to generate code for complex programs. In this paper, we study the capabilities of state-of-the-art LLMs to generate parallel code. In order to evaluate LLMs, we create a benchmark, ParEval, consisting of prompts that represent 420 different coding tasks related to scientific and parallel computing. We use ParEval to evaluate the effectiveness of several state-of-the-art open- and closed-source LLMs on these tasks. We introduce novel metrics for evaluating the performance of generated code, and use them to explore how well each LLM performs for 12 different computational problem types and six different parallel programming models.
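To give a sense of the kind of task such a benchmark poses, consider a parallel reduction: the model is asked to sum a collection of numbers using multiple workers. The sketch below is purely illustrative (it is not an actual ParEval prompt, and the function name `parallel_sum` is hypothetical); it uses Python's standard `concurrent.futures` rather than one of the parallel programming models the paper evaluates.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(values, num_workers=4):
    """Sum a list by splitting it into chunks and reducing partial sums in parallel."""
    if not values:
        return 0
    # Split the input into roughly equal chunks, one per worker.
    chunk = max(1, len(values) // num_workers)
    chunks = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    # Compute each chunk's partial sum in a worker thread, then combine.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        partials = pool.map(sum, chunks)
    return sum(partials)

print(parallel_sum(list(range(1000))))  # 499500
```

A benchmark evaluating generated code of this kind must check both correctness (does the parallel result match the sequential one?) and performance (does adding workers actually speed the computation up?), which is why the paper introduces metrics beyond plain functional pass rates.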

Authors (5)
  1. Daniel Nichols (10 papers)
  2. Joshua H. Davis (3 papers)
  3. Zhaojun Xie (1 paper)
  4. Arjun Rajaram (3 papers)
  5. Abhinav Bhatele (33 papers)
Citations (12)