Scope is all you need: Transforming LLMs for HPC Code (2308.09440v3)

Published 18 Aug 2023 in cs.CL and cs.PL

Abstract: With easier access to powerful compute resources, there is a growing trend in the field of AI for software development to develop larger and larger LLMs to address a variety of programming tasks. Even LLMs applied to tasks from the high-performance computing (HPC) domain are huge in size (e.g., billions of parameters) and demand expensive compute resources for training. We found this design choice confusing - why do we need large LLMs trained on natural languages and programming languages unrelated to HPC for HPC-specific tasks? In this line of work, we aim to question design choices made by existing LLMs by developing smaller LLMs for specific domains - we call them domain-specific LLMs. Specifically, we start off with HPC as a domain and propose a novel tokenizer named Tokompiler, designed specifically for preprocessing code in HPC and compilation-centric tasks. Tokompiler leverages knowledge of language primitives to generate language-oriented tokens, providing a context-aware understanding of code structure while avoiding human semantics attributed to code structures completely. We applied Tokompiler to pre-train two state-of-the-art models, SPT-Code and Polycoder, for a Fortran code corpus mined from GitHub. We evaluate the performance of these models against the conventional LLMs. Results demonstrate that Tokompiler significantly enhances code completion accuracy and semantic understanding compared to traditional tokenizers in normalized-perplexity tests, down to ~1 perplexity score. This research opens avenues for further advancements in domain-specific LLMs, catering to the unique demands of HPC and compilation tasks.
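The abstract's core idea, generating language-oriented tokens from language primitives while discarding the human-chosen naming semantics of identifiers, can be illustrated with a toy pass over Fortran code. The sketch below is only an assumption-laden simplification (regex-based tokenization, a hypothetical anonymize helper, a small keyword set); Tokompiler's actual preprocessing is grammar-aware and differs in detail.

```python
# Illustrative sketch only (not the authors' implementation): replace
# identifiers and numeric literals in a Fortran snippet with anonymized
# placeholders so tokens reflect code structure rather than naming choices.
import re

FORTRAN_KEYWORDS = {
    "program", "end", "implicit", "none", "integer", "real", "do",
    "if", "then", "else", "call", "subroutine", "function", "return",
}

def anonymize(code: str) -> list[str]:
    """Map identifiers to VAR_i and numeric literals to NUM_j,
    keeping keywords, operators, and punctuation intact."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+\.?\d*|[^\sA-Za-z_\d]", code)
    var_map, num_map, out = {}, {}, []
    for tok in tokens:
        low = tok.lower()
        if low in FORTRAN_KEYWORDS:
            out.append(low)                      # keep language primitives
        elif re.fullmatch(r"[A-Za-z_]\w*", tok):
            var_map.setdefault(low, f"VAR_{len(var_map) + 1}")
            out.append(var_map[low])             # anonymized identifier
        elif re.fullmatch(r"\d+\.?\d*", tok):
            num_map.setdefault(tok, f"NUM_{len(num_map) + 1}")
            out.append(num_map[tok])             # anonymized literal
        else:
            out.append(tok)                      # operators, punctuation
    return out

snippet = """
program axpy
  implicit none
  integer :: i
  real :: y(100), x(100), a
  do i = 1, 100
    y(i) = a * x(i) + y(i)
  end do
end program axpy
"""
print(" ".join(anonymize(snippet)))
```

Under this toy scheme, two semantically identical loops that differ only in variable names produce the same token stream, which is the property the tokenizer exploits for compilation-centric pre-training.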

Authors (12)
  1. Tal Kadosh (7 papers)
  2. Niranjan Hasabnis (21 papers)
  3. Vy A. Vo (11 papers)
  4. Nadav Schneider (9 papers)
  5. Neva Krien (3 papers)
  6. Abdul Wasay (4 papers)
  7. Nesreen Ahmed (18 papers)
  8. Ted Willke (13 papers)
  9. Guy Tamir (5 papers)
  10. Yuval Pinter (41 papers)
  11. Timothy Mattson (11 papers)
  12. Gal Oren (38 papers)
Citations (6)
