
LM4HPC: Towards Effective Language Model Application in High-Performance Computing (2306.14979v1)

Published 26 Jun 2023 in cs.LG and cs.DC

Abstract: In recent years, language models (LMs), such as GPT-4, have been widely used in multiple domains, including natural language processing, visualization, and so on. However, applying them for analyzing and optimizing high-performance computing (HPC) software is still challenging due to the lack of HPC-specific support. In this paper, we design the LM4HPC framework to facilitate the research and development of HPC software analyses and optimizations using LMs. Tailored for supporting HPC datasets, AI models, and pipelines, our framework is built on top of a range of components from different levels of the machine learning software stack, with Hugging Face-compatible APIs. Using three representative tasks, we evaluated the prototype of our framework. The results show that LM4HPC can help users quickly evaluate a set of state-of-the-art models and generate insightful leaderboards.
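The abstract describes Hugging Face-compatible pipeline APIs for HPC tasks. As a rough illustration of what that calling convention looks like, here is a minimal toy sketch in the style of `transformers.pipeline(task, model=...)` — the class, task name, and output fields below are assumptions for illustration, not the actual LM4HPC API:

```python
# Toy sketch of a Hugging Face-style pipeline interface for an HPC task.
# All names here are illustrative assumptions, not the real LM4HPC API.

class HPCPipeline:
    """Callable object pairing a task name with a model identifier,
    mimicking the transformers.pipeline(task, model=...) convention."""

    def __init__(self, task: str, model: str):
        self.task = task
        self.model = model

    def __call__(self, code: str) -> dict:
        # A real pipeline would run model inference on the input code;
        # this stub just echoes the inputs to show the interface shape.
        return {"task": self.task, "model": self.model,
                "input_chars": len(code)}


def pipeline(task: str, model: str = "dummy-model") -> HPCPipeline:
    """Factory function in the style of Hugging Face's pipeline()."""
    return HPCPipeline(task, model)


# Usage: construct a task-specific pipeline, then call it on source code.
clf = pipeline("code-similarity")
result = clf("for (i = 0; i < n; i++) a[i] = b[i] + c[i];")
```

In the Hugging Face convention, the task string selects the processing pipeline and the model argument selects the checkpoint, which is what makes it straightforward to swap in different state-of-the-art models and compare them on a leaderboard, as the paper's evaluation does.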

Authors (6)
  1. Pei-Hung Lin (16 papers)
  2. Tristan Vanderbruggen (7 papers)
  3. Chunhua Liao (16 papers)
  4. Murali Emani (17 papers)
  5. Bronis de Supinski (2 papers)
  6. Le Chen (71 papers)
Citations (19)
