
Zero-Shot RTL Code Generation with Attention Sink Augmented Large Language Models (2401.08683v1)

Published 12 Jan 2024 in cs.AR, cs.AI, cs.LG, cs.PL, and cs.SE

Abstract: The design and optimization of hardware have traditionally been resource-intensive, demanding considerable expertise and dependence on established design automation tools. This paper discusses the possibility of exploiting LLMs to streamline the code generation process in hardware design. In contrast to earlier studies, this paper aims to use LLMs that accept a high-level design specification through a single prompt to generate the corresponding Register-Transfer Level (RTL) code. Using LLMs for RTL code generation not only expedites design iteration cycles but also facilitates the exploration of design spaces that pose computational challenges for conventional techniques. Through our evaluation, we demonstrate the shortcomings of existing attention mechanisms and present the ability of LLMs to produce functional, optimized, and industry-standard-compliant RTL code when a novel attention mechanism is used. These findings underscore the expanding role of LLMs in shaping the future landscape of architectural exploration and automation in hardware design.
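
The abstract names an attention-sink-augmented attention mechanism but does not describe it here. As a hedged illustration only, the sketch below shows one common attention-sink scheme (in the StreamingLLM style: every query always attends to a few initial "sink" tokens plus a recent sliding window); the function names sink_window_mask and attend, and the defaults num_sink=4 and window=256, are illustrative assumptions, not details taken from the paper.

import torch

def sink_window_mask(seq_len: int, num_sink: int = 4, window: int = 256) -> torch.Tensor:
    # Boolean mask (True = may attend). Combines causal masking with
    # attention sinks: each query position attends to the first
    # `num_sink` tokens and to the most recent `window` tokens.
    # (Hypothetical parameters; the paper's mechanism may differ.)
    q = torch.arange(seq_len).unsqueeze(1)   # query positions, shape (L, 1)
    k = torch.arange(seq_len).unsqueeze(0)   # key positions, shape (1, L)
    causal = k <= q                          # no attending to the future
    recent = (q - k) < window                # sliding local window
    sink = k < num_sink                      # always-visible sink tokens
    return causal & (recent | sink)

def attend(qk_scores: torch.Tensor, num_sink: int = 4, window: int = 256) -> torch.Tensor:
    # Apply the sink+window mask to raw attention scores, then normalize.
    seq_len = qk_scores.size(-1)
    mask = sink_window_mask(seq_len, num_sink, window)
    scores = qk_scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1)

In this scheme the sink tokens act as a stable destination for attention mass, which is the usual motivation for attention sinks when generating long outputs such as RTL modules; whether the paper uses exactly this masking pattern cannot be confirmed from the abstract alone.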

Authors (2)
  1. Selim Sandal (1 paper)
  2. Ismail Akturk (10 papers)
Citations (1)
