
RTLCoder: Fully Open-Source and Efficient LLM-Assisted RTL Code Generation Technique (2312.08617v4)

Published 14 Dec 2023 in cs.PL and cs.AR

Abstract: The automatic generation of RTL code (e.g., Verilog) from natural language instructions using LLMs has attracted significant research interest recently. However, most existing approaches rely heavily on commercial LLMs such as ChatGPT, while open-source LLMs tailored to this design generation task exhibit notably inferior performance. The absence of high-quality open-source solutions restricts the flexibility and data privacy of this emerging technique. In this study, we present a new customized LLM solution with a modest parameter count of only 7B, achieving better performance than GPT-3.5 on all representative benchmarks for RTL code generation. In particular, it outperforms GPT-4 on the VerilogEval Machine benchmark. This remarkable balance between accuracy and efficiency is made possible by leveraging our new RTL code dataset and a customized LLM algorithm, both of which have been made fully open-source. Furthermore, we have successfully quantized our LLM to 4-bit with a total size of 4GB, enabling it to run on a single laptop with only slight performance degradation. This efficiency allows the RTL generator to serve as a local assistant for engineers, ensuring that all design privacy concerns are addressed.

Authors (7)
  1. Shang Liu (68 papers)
  2. Wenji Fang (13 papers)
  3. Yao Lu (212 papers)
  4. Qijun Zhang (11 papers)
  5. Hongce Zhang (10 papers)
  6. Zhiyao Xie (30 papers)
  7. Jing Wang (740 papers)
Citations (32)