
RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model (2308.05345v3)

Published 10 Aug 2023 in cs.LG and cs.AR

Abstract: Inspired by the recent success of LLMs like ChatGPT, researchers have started to explore the adoption of LLMs for agile hardware design, such as generating design RTL from natural-language instructions. However, in existing works, the target designs are all relatively simple, small in scale, and proposed by the authors themselves, making a fair comparison among different LLM solutions challenging. In addition, many prior works focus only on design correctness, without evaluating the quality of the generated design RTL. In this work, we propose an open-source benchmark named RTLLM for generating design RTL from natural-language instructions. To systematically evaluate the auto-generated design RTL, we summarize three progressive goals, named the syntax goal, functionality goal, and design quality goal. This benchmark can automatically provide a quantitative evaluation of any given LLM-based solution. Furthermore, we propose an easy-to-use yet surprisingly effective prompt engineering technique named self-planning, which significantly boosts the performance of GPT-3.5 on our proposed benchmark.
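The self-planning technique mentioned in the abstract can be sketched as a two-stage prompting flow: first ask the model to plan the design, then ask it to generate RTL conditioned on both the specification and the plan. The sketch below is a hedged illustration, not the paper's exact prompts; `query_llm` and the prompt wording are hypothetical placeholders.

```python
def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-model API call (e.g. GPT-3.5).
    # Here it returns a canned string so the sketch is self-contained.
    return f"<model response to a {len(prompt)}-char prompt>"

def self_planning_generate(spec: str) -> str:
    """Two-stage 'self-planning' generation, as described at a high level
    in the abstract: plan first, then produce design RTL."""
    # Stage 1: ask the model to reason about the design before coding.
    plan = query_llm(
        "You are a hardware designer. Given the design specification "
        f"below, write a step-by-step implementation plan.\n\n{spec}"
    )
    # Stage 2: generate Verilog conditioned on both the spec and the plan.
    rtl = query_llm(
        "Write synthesizable Verilog for the specification below, "
        f"following the plan.\n\nSpecification:\n{spec}\n\nPlan:\n{plan}"
    )
    return rtl
```

In practice `query_llm` would wrap an actual LLM API, and the benchmark would then check the returned RTL against the three progressive goals (syntax, functionality, design quality).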

Authors (4)
  1. Yao Lu
  2. Shang Liu
  3. Qijun Zhang
  4. Zhiyao Xie
Citations (58)