HDLdebugger: Streamlining HDL debugging with Large Language Models (2403.11671v1)

Published 18 Mar 2024 in cs.AR, cs.AI, cs.CE, cs.LG, and cs.SE

Abstract: In the domain of chip design, Hardware Description Languages (HDLs) play a pivotal role. However, due to the complex syntax of HDLs and the limited availability of online resources, debugging HDL code remains a difficult and time-intensive task, even for seasoned engineers. Consequently, there is a pressing need to develop automated HDL code debugging models, which can alleviate the burden on hardware engineers. Despite the strong capabilities of LLMs in generating, completing, and debugging software code, their utilization in the specialized field of HDL debugging has been limited and, to date, has not yielded satisfactory results. In this paper, we propose an LLM-assisted HDL debugging framework, namely HDLdebugger, which consists of HDL debugging data generation via a reverse engineering approach, a search engine for retrieval-augmented generation, and a retrieval-augmented LLM fine-tuning approach. Through the integration of these components, HDLdebugger can automate and streamline HDL debugging for chip design. Our comprehensive experiments, conducted on an HDL code dataset sourced from Huawei, reveal that HDLdebugger outperforms 13 cutting-edge LLM baselines, displaying exceptional effectiveness in HDL code debugging.
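The retrieval-augmented generation step described in the abstract can be illustrated with a minimal sketch: given a buggy HDL snippet, retrieve the most similar reference snippets from a code corpus and assemble them into a debugging prompt for an LLM. This is an assumption-laden toy (token-overlap scoring, hypothetical function names), not the paper's actual search engine or fine-tuning pipeline.

```python
# Illustrative sketch only: a minimal retrieval-augmented prompt builder
# for HDL debugging. The scoring function and helper names are assumptions,
# not HDLdebugger's actual implementation.

def score(query_tokens, doc_tokens):
    """Token-overlap score: a crude stand-in for a real search engine."""
    return len(set(query_tokens) & set(doc_tokens))

def retrieve(buggy_code, corpus, k=2):
    """Return the k corpus snippets most similar to the buggy code."""
    q = buggy_code.split()
    ranked = sorted(corpus, key=lambda d: score(q, d.split()), reverse=True)
    return ranked[:k]

def build_prompt(buggy_code, corpus, k=2):
    """Assemble a debugging prompt from retrieved context (the RAG step)."""
    context = "\n---\n".join(retrieve(buggy_code, corpus, k))
    return (
        "Reference HDL snippets:\n" + context +
        "\n\nFix the bug in this HDL module:\n" + buggy_code
    )

# Example usage with a tiny two-snippet corpus:
corpus = [
    "module adder (input a, input b, output sum); assign sum = a + b; endmodule",
    "module counter (input clk, output reg [3:0] q);",
]
buggy = "module adder (input a, input b output sum);"  # missing comma
prompt = build_prompt(buggy, corpus, k=1)
```

In the actual framework, the retrieved context would be fed to a fine-tuned LLM rather than used directly; the sketch only shows how retrieval narrows the context handed to the model.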

Authors (8)
  1. Xufeng Yao (8 papers)
  2. Haoyang Li (95 papers)
  3. Tsz Ho Chan (30 papers)
  4. Wenyi Xiao (12 papers)
  5. Mingxuan Yuan (81 papers)
  6. Yu Huang (176 papers)
  7. Lei Chen (485 papers)
  8. Bei Yu (113 papers)
Citations (14)