
AutoVCoder: A Systematic Framework for Automated Verilog Code Generation using LLMs (2407.18333v1)

Published 21 Jul 2024 in cs.AR and cs.AI

Abstract: Recently, the use of LLMs for software code generation, e.g., C/C++ and Python, has proven highly successful. However, LLMs still suffer from low syntactic and functional correctness when it comes to the generation of register-transfer level (RTL) code, such as Verilog. To address this issue, in this paper, we develop AutoVCoder, a systematic open-source framework that significantly improves the correctness of LLM-generated Verilog code while also enhancing output quality. Our framework integrates three novel techniques: a high-quality hardware dataset generation approach, a two-round LLM fine-tuning method, and a domain-specific retrieval-augmented generation (RAG) mechanism. Experimental results demonstrate that AutoVCoder outperforms both industrial and academic LLMs in Verilog code generation. Specifically, AutoVCoder shows a 0.5% and 2.2% improvement in functional correctness on the EvalMachine and EvalHuman benchmarks compared with BetterV, and also achieves a 3.4% increase in syntax correctness and a 3.4% increase in functional correctness on the RTLLM benchmark compared with RTLCoder.
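The abstract names a domain-specific RAG mechanism but does not detail it. As a rough illustration of the general idea, the sketch below retrieves the most relevant Verilog snippet from a small corpus and prepends it to the generation prompt. The corpus entries, bag-of-words scoring, and prompt template are illustrative assumptions, not AutoVCoder's actual implementation (which would use a learned, hardware-specific retriever).

```python
# Hedged sketch of retrieval-augmented Verilog generation.
# NOTE: corpus, scoring, and prompt format are hypothetical examples,
# not AutoVCoder's real retriever or dataset.
from collections import Counter
import math

CORPUS = [
    ("4-bit counter",
     "module counter(input clk, input rst, output reg [3:0] q);\n"
     "  always @(posedge clk) q <= rst ? 4'b0 : q + 1;\n"
     "endmodule"),
    ("2-to-1 mux",
     "module mux2(input a, input b, input sel, output y);\n"
     "  assign y = sel ? b : a;\n"
     "endmodule"),
    ("D flip-flop",
     "module dff(input clk, input d, output reg q);\n"
     "  always @(posedge clk) q <= d;\n"
     "endmodule"),
]

def tokenize(text):
    # Crude whitespace tokenizer; a real system would use embeddings.
    return [t for t in text.lower().replace("-", " ").split() if t]

def score(query, doc):
    # Bag-of-words overlap, length-normalized.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values()) / math.sqrt(sum(d.values()) + 1)

def retrieve(query, k=1):
    ranked = sorted(
        CORPUS,
        key=lambda item: score(query, item[0] + " " + item[1]),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(spec):
    # Prepend retrieved examples to the task description for the LLM.
    examples = "\n\n".join(code for _, code in retrieve(spec))
    return f"// Reference examples:\n{examples}\n\n// Task: {spec}\n"

prompt = build_prompt("Write a 4-bit counter with synchronous reset")
```

In a full pipeline, `prompt` would be passed to the fine-tuned LLM; the value of the RAG step is that syntactically correct, domain-relevant examples steer generation toward valid RTL.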

Authors (8)
  1. Mingzhe Gao (2 papers)
  2. Jieru Zhao (28 papers)
  3. Zhe Lin (163 papers)
  4. Wenchao Ding (33 papers)
  5. Xiaofeng Hou (10 papers)
  6. Yu Feng (216 papers)
  7. Chao Li (429 papers)
  8. Minyi Guo (98 papers)