LLM-Aided Efficient Hardware Design Automation (2410.18582v1)

Published 24 Oct 2024 in eess.SY and cs.SY

Abstract: With the rapidly increasing complexity of modern chips, hardware engineers must invest more effort in tasks such as circuit design, verification, and physical implementation. These workflows often involve continuous modifications, which are labor-intensive and error-prone. There is therefore a growing need for more efficient and cost-effective Electronic Design Automation (EDA) solutions to accelerate new hardware development. Recently, large language models (LLMs) have made significant advances in contextual understanding, logical reasoning, and response generation. Since hardware designs and intermediate scripts can be expressed as text, it is natural to explore whether integrating LLMs into EDA could simplify and even fully automate the entire workflow. Accordingly, this paper discusses such possibilities from several angles, covering hardware description language (HDL) generation, code debugging, design verification, and physical implementation. Two case studies, along with their future outlook, are introduced to highlight the capabilities of LLMs in code repair and testbench generation. Finally, future directions and challenges are highlighted to further explore the potential of LLMs in shaping next-generation EDA.
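To make the code-repair case study concrete, the sketch below shows one common shape such an LLM-in-the-loop flow can take: compile a Verilog module, feed the compiler diagnostics back to a model, and retry until the design compiles or an iteration budget runs out. This is a minimal illustration under stated assumptions, not the paper's implementation: `llm_complete` is a hypothetical placeholder for whatever LLM API is available, and the loop assumes Icarus Verilog (`iverilog`) is installed for compilation checks.

```python
import os
import subprocess
import tempfile


def llm_complete(prompt: str) -> str:
    """Hypothetical placeholder for any LLM completion endpoint;
    expected to return the model's text response to `prompt`."""
    raise NotImplementedError("wire up your preferred LLM client here")


def compile_verilog(source: str) -> str:
    """Compile Verilog with Icarus Verilog and return its diagnostic output.
    For simplicity, any stderr output is treated as a failed compile."""
    with tempfile.NamedTemporaryFile("w", suffix=".v", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            ["iverilog", "-o", os.devnull, path],
            capture_output=True,
            text=True,
        )
        return result.stderr.strip()
    finally:
        os.remove(path)


def repair_loop(source: str, max_iters: int = 3) -> str:
    """Iteratively ask the LLM to repair the design, feeding compiler
    errors back into the prompt, until it compiles or the budget is spent."""
    for _ in range(max_iters):
        errors = compile_verilog(source)
        if not errors:
            return source  # compiles cleanly; hand off to simulation/verification
        prompt = (
            "The following Verilog module fails to compile.\n"
            f"Compiler output:\n{errors}\n\n"
            f"Module:\n{source}\n\n"
            "Return only the corrected Verilog code."
        )
        source = llm_complete(prompt)
    return source  # best effort after max_iters attempts
```

The design choice here mirrors the feedback-driven repair idea discussed in the paper: rather than trusting a single LLM response, each candidate is checked by an existing EDA tool and its diagnostics are used as the next prompt, so the tool, not the model, remains the arbiter of correctness.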

Authors (6)
  1. Kangwei Xu (5 papers)
  2. Ruidi Qiu (5 papers)
  3. Zhuorui Zhao (2 papers)
  4. Grace Li Zhang (27 papers)
  5. Ulf Schlichtmann (46 papers)
  6. Bing Li (374 papers)
Citations (1)