
ChipGPT: How far are we from natural language hardware design (2305.14019v3)

Published 23 May 2023 in cs.AI, cs.AR, and cs.PL

Abstract: As LLMs like ChatGPT have exhibited unprecedented machine intelligence, they have also shown strong performance in assisting hardware engineers to realize higher-efficiency logic designs via natural language interaction. To estimate the potential of an LLM-assisted hardware design process, this work demonstrates an automated design environment that uses LLMs to generate hardware logic designs from natural language specifications. To realize a more accessible and efficient chip development flow, we present a scalable, four-stage, zero-code logic design framework based on LLMs that requires no retraining or finetuning. First, the demo, ChipGPT, generates prompts for the LLM, which then produces initial Verilog programs. Second, an output manager corrects and optimizes these programs before collecting them into the final design space. Finally, ChipGPT searches this space to select the optimal design under the target metrics. The evaluation sheds light on whether LLMs can generate correct and complete hardware logic designs from natural language descriptions for some specifications. It shows that ChipGPT improves programmability and controllability, and offers a broader design optimization space than prior work and native LLMs alone.
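The abstract describes a four-stage, zero-code flow: prompt generation from a natural-language specification, LLM generation of candidate Verilog, an output manager that corrects candidates into a design space, and a search over that space under target metrics. The sketch below is a minimal illustration of how such a pipeline might be wired together; the function names, prompt templates, sanity checks, and metric are assumptions for illustration, not the paper's implementation, and the LLM call is injected as a plain callable rather than any specific API.

```python
# Hypothetical sketch of a ChipGPT-style four-stage flow.
# All names, templates, and checks are illustrative assumptions.
from typing import Callable, List


def build_prompts(spec: str) -> List[str]:
    """Stage 1: turn a natural-language spec into LLM prompts."""
    template = ("You are a hardware engineer. Write synthesizable Verilog "
                "for the following specification:\n{spec}")
    # Several prompt variants widen the eventual design space.
    return [
        template.format(spec=spec),
        template.format(spec=spec) + "\nOptimize for area.",
        template.format(spec=spec) + "\nOptimize for speed.",
    ]


def generate_verilog(prompts: List[str], llm: Callable[[str], str]) -> List[str]:
    """Stage 2: query the LLM for initial Verilog programs."""
    return [llm(prompt) for prompt in prompts]


def manage_output(candidates: List[str]) -> List[str]:
    """Stage 3: correct/normalize programs and collect the design space
    (crude placeholder check standing in for real correction passes)."""
    design_space = []
    for code in candidates:
        code = code.strip()
        if "module" in code and "endmodule" in code:
            design_space.append(code)
    return design_space


def search_design_space(design_space: List[str],
                        metric: Callable[[str], float]) -> str:
    """Stage 4: select the design that scores best under the target metric."""
    return min(design_space, key=metric)


def chipgpt_flow(spec: str, llm: Callable[[str], str],
                 metric: Callable[[str], float]) -> str:
    prompts = build_prompts(spec)
    candidates = generate_verilog(prompts, llm)
    design_space = manage_output(candidates)
    return search_design_space(design_space, metric)
```

In this reading, keeping the LLM and the cost metric as injected callables reflects the framework's zero-code, no-finetuning premise: the pipeline orchestrates an off-the-shelf model rather than modifying it.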

Authors (8)
  1. Kaiyan Chang (10 papers)
  2. Ying Wang (366 papers)
  3. Haimeng Ren (3 papers)
  4. Mengdi Wang (199 papers)
  5. Shengwen Liang (11 papers)
  6. Yinhe Han (23 papers)
  7. Huawei Li (39 papers)
  8. Xiaowei Li (63 papers)
Citations (56)

