
Circuit Transformer: End-to-end Circuit Design by Predicting the Next Gate (2403.13838v1)

Published 14 Mar 2024 in cs.LG and cs.AR

Abstract: Language, a prominent human ability to express through sequential symbols, has been computationally mastered by recent advances in LLMs. By predicting the next word recurrently with huge neural models, LLMs have shown unprecedented capabilities in understanding and reasoning. Circuit, as the "language" of electronic design, specifies the functionality of an electronic device by cascade connections of logic gates. Then, can circuits also be mastered by a sufficiently large "circuit model", which can conquer electronic design tasks by simply predicting the next logic gate? In this work, we take the first step to explore such possibilities. Two primary barriers impede the straightforward application of LLMs to circuits: their complex, non-sequential structure, and the intolerance of hallucination due to strict constraints (e.g., equivalence). For the first barrier, we encode a circuit as a memory-less, depth-first traversal trajectory, which allows Transformer-based neural models to better leverage its structural information, and predict the next gate on the trajectory as a circuit model. For the second barrier, we introduce an equivalence-preserving decoding process, which ensures that every token in the generated trajectory adheres to the specified equivalence constraints. Moreover, the circuit model can also be regarded as a stochastic policy to tackle optimization-oriented circuit design tasks. Experimentally, we trained a Transformer-based model of 88M parameters, named "Circuit Transformer", which demonstrates impressive performance in end-to-end logic synthesis. With Monte-Carlo tree search, Circuit Transformer significantly improves over resyn2 while retaining strict equivalence, showcasing the potential of generative AI in conquering electronic design challenges.
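To make the encoding concrete, the following is a minimal Python sketch (not the authors' code) of serializing an and-inverter graph (AIG) into the kind of memory-less, depth-first traversal trajectory the abstract describes. The Node fields and the PI/AND/NOT token names are illustrative assumptions, not the paper's exact vocabulary.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    """An AIG node: a primary input ("PI") or a 2-input AND gate."""
    kind: str                       # "PI" or "AND"
    name: str = ""                  # primary-input name, e.g. "a"
    left: Optional["Node"] = None   # first fanin (AND only)
    right: Optional["Node"] = None  # second fanin (AND only)
    neg_left: bool = False          # invert the first fanin?
    neg_right: bool = False         # invert the second fanin?

def encode_dfs(node: Node, negated: bool = False) -> List[str]:
    """Serialize the circuit rooted at `node` into a token trajectory.

    "Memory-less" here means shared subcircuits are re-expanded rather
    than referenced, so the sequence alone determines the circuit and a
    next-token model never has to resolve back-pointers.
    """
    tokens = ["NOT"] if negated else []
    if node.kind == "PI":
        tokens.append(f"PI:{node.name}")
    else:  # AND gate: emit the gate, then both fanins depth-first
        tokens.append("AND")
        tokens += encode_dfs(node.left, node.neg_left)
        tokens += encode_dfs(node.right, node.neg_right)
    return tokens

# Usage: f = AND(a, NOT(AND(a, b))) becomes a prefix-style trajectory.
a, b = Node("PI", name="a"), Node("PI", name="b")
inner = Node("AND", left=a, right=b)
root = Node("AND", left=a, right=inner, neg_right=True)
print(encode_dfs(root))  # ['AND', 'PI:a', 'NOT', 'AND', 'PI:a', 'PI:b']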

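The equivalence-preserving decoding described for the second barrier can likewise be sketched as masked generation: at each step, the model may only emit tokens that can still be completed into a circuit equivalent to the specification. In the sketch below, `stays_equivalent` is a hypothetical feasibility oracle standing in for the paper's exact constraint machinery.

import math
from typing import Callable, List, Optional, Sequence

def constrained_next_gate(
    logits: Sequence[float],
    prefix: List[str],
    vocab: Sequence[str],
    stays_equivalent: Callable[[List[str], str], bool],
) -> Optional[str]:
    """Greedy next-gate choice restricted to equivalence-safe tokens."""
    best_tok, best_logit = None, -math.inf
    for tok, logit in zip(vocab, logits):
        # Hypothetical oracle: True iff appending `tok` to `prefix` can
        # still complete to a circuit equivalent to the specification.
        if logit > best_logit and stays_equivalent(prefix, tok):
            best_tok, best_logit = tok, logit
    return best_tok  # None only when no token is feasible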
Authors (6)
  1. Xihan Li (12 papers)
  2. Xing Li (82 papers)
  3. Lei Chen (487 papers)
  4. Xing Zhang (104 papers)
  5. Mingxuan Yuan (81 papers)
  6. Jun Wang (992 papers)
Citations (6)
