Make Every Move Count: LLM-based High-Quality RTL Code Generation Using MCTS (2402.03289v1)

Published 5 Feb 2024 in cs.LG, cs.AI, and cs.AR

Abstract: Existing LLMs for register transfer level (RTL) code generation face challenges such as compilation failures and suboptimal power, performance, and area (PPA) efficiency, owing to the lack of PPA awareness in conventional transformer decoding algorithms. In response, we present an automated transformer decoding algorithm that integrates Monte Carlo tree search (MCTS) for lookahead, guiding the transformer to produce compilable, functionally correct, and PPA-optimized code. Empirical evaluation with a fine-tuned LLM on RTL codesets shows that the proposed technique generates functionally correct code more consistently than prompting-only methods and effectively addresses the PPA-unawareness drawback of naive LLMs. For the largest design generated by the state-of-the-art LLM (a 16-bit adder), our technique achieves a 31.8% improvement in the area-delay product.
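
The decoding loop the abstract describes can be pictured as a standard MCTS over partial token sequences, where the LLM proposes candidate next tokens and a downstream evaluation scores completed designs. Below is a minimal, self-contained sketch, not the authors' implementation: `next_token_probs` is a toy stand-in for the fine-tuned RTL LLM's next-token distribution, and `evaluate_rtl` is a hypothetical stand-in for the reward the paper derives from compilation, functional tests, and synthesis-reported PPA (e.g., area-delay product).

```python
# Illustrative MCTS-guided decoding sketch (toy stand-ins, not the paper's code).
import math
import random

VOCAB = ["module", "wire", "assign", "always", "endmodule"]
MAX_LEN = 8   # maximum token sequence length for the toy example
C_UCT = 1.4   # exploration constant in the UCT formula
TOP_K = 3     # number of LLM-proposed next tokens to expand per node

def next_token_probs(seq):
    """Toy stand-in for the fine-tuned LLM's next-token distribution."""
    rng = random.Random(hash(tuple(seq)) & 0xFFFF)
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(VOCAB, weights)}

def evaluate_rtl(seq):
    """Toy stand-in for the reward: in the paper this would combine
    compile success, functional tests, and a synthesis-reported PPA score."""
    score = 1.0 if seq and seq[-1] == "endmodule" else 0.2
    return score - 0.01 * len(seq)  # mildly prefer shorter designs

class Node:
    def __init__(self, seq, parent=None):
        self.seq = seq
        self.parent = parent
        self.children = {}   # token -> Node
        self.visits = 0
        self.value = 0.0     # sum of rollout rewards

    def uct(self, child):
        if child.visits == 0:
            return float("inf")
        exploit = child.value / child.visits
        explore = C_UCT * math.sqrt(math.log(self.visits) / child.visits)
        return exploit + explore

def rollout(seq):
    """Greedily complete a partial sequence, then score the result."""
    seq = list(seq)
    while len(seq) < MAX_LEN and (not seq or seq[-1] != "endmodule"):
        probs = next_token_probs(seq)
        seq.append(max(probs, key=probs.get))
    return evaluate_rtl(seq)

def mcts_decode(iterations=200):
    root = Node([])
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCT while the node is fully expanded.
        while node.children and len(node.children) >= TOP_K:
            node = max(node.children.values(), key=lambda c: node.uct(c))
        # 2. Expansion: add one of the top-k LLM-proposed next tokens.
        if len(node.seq) < MAX_LEN and (not node.seq or node.seq[-1] != "endmodule"):
            probs = next_token_probs(node.seq)
            for tok in sorted(probs, key=probs.get, reverse=True)[:TOP_K]:
                if tok not in node.children:
                    node = node.children.setdefault(tok, Node(node.seq + [tok], node))
                    break
        # 3. Simulation: greedy rollout to a terminal sequence, then score it.
        reward = rollout(node.seq)
        # 4. Backpropagation: push the reward back up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited path as the decoded token sequence.
    seq, node = [], root
    while node.children:
        tok, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        seq.append(tok)
    return seq

if __name__ == "__main__":
    print(" ".join(mcts_decode()))
```

The selection/expansion/rollout/backpropagation split here is standard UCT; in the paper's setting the rollout score would come from an actual compile-simulate-synthesize flow rather than the toy heuristic above, which is what lets the search steer decoding toward PPA-optimized RTL.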

Authors (7)
  1. Matthew DeLorenzo
  2. Animesh Basak Chowdhury
  3. Vasudev Gohil
  4. Shailja Thakur
  5. Ramesh Karri
  6. Siddharth Garg
  7. Jeyavijayan Rajendran
Citations (15)
