Assessing Large Language Models in Generating RTL Design Specifications (2512.00045v1)
Abstract: As IC design grows more complex, automating the comprehension and documentation of RTL code has become increasingly important. Engineers currently must manually interpret existing RTL code and write specifications, a slow and error-prone process. Although LLMs have been studied for generating RTL from specifications, automated specification generation remains underexplored, largely due to the lack of reliable evaluation methods. To address this gap, we investigate how prompting strategies affect RTL-to-specification quality and introduce metrics for faithfully evaluating generated specifications. We also benchmark open-source and commercial LLMs, providing a foundation for more automated and efficient specification workflows in IC design.
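To make the RTL-to-specification workflow described in the abstract concrete, below is a minimal sketch (not from the paper) of prompting an LLM to produce a specification from an RTL module. The `call_llm` helper, the prompt template, and the example counter module are hypothetical placeholders for whichever model and prompting strategy is under evaluation.

```python
# Minimal sketch of an RTL-to-specification prompting workflow.
# `call_llm` is a hypothetical stand-in for any chat-completion API;
# the prompt template illustrates one possible prompting strategy.

RTL_TO_SPEC_PROMPT = """You are a senior IC design engineer.
Read the following RTL module and write a design specification
covering: purpose, I/O ports (name, width, direction), and behavior.

RTL:
{rtl_code}
"""


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's API."""
    raise NotImplementedError("wire up an LLM backend here")


def generate_spec(rtl_code: str) -> str:
    """Prompt an LLM to produce a natural-language spec for RTL code."""
    return call_llm(RTL_TO_SPEC_PROMPT.format(rtl_code=rtl_code))


if __name__ == "__main__":
    # Toy Verilog counter used as the input RTL.
    counter_rtl = """
    module counter #(parameter WIDTH = 8) (
        input  wire             clk,
        input  wire             rst_n,
        output reg [WIDTH-1:0]  count
    );
        always @(posedge clk or negedge rst_n)
            if (!rst_n) count <= 0;
            else        count <= count + 1;
    endmodule
    """
    print(generate_spec(counter_rtl))
```

Evaluating the resulting specification against the source RTL (the metrics the paper introduces) would then score how faithfully the generated text reflects the module's actual ports and behavior.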