
Hierarchical Source-to-Post-Route QoR Prediction in High-Level Synthesis with GNNs

Published 14 Jan 2024 in cs.AR, cs.AI, and cs.LG (arXiv:2401.08696v1)

Abstract: High-level synthesis (HLS) notably speeds up the hardware design process by avoiding RTL programming. However, the turnaround time of HLS increases significantly when post-route quality of results (QoR) is considered during optimization. To tackle this issue, we propose a hierarchical post-route QoR prediction approach for FPGA HLS, which features: (1) a modeling flow that directly estimates latency and post-route resource usage from C/C++ programs; (2) a graph construction method that effectively represents the control and data flow graph of the source code and the effects of HLS pragmas; and (3) a hierarchical GNN training and prediction method capable of capturing the impact of loop hierarchies. Experimental results show that our method achieves a prediction error of less than 10% across different QoR metrics, a substantial improvement over state-of-the-art GNN methods. By adopting the proposed methodology, the runtime for design space exploration in HLS is shortened to tens of minutes, and the achieved ADRS is reduced to 6.91% on average.
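The abstract's third contribution, a hierarchical GNN that captures loop-hierarchy effects, can be illustrated with a minimal sketch. The code below is a hypothetical toy (the paper's actual model, features, and aggregation functions are not specified here): one round of mean-neighbor message passing over a small control/data-flow graph, followed by a bottom-up readout that pools node embeddings loop by loop, so an outer loop's embedding incorporates the pooled embeddings of its nested loops. All graph, loop-tree, and feature values are invented for illustration.

```python
# Hypothetical sketch of hierarchical GNN readout over a loop hierarchy.
# Not the paper's implementation; all structures below are toy examples.

def gnn_layer(features, edges):
    """One mean-aggregation message-passing step: h_v <- h_v + mean(h_u for u->v)."""
    out = {}
    for v, h in features.items():
        neigh = [features[u] for (u, w) in edges if w == v]
        if neigh:
            mean = [sum(x) / len(neigh) for x in zip(*neigh)]
        else:
            mean = [0.0] * len(h)
        out[v] = [a + b for a, b in zip(h, mean)]
    return out

def hierarchical_readout(features, loop_tree, loop_nodes, loop):
    """Pool a loop's own nodes together with the pooled embeddings of its child loops."""
    vecs = [features[v] for v in loop_nodes[loop]]
    vecs += [hierarchical_readout(features, loop_tree, loop_nodes, c)
             for c in loop_tree.get(loop, [])]
    return [sum(x) / len(vecs) for x in zip(*vecs)]

# Toy CDFG: nodes a, b belong to outer loop L0; c, d to inner loop L1 nested in L0.
feats = {"a": [1.0], "b": [2.0], "c": [3.0], "d": [4.0]}
edges = [("a", "b"), ("b", "c"), ("c", "d")]
loop_tree = {"L0": ["L1"]}                      # L1 is nested inside L0
loop_nodes = {"L0": ["a", "b"], "L1": ["c", "d"]}

h = gnn_layer(feats, edges)
embedding = hierarchical_readout(h, loop_tree, loop_nodes, "L0")
# `embedding` would then feed a regression head predicting latency or resource usage.
```

The key design point this sketch mirrors is that the readout follows the loop-nesting tree rather than flattening the whole graph into one global pool, which is how a hierarchical model can attribute QoR effects to individual loop levels.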
