Hierarchical Source-to-Post-Route QoR Prediction in High-Level Synthesis with GNNs (2401.08696v1)
Abstract: High-level synthesis (HLS) notably speeds up the hardware design process by avoiding RTL programming. However, the turnaround time of HLS increases significantly when post-route quality of results (QoR) is considered during optimization. To tackle this issue, we propose a hierarchical post-route QoR prediction approach for FPGA HLS, which features: (1) a modeling flow that directly estimates latency and post-route resource usage from C/C++ programs; (2) a graph construction method that effectively represents the control and data flow graph of the source code and the effects of HLS pragmas; and (3) a hierarchical GNN training and prediction method capable of capturing the impact of loop hierarchies. Experimental results show that our method achieves a prediction error of less than 10% across different QoR metrics, a substantial improvement over state-of-the-art GNN methods. By adopting our proposed methodology, the runtime of design space exploration in HLS is shortened to tens of minutes, and the achieved ADRS is reduced to 6.91% on average.
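To make the hierarchical idea concrete, here is a minimal, dependency-free sketch of the two ingredients the abstract names: message passing over a program graph, followed by a bottom-up readout along the loop hierarchy. The graph layout, feature vectors, and pooling choices below are illustrative assumptions for exposition, not the paper's actual model.

```python
# Illustrative sketch (not the paper's implementation): mean-aggregation
# message passing over a CDFG-like graph, then bottom-up pooling along a
# loop-hierarchy tree so each loop's embedding also reflects its children.

def message_pass(features, edges):
    """One round of mean-aggregation message passing.
    features: {node: [float, ...]}, edges: [(src, dst), ...]."""
    out = {}
    for node, feat in features.items():
        neighbors = [features[s] for s, d in edges if d == node]
        if not neighbors:
            out[node] = list(feat)
            continue
        agg = [sum(vals) / len(neighbors) for vals in zip(*neighbors)]
        # Blend the node's own feature with the aggregated neighbor message.
        out[node] = [0.5 * a + 0.5 * b for a, b in zip(feat, agg)]
    return out

def pool(features, nodes):
    """Mean-pool a set of node embeddings into a single vector."""
    vecs = [features[n] for n in nodes]
    return [sum(vals) / len(vecs) for vals in zip(*vecs)]

def hierarchical_embed(features, edges, loop_tree, loop_nodes, root):
    """Embed each loop from its own nodes plus its child-loop embeddings,
    recursing bottom-up through the loop hierarchy."""
    features = message_pass(features, edges)

    def embed(loop):
        child_vecs = [embed(c) for c in loop_tree.get(loop, [])]
        own = pool(features, loop_nodes[loop])
        vecs = [own] + child_vecs
        return [sum(vals) / len(vecs) for vals in zip(*vecs)]

    return embed(root)

# Toy example: node "a" in the top loop feeds "b" -> "c" in a nested loop.
features = {"a": [1.0], "b": [3.0], "c": [5.0]}
edges = [("a", "b"), ("b", "c")]
loop_tree = {"top": ["inner"]}           # inner loop nested under top
loop_nodes = {"top": ["a"], "inner": ["b", "c"]}
emb = hierarchical_embed(features, edges, loop_tree, loop_nodes, "top")
```

A real model would replace the fixed mean-blend with learned GNN layers (e.g. GCN or GAT) and feed the per-loop embeddings into regression heads for latency and resource usage; this sketch only shows how loop-hierarchy structure can shape the readout.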