Gem5Pred: Predictive Approaches For Gem5 Simulation Time (2310.06290v1)

Published 10 Oct 2023 in cs.AR and cs.LG

Abstract: Gem5, an open-source, flexible, and cost-effective simulator, is widely used in both academia and industry for hardware simulation. However, simulating programs on gem5 is typically time-consuming, which underscores the need for a predictive model that can estimate simulation time; as of now, no such dataset or model exists. In response to this gap, this paper makes a novel contribution by introducing a dataset specifically created for this purpose. We also analyze the effects of different instruction types on gem5 simulation time. We then employ three distinct models leveraging CodeBERT to perform the prediction task on the developed dataset. Our best regression model achieves a Mean Absolute Error (MAE) of 0.546, while our best classification model records an accuracy of 0.696. Our models establish a foundation for future investigations on this topic, serving as benchmarks against which subsequent models can be compared. We hope that our contribution can stimulate further research in this field. The dataset we used is available at https://github.com/XueyangLiOSU/Gem5Pred.
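To make the two ingredients of the abstract concrete, here is a minimal, hypothetical sketch (not the paper's code): it tallies a few assumed instruction categories from an assembly listing, fits a one-variable least-squares line from total instruction count to measured simulation time, and evaluates it with the MAE metric the paper reports. The paper's actual models instead feed source code through CodeBERT; the category names and sample numbers below are invented for illustration only.

```python
# Hypothetical sketch of the instruction-type analysis and MAE evaluation.
# The paper's models use CodeBERT embeddings; this toy stand-in only
# illustrates the idea of predicting simulation time from code features.

def count_instruction_types(asm_lines):
    """Tally a few assumed instruction categories in an assembly listing."""
    categories = {
        "branch": {"beq", "bne", "jal", "jalr"},
        "memory": {"lw", "sw", "ld", "sd"},
        "alu": {"add", "sub", "mul", "and", "or"},
    }
    counts = {name: 0 for name in categories}
    for line in asm_lines:
        parts = line.strip().split()
        if not parts:
            continue
        op = parts[0]
        for name, ops in categories.items():
            if op in ops:
                counts[name] += 1
    return counts

def fit_line(xs, ys):
    """Ordinary least squares for y ~ slope * x + intercept (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def mean_absolute_error(ys_true, ys_pred):
    """MAE, the regression metric reported in the paper."""
    return sum(abs(t - p) for t, p in zip(ys_true, ys_pred)) / len(ys_true)

# Toy data: (total instruction count, simulated seconds) pairs -- invented.
xs = [100, 200, 400, 800]
ys = [1.1, 2.0, 4.2, 7.9]
slope, intercept = fit_line(xs, ys)
preds = [slope * x + intercept for x in xs]
print(round(mean_absolute_error(ys, preds), 3))  # -> 0.078
```

A real reproduction would replace the hand-counted features with CodeBERT embeddings of the input program and train a regression (or classification) head on the released dataset.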

Authors (4)
  1. Tian Yan
  2. Xueyang Li
  3. Sifat Ut Taki
  4. Saeid Mehrdad
