A Detailed Historical and Statistical Analysis of the Influence of Hardware Artifacts on SPEC Integer Benchmark Performance

Published 30 Jan 2024 in cs.CY and stat.AP (arXiv:2401.16690v1)

Abstract: The Standard Performance Evaluation Corporation (SPEC) CPU benchmark has been widely used as a measure of computing performance for decades. SPEC CPU is an industry-standardized, CPU-intensive benchmark suite, and its collective data provide a proxy for the history of worldwide CPU and system performance. Past efforts have not provided or enabled answers to questions such as: How has the SPEC benchmark suite evolved empirically over time, and which micro-architecture artifacts have had the most influence on performance? Have any micro-benchmarks within the suite had undue influence on the results and comparisons among the codes? Can the answers to these questions provide insights into the future of computer system performance? To answer these questions, we detail our historical and statistical analysis of the influence of specific hardware artifacts (clock frequencies, core counts, etc.) on the performance of the SPEC benchmarks since 1995. We discuss in detail several methods to normalize across benchmark evolutions. We perform both isolated and collective sensitivity analyses for various hardware artifacts, and we identify one benchmark (libquantum) that had a somewhat undue influence on performance outcomes. We also present the use of SPEC data to predict future performance.
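
To make the abstract's "isolated sensitivity analysis" concrete, the following is a minimal illustrative sketch (not taken from the paper) that regresses a SPEC integer score on two hardware artifacts using ordinary least squares. The file name spec_int_results.csv and the columns base_score, clock_mhz, and num_cores are hypothetical placeholders; real SPEC submissions would first need to be collected and normalized across suite generations (CPU95 through CPU2017), as the paper describes.

    # Minimal sketch, not the authors' method: isolated sensitivity of a SPEC
    # integer score to clock frequency and core count via a log-log regression.
    # The file name and column names below are hypothetical placeholders.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    results = pd.read_csv("spec_int_results.csv")  # hypothetical result table

    # Log scales let the fitted coefficients be read as elasticities: a
    # coefficient near 1.0 for clock_mhz means the score scales roughly
    # linearly with clock frequency over the sampled systems.
    X = np.log(results[["clock_mhz", "num_cores"]].to_numpy())
    y = np.log(results["base_score"].to_numpy())

    model = LinearRegression().fit(X, y)
    for name, coef in zip(["clock_mhz", "num_cores"], model.coef_):
        print(f"sensitivity of log(score) to log({name}): {coef:.3f}")

A collective analysis along these lines would simply include more artifacts (cache sizes, memory channels, etc.) in the same model; this is only an illustration of the general approach, not the paper's actual methodology.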
