EasyView: Bringing Performance Profiles into Integrated Development Environments (2312.16598v1)
Abstract: Dynamic program analysis (also known as profiling) is well known for its ability to identify performance inefficiencies in software packages. Although a large number of dynamic program analysis techniques have been developed in academia and industry, very few are widely used by software developers in their everyday development activities. There are three major reasons. First, dynamic analysis tools (also known as profilers) are disjoint from coding environments such as IDEs and editors; frequently switching focus between them significantly complicates the software development cycle. Second, mastering various tools well enough to interpret their analysis results requires substantial effort; worse, many tools have their own graphical user interface (GUI) designs for data presentation, which steepens the learning curve. Third, most existing tools expose few interfaces for user-defined analysis, making them less customizable to diverse user demands. We develop EasyView, a general solution that integrates the interpretation and visualization of various profiling results into coding environments, bridging software developers and profilers to provide easy, intuitive dynamic analysis during the development cycle. The novelty of EasyView is threefold. First, we develop a generic data format, which enables EasyView to support mainstream profilers for different languages. Second, we develop a set of customizable schemes to analyze and visualize profiles in intuitive ways. Third, we tightly integrate EasyView with popular coding environments, such as Microsoft Visual Studio Code, to support easy code exploration and user interaction. Our evaluation shows that EasyView is able to support various profilers for different languages and provide unique insights into performance inefficiencies in different domains.
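To make the abstract's notion of a generic, profiler-agnostic data format concrete, the sketch below models a profile as a calling context tree whose nodes carry source locations and named metrics, then folds it into per-line totals that an editor could render as inline annotations. This is a minimal illustration in TypeScript under assumed names (`Profile`, `ContextNode`, `Metric`, `perLineTotals`); it is not EasyView's actual schema or API, only one plausible shape into which output from tools such as pprof or HPCToolkit could be lowered so that a single IDE front end can display it.

```typescript
// Hypothetical profiler-agnostic profile format (illustrative only).

// One measured quantity, e.g. CPU time in nanoseconds or allocated bytes.
interface Metric {
  name: string;   // metric identifier, e.g. "cpu-time"
  unit: string;   // unit of measurement, e.g. "ns", "bytes"
  value: number;  // value accumulated at this calling context
}

// A node in the calling context tree: a source location plus its metrics.
interface ContextNode {
  functionName: string;
  file: string;           // path relative to the workspace root
  line: number;           // 1-based source line
  metrics: Metric[];
  children: ContextNode[];
}

// A whole profile: tool/language provenance plus the tree roots.
interface Profile {
  profiler: string;  // e.g. "pprof", "hpctoolkit", "async-profiler"
  language: string;  // e.g. "go", "c++", "java"
  roots: ContextNode[];
}

// Fold the calling context tree into per-source-line totals for one metric,
// the shape an editor needs to annotate source lines with hotness.
function perLineTotals(p: Profile, metric: string): Map<string, number> {
  const totals = new Map<string, number>();
  const visit = (n: ContextNode): void => {
    const key = `${n.file}:${n.line}`;
    const m = n.metrics.find((x) => x.name === metric);
    if (m) totals.set(key, (totals.get(key) ?? 0) + m.value);
    n.children.forEach(visit);
  };
  p.roots.forEach(visit);
  return totals;
}
```

In such a design, each profiler needs only a small converter into the shared format, while the analysis and visualization logic in the coding environment remains tool- and language-independent.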
Authors: Qidong Zhao, Milind Chabbi, Xu Liu