
DFI: An Interprocedural Value-Flow Analysis Framework that Scales to Large Codebases (2209.02638v1)

Published 6 Sep 2022 in cs.PL and cs.SE

Abstract: Context- and flow-sensitive value-flow information is an important building block for many static analysis tools. Unfortunately, current approaches to compute value-flows do not scale to large codebases, due to high memory and runtime requirements. This paper proposes a new scalable approach to compute value-flows via graph reachability. To this end, we develop a new graph structure as an extension of LLVM IR that contains two additional operations which significantly simplify the modeling of pointer aliasing. Further, by processing nodes in the opposite direction of SSA def-use chains, we are able to minimize the tree width of the resulting graph. This allows us to employ efficient tree traversal algorithms in order to resolve graph reachability. We present a value-flow analysis framework, DFI, implementing our approach. We compare DFI against two state-of-the-art value-flow analysis frameworks, Phasar and SVF, to extract value-flows from 4 real-world software projects. Given 32GB of memory, Phasar and SVF are unable to complete analysis of larger projects such as OpenSSL or FFmpeg, while DFI is able to complete all evaluations. For the subset of benchmarks that Phasar and SVF do handle, DFI requires significantly less memory (1.5% of Phasar's, 6.4% of SVF's memory footprint on average) and runs significantly faster (23x speedup over Phasar, 57x compared to SVF). Our analysis shows that, in contrast to previous approaches, DFI's memory and runtime requirements scale almost linearly with the number of analyzed instructions.
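
The abstract frames value-flow analysis as graph reachability over a def-use-style graph derived from LLVM IR. The C++ sketch below is a rough illustration of that framing only: it answers a source-to-sink reachability query over a toy value-flow graph with a worklist DFS. The `ValueFlowGraph` type, integer node ids, and traversal are assumptions made for illustration and do not reflect DFI's actual IR extension, reverse-SSA node ordering, or tree-traversal algorithms.

```cpp
// Minimal sketch: value-flow reachability over an SSA-style def-use graph.
// Names and representation are illustrative, not DFI's actual data structures.
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct ValueFlowGraph {
    // Adjacency list: for each definition (node id), the nodes its value flows to.
    std::unordered_map<int, std::vector<int>> flowEdges;

    void addFlow(int def, int use) { flowEdges[def].push_back(use); }

    // Graph reachability: does the value defined at `source` flow to `sink`?
    bool reaches(int source, int sink) const {
        std::unordered_set<int> visited;
        std::vector<int> worklist{source};
        while (!worklist.empty()) {
            int node = worklist.back();
            worklist.pop_back();
            if (node == sink) return true;
            if (!visited.insert(node).second) continue;  // already explored
            auto it = flowEdges.find(node);
            if (it == flowEdges.end()) continue;
            for (int succ : it->second) worklist.push_back(succ);
        }
        return false;
    }
};

int main() {
    // Toy example: %a -> %b -> %c, plus an unrelated definition %d.
    ValueFlowGraph g;
    g.addFlow(/*%a=*/0, /*%b=*/1);
    g.addFlow(/*%b=*/1, /*%c=*/2);
    g.addFlow(/*%d=*/3, /*%c=*/2);
    return g.reaches(0, 2) ? 0 : 1;  // exits 0: %a's value reaches %c
}
```

The paper's contribution lies in how this graph is constructed and ordered (the two extra operations for aliasing and the reverse-SSA processing that keeps tree width small), which a flat DFS like the one above does not capture.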

Citations (2)
