
SlowFuzz: Automated Domain-Independent Detection of Algorithmic Complexity Vulnerabilities (1708.08437v1)

Published 28 Aug 2017 in cs.CR

Abstract: Algorithmic complexity vulnerabilities occur when the worst-case time/space complexity of an application is significantly higher than the respective average case for particular user-controlled inputs. When such conditions are met, an attacker can launch Denial-of-Service attacks against a vulnerable application by providing inputs that trigger the worst-case behavior. Such attacks have been known to have serious effects on production systems, take down entire websites, or lead to bypasses of Web Application Firewalls. Unfortunately, existing detection mechanisms for algorithmic complexity vulnerabilities are domain-specific and often require significant manual effort. In this paper, we design, implement, and evaluate SlowFuzz, a domain-independent framework for automatically finding algorithmic complexity vulnerabilities. SlowFuzz automatically finds inputs that trigger worst-case algorithmic behavior in the tested binary. SlowFuzz uses resource-usage-guided evolutionary search techniques to automatically find inputs that maximize computational resource utilization for a given application.

Citations (184)

Summary

  • The paper introduces SlowFuzz, a domain-independent framework using resource-usage guidance and evolutionary search to automate the detection of algorithmic complexity vulnerabilities.
  • SlowFuzz achieved significant slowdowns in various applications, including a 300x slowdown in bzip2, demonstrating superior effectiveness compared to coverage-based testing.
  • The SlowFuzz methodology suggests potential for detecting other resource-based vulnerabilities, with future work focusing on integrating static analysis and improving instrumentation.

Analysis of SlowFuzz: Automated Domain-Independent Detection of Algorithmic Complexity Vulnerabilities

This paper introduces SlowFuzz, a framework for efficiently detecting algorithmic complexity vulnerabilities. The significance of the work lies in its domain-independent approach, which contrasts with existing detection mechanisms that are highly domain-specific and demand substantial manual effort.

Algorithmic complexity vulnerabilities arise when certain user-controlled inputs drive computational resource usage far above the average case, allowing attackers to mount Denial-of-Service (DoS) attacks with crafted inputs. Traditional detection techniques do not scale across domains because of their specificity, motivating generalizable methods like SlowFuzz. A concrete instance of the vulnerability class is sketched below.
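
One classic instance is catastrophic backtracking in a regular-expression engine (a category the paper later demonstrates against PCRE). The following minimal Python sketch, illustrative only and not taken from the paper, shows how a short pathological input pushes a backtracking matcher from instant to multi-second matching:

```python
import re
import time

# Nested quantifiers force the backtracking engine to try exponentially
# many partitions of the input before the overall match finally fails.
pattern = re.compile(r"(a+)+$")

for n in (20, 22, 24, 26):
    text = "a" * n + "!"            # trailing "!" guarantees a failed match
    start = time.perf_counter()
    pattern.match(text)
    print(f"n={n}: {time.perf_counter() - start:.3f}s")  # roughly doubles per step
```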

SlowFuzz's methodology centers on resource-usage guidance to identify inputs that trigger worst-case computational behavior. The tool employs an evolutionary search technique designed to maximize resource utilization, such as instruction counts and memory consumption. The key novelty is SlowFuzz's ability to find high-impact inputs autonomously, without relying on domain-specific knowledge.
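
The selection logic can be sketched in a few lines. The following is a minimal, self-contained Python approximation, not SlowFuzz itself (the actual tool is built as an extension of libFuzzer and measures executed instructions through SanitizerCoverage): a toy target reports a comparison count as its resource-usage proxy, and mutants survive only when they increase that count.

```python
import random

def instrumented_insertion_sort(data):
    """Toy target: returns a comparison count as a stand-in for the
    instruction counts SlowFuzz collects via SanitizerCoverage."""
    arr, ops = list(data), 0
    for i in range(1, len(arr)):
        j = i
        while j > 0:
            ops += 1
            if arr[j - 1] <= arr[j]:
                break
            arr[j - 1], arr[j] = arr[j], arr[j - 1]
            j -= 1
    return ops

def mutate(data):
    out = bytearray(data)
    out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

# Resource-usage-guided evolutionary loop: a mutant joins the corpus only
# when it makes the target perform strictly more work than any prior input.
corpus = [bytes(random.randrange(256) for _ in range(64))]
best = max(instrumented_insertion_sort(x) for x in corpus)
for _ in range(5000):
    child = mutate(random.choice(corpus))
    work = instrumented_insertion_sort(child)
    if work > best:
        corpus.append(child)
        best = work

print("worst observed work:", best)  # theoretical maximum is 64*63/2 = 2016
```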

The paper provides substantial results demonstrating SlowFuzz's effectiveness on both well-understood algorithms and real-world applications. For sorting algorithms such as quicksort, SlowFuzz generates inputs approaching the theoretical worst case, achieving a 5.12x slowdown for a simple quicksort implementation. On real-world targets it induced a 300x slowdown in the bzip2 decompression routine, revealed regular expressions with exponential matching time in the PCRE library, and identified high-collision inputs for PHP's default hashtable implementation.
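
The quicksort behavior is easy to sanity-check by hand for the textbook variant that always picks the first element as pivot: an already-sorted array degenerates partitioning to the quadratic worst case. This illustrative snippet (mine, not the paper's benchmark) counts comparisons for a random input versus a sorted one:

```python
import random

def quicksort_comparisons(arr):
    """Textbook quicksort with a first-element pivot; returns the number
    of element comparisons performed while sorting `arr` in place."""
    comparisons = 0

    def qs(lo, hi):
        nonlocal comparisons
        if lo >= hi:
            return
        pivot, mid = arr[lo], lo
        for i in range(lo + 1, hi + 1):
            comparisons += 1
            if arr[i] < pivot:
                mid += 1
                arr[mid], arr[i] = arr[i], arr[mid]
        arr[lo], arr[mid] = arr[mid], arr[lo]
        qs(lo, mid - 1)
        qs(mid + 1, hi)

    qs(0, len(arr) - 1)
    return comparisons

n = 256
print(quicksort_comparisons(random.sample(range(n), n)))  # order n*log(n): a few thousand
print(quicksort_comparisons(list(range(n))))              # worst case n*(n-1)/2 = 32640
```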

Further analysis compares SlowFuzz with coverage-based methods, showing that resource-usage-driven search roughly doubles the efficacy of coverage-based search. This highlights the tool's focus on its actual goal, identifying complexity vulnerabilities, rather than simply expanding execution paths. The evolutionary search employed by SlowFuzz, backed by a fitness function based on computational resource metrics, enables more targeted discovery of such vulnerabilities than traditional fuzzers.
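
The contrast between the two selection criteria can be made explicit. In the schematic below (hypothetical names and types, not SlowFuzz's actual API), a coverage-guided corpus keeps any input that reaches new edges however cheap it is, while a resource-guided corpus keeps an input only when it raises the observed work:

```python
from dataclasses import dataclass, field

@dataclass
class Run:
    """Hypothetical record of one fuzzing trial."""
    edges: set
    instruction_count: int

@dataclass
class Corpus:
    seen_edges: set = field(default_factory=set)
    best_count: int = 0

    def coverage_guided_keep(self, run: Run) -> bool:
        # Traditional greybox criterion: any new edge counts, cost ignored.
        new = run.edges - self.seen_edges
        self.seen_edges |= run.edges
        return bool(new)

    def resource_guided_keep(self, run: Run) -> bool:
        # SlowFuzz-style criterion: strictly more work than any prior input.
        if run.instruction_count > self.best_count:
            self.best_count = run.instruction_count
            return True
        return False

corpus = Corpus()
cheap_but_new = Run(edges={1, 2, 3}, instruction_count=100)
expensive_but_known = Run(edges={1, 2}, instruction_count=10_000)
print(corpus.coverage_guided_keep(cheap_but_new))        # True: new edges
print(corpus.resource_guided_keep(expensive_but_known))  # True: more work
```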

Beyond its technical merit, the broader implication is that SlowFuzz's optimization-focused approach could pave the way for finding other resource-based vulnerabilities. The search methodology is abstract enough that it could plausibly be extended to other facets of resource exploitation, such as energy consumption or network bandwidth.

Despite these achievements, the paper acknowledges limitations and directions for future work, such as integrating static analysis to complement dynamic testing and to better guide mutation operations. Instrumentation accuracy still relies largely on SanitizerCoverage's eight-bit counters; employing more precise profiling methods could improve both precision and effectiveness.
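
Why the eight-bit counters matter can be seen with a one-liner. Assuming, for illustration, that a counter clamps at 255 (real counters may clamp or wrap, but either way values beyond eight bits alias), blocks executed 300 and 30,000 times become indistinguishable, blunting the fitness signal on exactly the hot paths that matter most for complexity vulnerabilities:

```python
def eight_bit(true_count: int) -> int:
    # Illustrative saturating counter: anything past 255 looks identical.
    return min(true_count, 255)

true_counts = [40, 300, 30_000]             # true executions of three blocks
print([eight_bit(c) for c in true_counts])  # -> [40, 255, 255]
```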

In conclusion, SlowFuzz is an important advance in software security, offering a robust and flexible approach to detecting algorithmic complexity vulnerabilities with broad applicability. It demonstrates the potential of evolutionary, resource-guided search for fuzz testing across domains, laying groundwork for both immediate practical applications and longer-term work on managing computational resource misuse.