
MTFuzz: Fuzzing with a Multi-Task Neural Network (2005.12392v2)

Published 25 May 2020 in cs.SE

Abstract: Fuzzing is a widely used technique for detecting software bugs and vulnerabilities. Most popular fuzzers generate new inputs using an evolutionary search to maximize code coverage. Essentially, these fuzzers start with a set of seed inputs, mutate them to generate new inputs, and identify the promising inputs using an evolutionary fitness function for further mutation. Despite their success, evolutionary fuzzers tend to get stuck in long sequences of unproductive mutations. In recent years, ML-based mutation strategies have reported promising results. However, the existing ML-based fuzzers are limited by the lack of quality and diversity of the training data. As the input space of the target programs is high dimensional and sparse, it is prohibitively expensive to collect many diverse samples demonstrating successful and unsuccessful mutations to train the model. In this paper, we address these issues by using a Multi-Task Neural Network that can learn a compact embedding of the input space based on diverse training samples for multiple related tasks (i.e., predicting different types of coverage). The compact embedding can guide the mutation process by focusing most of the mutations on the parts of the embedding where the gradient is high. MTFuzz uncovers 11 previously unseen bugs and achieves an average of 2x more edge coverage compared with 5 state-of-the-art fuzzers on 10 real-world programs.
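The core idea above, training coverage-prediction heads on a shared embedding and then mutating the input bytes with the largest gradients, can be sketched as follows. This is a hypothetical toy illustration, not the paper's implementation: the "network" is reduced to fixed per-task linear weights (so each byte's gradient is simply its weight), and the function and variable names are invented for this sketch.

```python
import random

# Toy stand-in for MTFuzz's multi-task network: one linear "head" per
# coverage task over the raw input bytes. Real MTFuzz trains a shared
# embedding plus task heads on (input, coverage) samples; the fixed
# weights here exist only to illustrate gradient-guided byte selection.
INPUT_LEN = 8
TASKS = {
    "edge_cov": [0.9, 0.1, 0.0, 0.05, 0.0, 0.0, 0.0, 0.0],
    "ctx_cov":  [0.0, 0.0, 0.8, 0.00, 0.0, 0.1, 0.0, 0.0],
}

def saliency(tasks):
    """Aggregate |gradient| per input byte across all task heads.
    For a linear head, d(output)/d(byte i) is just the weight w[i]."""
    return [sum(abs(w[i]) for w in tasks.values()) for i in range(INPUT_LEN)]

def hot_bytes(tasks, k=2):
    """Indices of the k bytes with the largest aggregate gradient."""
    s = saliency(tasks)
    return sorted(range(INPUT_LEN), key=lambda i: -s[i])[:k]

def mutate(seed, tasks, k=2, rng=random.Random(0)):
    """Focus mutations on the high-gradient ('hot') byte positions only."""
    out = bytearray(seed)
    for i in hot_bytes(tasks, k):
        out[i] = rng.randrange(256)
    return bytes(out)

seed = bytes(range(INPUT_LEN))
print(hot_bytes(TASKS))   # bytes 0 and 2 carry the largest gradients
print(mutate(seed, TASKS))
```

With the weights above, bytes 0 and 2 dominate the aggregate gradient, so mutation effort concentrates there instead of being spread uniformly across the input, which is the intuition behind using the multi-task gradients to prune the sparse input space.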

Authors (5)
  1. Dongdong She (14 papers)
  2. Rahul Krishna (28 papers)
  3. Lu Yan (14 papers)
  4. Suman Jana (50 papers)
  5. Baishakhi Ray (88 papers)
Citations (52)
