TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing (1807.10875v1)

Published 28 Jul 2018 in stat.ML and cs.LG

Abstract: Machine learning models are notoriously difficult to interpret and debug. This is particularly true of neural networks. In this work, we introduce automated software testing techniques for neural networks that are well-suited to discovering errors which occur only for rare inputs. Specifically, we develop coverage-guided fuzzing (CGF) methods for neural networks. In CGF, random mutations of inputs to a neural network are guided by a coverage metric toward the goal of satisfying user-specified constraints. We describe how fast approximate nearest neighbor algorithms can provide this coverage metric. We then discuss the application of CGF to the following goals: finding numerical errors in trained neural networks, generating disagreements between neural networks and quantized versions of those networks, and surfacing undesirable behavior in character-level language models. Finally, we release an open source library called TensorFuzz that implements the described techniques.

Citations (305)

Summary

  • The paper introduces a novel approach that adapts coverage-guided fuzzing to neural networks by measuring semantic coverage through neural activations.
  • It presents TensorFuzz, an open-source tool that successfully identifies numerical instabilities and discrepancies between full precision and quantized models.
  • Empirical results on character-level language models demonstrate TensorFuzz’s ability to reveal issues that surface only in production, signaling its potential for safer AI deployments.

Coverage-Guided Fuzzing for Neural Network Debugging: An Analysis of TensorFuzz

The paper "TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing" addresses the pervasive challenge of debugging neural networks by adapting coverage-guided fuzzing (CGF), commonly used in traditional software testing, to the domain of neural networks. Neural networks' lack of transparency and the difficulty of formal verification, especially given their pervasive use in critical applications, necessitates robust methods for identifying rare input-induced errors. This paper presents an innovative approach to this challenge by leveraging CGF to uncover errors, quantify differences between models, and highlight undesirable behaviors in LLMs using a tool named TensorFuzz.

Core Contributions

The paper articulates several contributions that emphasize its methodological and practical implications:

  1. Adaptation of CGF for Neural Networks: The introduction of CGF for neural networks stands as the primary contribution. Instead of employing traditional code coverage metrics, this approach uses fast approximate nearest neighbor algorithms to define a measure of semantic coverage over neural activations, enabling the fuzzer to surface rare, input-dependent errors.
  2. Open Source Tool - TensorFuzz: The implementation of these techniques into an open-source library, TensorFuzz, facilitates broader usage and further development by the research community.
  3. Empirical Validation: The paper demonstrates TensorFuzz's functionality through various experiments: identifying numerical inconsistencies in trained networks, discerning discrepancies between full precision and quantized models, and highlighting repetitive or undesirable outputs in character-level LLMs.
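The contributions above can be illustrated with a minimal sketch of a CGF loop. This is not TensorFuzz's actual API: the model, mutation scheme, and threshold are toy assumptions, and a brute-force distance check stands in for the paper's approximate nearest neighbor lookup. An input "adds coverage" when its activation vector lies far from every activation seen so far, and only such inputs are kept in the corpus for further mutation.

```python
import numpy as np

def cgf_loop(model_activations, seed, n_iters=1000, threshold=0.5, rng=None):
    """Minimal coverage-guided fuzzing loop (illustrative, not TensorFuzz's API).

    model_activations: maps an input array to a 1-D activation vector.
    Coverage rule: a mutant is kept if its activation vector is farther than
    `threshold` (Euclidean) from every previously kept vector -- a brute-force
    stand-in for the fast approximate nearest-neighbor check in the paper.
    """
    rng = rng or np.random.default_rng(0)
    corpus = [seed]                          # inputs that added coverage
    kept = [model_activations(seed)]         # their activation vectors
    for _ in range(n_iters):
        parent = corpus[rng.integers(len(corpus))]   # pick a corpus element
        mutant = parent + rng.normal(0.0, 0.5, size=parent.shape)  # random mutation
        act = model_activations(mutant)
        dists = np.linalg.norm(np.stack(kept) - act, axis=1)
        if dists.min() > threshold:          # new coverage -> keep the mutant
            corpus.append(mutant)
            kept.append(act)
    return corpus

# Toy "model": the activation vector is just a nonlinearity of the input.
corpus = cgf_loop(lambda x: np.tanh(x), seed=np.zeros(4))
```

In the real tool, the kept corpus is also checked against a user-specified objective (e.g. "outputs contain NaN"), so coverage growth and error search proceed together.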

Detailed Examination and Results

The experimentation section illustrates TensorFuzz's potential in various scenarios:

  • Numerical Errors: TensorFuzz effectively identifies inputs that cause trained models to produce non-finite outputs, such as NaNs, outperforming random search in floating-point arithmetic contexts.
  • Quantized Models: Fuzzing full-precision models alongside their quantized versions surfaced disagreements that evaluation on benchmark datasets alone did not reveal, demonstrating TensorFuzz's value for vetting model optimizations like quantization.
  • Language Model Behavior: By fuzzing character-level RNNs, TensorFuzz surfaced repetitive or blacklisted words, illustrating its effectiveness in spotting production-time issues unseen during training.
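The first two experiments reduce to simple objective functions that the fuzzer evaluates on each mutant's outputs. The sketch below shows hedged, illustrative versions: the non-finite check mirrors the paper's numerical-error objective, while the quantization check uses a toy linear model and float16 truncation as a stand-in for the paper's 32-bit vs. 16-bit comparison; the variable names and the example weights are invented for illustration.

```python
import numpy as np

def nonfinite_objective(outputs):
    """Flags an input whose model outputs contain NaN or inf
    (the numerical-error objective from the paper)."""
    return bool(not np.all(np.isfinite(outputs)))

def quantization_disagreement(weights, x):
    """Compares a toy float32 linear model against a float16-truncated copy
    and flags inputs where the argmax predictions differ (an illustrative
    stand-in for fuzzing a model against its quantized version)."""
    full = x @ weights                                        # float32 logits
    quant = x @ weights.astype(np.float16).astype(np.float32)  # truncated weights
    return int(np.argmax(full)) != int(np.argmax(quant))

# Inputs near a decision boundary are where truncation is likeliest to flip
# the prediction: two logits that are nearly tied in float32 collapse to a
# tie (resolved differently) after float16 rounding.
w = np.array([[1.0, 1.0 + 1e-4]], dtype=np.float32)
x = np.ones((1, 1), dtype=np.float32)
```

A fuzzer plugs objectives like these into its loop and reports any corpus element for which the objective fires, which is why it finds disagreements that a fixed benchmark set misses.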

Implications and Future Directions

The proposal of CGF for neural networks indicates a significant shift towards more rigorous testing methodologies akin to traditional software engineering practice. The CGF method, along with TensorFuzz, could prove instrumental in developing safer AI systems, especially as models become further integrated into safety-critical environments. Future research may focus on optimizing coverage metrics, broadening the types of neural networks amenable to fuzzing, and extending the implementation to multi-GPU configurations for greater computational efficiency.

TensorFuzz establishes a foundation upon which more refined fuzzing techniques can be built, paving the way for more robust and fail-safe neural network applications. As neural networks continue to evolve, so too must our methods for ensuring their safety and reliability. This paper represents a crucial step in that direction, advocating not only for adaptation but for the adoption of more rigorous testing and verification standards in neural network deployment.