Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications

Published 30 Jun 2020 in cs.LG and stat.ML | (2007.00147v1)

Abstract: Recent work has shown that it is possible to learn neural networks with provable guarantees on the output of the model when subject to input perturbations, however these works have focused primarily on defending against adversarial examples for image classifiers. In this paper, we study how these provable guarantees can be naturally applied to other real world settings, namely getting performance specifications for robust virtual sensors measuring fuel injection quantities within an engine. We first demonstrate that, in this setting, even simple neural network models are highly susceptible to reasonable levels of adversarial sensor noise, which are capable of increasing the mean relative error of a standard neural network from 6.6% to 43.8%. We then leverage methods for learning provably robust networks and verifying robustness properties, resulting in a robust model which we can provably guarantee has at most 16.5% mean relative error under any sensor noise. Additionally, we show how specific intervals of fuel injection quantities can be targeted to maximize robustness for certain ranges, allowing us to train a virtual sensor for fuel injection which is provably guaranteed to have at most 10.69% relative error under noise while maintaining 3% relative error on non-adversarial data within normalized fuel injection ranges of 0.6 to 1.0.

Citations (7)

Summary

  • The paper introduces a provably robust training approach that computes duality-based performance bounds to guarantee sensor reliability under adversarial noise.
  • It demonstrates that standard neural models suffer steep error increases under perturbations, while the robust model limits worst-case mean relative error to 16.84%.
  • The study explores targeted robust training to achieve improved accuracy in critical injection ranges, reducing test errors to 3% with bounded worst-case performance.

Provable Performance of Neural Network Virtual Sensors

This paper addresses the problem of ensuring the reliability and robustness of neural networks used as virtual sensors in engine controllers. It explores the application of provably robust training methods to guarantee the performance of these sensors under noisy conditions, a critical factor for fuel efficiency and engine safety. The work demonstrates that even simple neural network models are vulnerable to adversarial sensor noise and proposes a method to train models with provable guarantees on their performance.

Background and Motivation

The adoption of neural networks in safety-critical applications has been hindered by the lack of performance guarantees and interpretability. Adversarial examples highlight the brittleness of these models, raising concerns about their reliability in real-world scenarios. While provably robust training methods have been developed to address this issue, their application has primarily focused on vision and language domains. This paper extends these methods to a regression problem relevant to the automotive industry: learning a virtual sensor for fuel injection quantities.

Approach

The core idea involves adapting duality-based methods for learning provably robust networks [Wong et al., 2018] to the regression setting. This approach allows computing a bound $J(x; c)$ on the worst-case output of the network subject to a perturbation set $\mathcal{B}(x)$ around an input $x$:

$$\max_{z \in \mathcal{B}(x)} f(z) \cdot c \leq J(x; c)$$

Computing both lower and upper bounds on the network output in this way yields a bound on the mean squared error, which is then used as the training objective for the robust regression model. This provides guaranteed certificates on the output of the neural network.
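To make this objective concrete, the sketch below shows one way certified output bounds could be turned into a worst-case regression loss. It is a minimal illustration, assuming a hypothetical `output_bounds(x, eps)` routine that returns certified lower and upper bounds on the scalar network output under the noise budget (e.g., from a duality-based relaxation); the interface and names are not the paper's API.

```python
import torch

def robust_mse_loss(lower, upper, target):
    """Worst-case squared error given certified output bounds.

    lower, upper: certified bounds on f(z) for all z in B(x), shape (batch,)
    target: ground-truth fuel injection quantity, shape (batch,)

    For a scalar output, the squared error over the interval [lower, upper]
    is maximized at whichever endpoint lies farther from the target.
    """
    worst = torch.maximum((lower - target) ** 2, (upper - target) ** 2)
    return worst.mean()

def robust_training_step(model, optimizer, x, y, eps):
    """One training step on the bound-based loss (illustrative interface)."""
    optimizer.zero_grad()
    lower, upper = model.output_bounds(x, eps)  # hypothetical certified-bound call
    loss = robust_mse_loss(lower, upper, y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Minimizing this upper bound on the squared error is what allows the trained model to carry a certificate: the guaranteed worst-case error can never exceed the bound achieved during verification.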

Experiments and Results

The experiments are conducted on a fuel injection dataset with time-series sensor readings. The vulnerability of standard neural networks to adversarial examples is demonstrated, with the mean relative error increasing from 6.6% to 43.8% under perturbed sensor readings (Figure 1).

Figure 1: An example of an adversarially perturbed time series sequence for a non-robust fuel injection controller, where the horizontal axis denotes time and the vertical axis denotes the sensor reading.
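As an illustration of how such empirical vulnerability could be measured, the following is a minimal projected gradient descent (PGD) sketch that searches for an $\ell_\infty$-bounded sensor perturbation maximizing the relative error of a regression network. The noise budget `eps`, step count, and step size are placeholders, and this is not necessarily the exact attack used in the paper.

```python
import torch

def pgd_relative_error(model, x, y, eps, steps=40, alpha=None):
    """Estimate worst-case mean relative error under ||delta||_inf <= eps."""
    alpha = alpha if alpha is not None else 2.5 * eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        pred = model(x + delta)
        # Maximize the mean relative error |f(x + delta) - y| / |y|.
        loss = ((pred - y).abs() / y.abs().clamp_min(1e-8)).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent step
            delta.clamp_(-eps, eps)             # project back into the noise budget
        delta.grad.zero_()
    with torch.no_grad():
        pred = model(x + delta)
        return ((pred - y).abs() / y.abs().clamp_min(1e-8)).mean().item()
```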

The authors then train a provably robust model, reducing the mean relative error under adversarial perturbations to 16.40%. The duality-based bound certifies a worst-case mean relative error of at most 16.84%, a substantial improvement over the baselines. The paper further explores targeted robust training, in which robustness is prioritized within a limited output range: this yields 3% relative error on non-adversarial data with a certified worst-case relative error of at most 10.69% for normalized fuel injection quantities between 0.6 and 1.0 (Figure 2).

Figure 2: Test performance of a model targeted to be robust over the higher range of fuel injection quantities.
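One plausible way to realize such targeting is to apply the worst-case penalty only to samples whose normalized target falls in the prioritized range, as in the sketch below. This masking scheme is an assumption for illustration; the paper's actual weighting may differ.

```python
import torch

def targeted_robust_loss(pred, lower, upper, target, lo=0.6, hi=1.0):
    """Standard MSE everywhere, plus a worst-case term only inside [lo, hi].

    pred: nominal predictions on clean inputs, shape (batch,)
    lower, upper: certified output bounds under sensor noise, shape (batch,)
    target: normalized fuel injection quantities, shape (batch,)
    """
    standard = (pred - target) ** 2
    worst = torch.maximum((lower - target) ** 2, (upper - target) ** 2)
    in_range = ((target >= lo) & (target <= hi)).float()
    # Outside the targeted range the model is only trained for nominal accuracy.
    return (standard + in_range * worst).mean()
```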

Implications and Future Directions

This work demonstrates the feasibility of using provably robust training methods for real-world regression problems beyond image classification. By providing performance guarantees under noisy conditions, this approach enhances the reliability and trustworthiness of neural network models in safety-critical applications. The targeted robust training technique offers a way to balance trade-offs between standard performance and robustness, allowing for customization based on specific application requirements. Future research could explore the use of more complex network architectures and the development of tighter verification bounds to further improve the performance and guarantees of robust virtual sensors.

Conclusion

This paper successfully adapts and applies provably robust training methods to the problem of learning virtual sensors for fuel injection quantities. The results demonstrate the vulnerability of standard models to adversarial noise and the effectiveness of robust training in providing performance guarantees. The targeted robust training approach offers a practical solution for achieving both high accuracy and robustness within specific operating ranges.
