Neural Additive Models: Interpretable Machine Learning with Neural Nets (2004.13912v2)

Published 29 Apr 2020 in cs.LG, cs.AI, and stat.ML

Abstract: Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks. However, their accuracy comes at the cost of intelligibility: it is usually unclear how they make their decisions. This hinders their applicability to high stakes decision-making domains such as healthcare. We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models. NAMs learn a linear combination of neural networks that each attend to a single input feature. These networks are trained jointly and can learn arbitrarily complex relationships between their input feature and the output. Our experiments on regression and classification datasets show that NAMs are more accurate than widely used intelligible models such as logistic regression and shallow decision trees. They perform similarly to existing state-of-the-art generalized additive models in accuracy, but are more flexible because they are based on neural nets instead of boosted trees. To demonstrate this, we show how NAMs can be used for multitask learning on synthetic data and on the COMPAS recidivism data due to their composability, and demonstrate that the differentiability of NAMs allows them to train more complex interpretable models for COVID-19.

Authors (7)
  1. Rishabh Agarwal (47 papers)
  2. Levi Melnick (3 papers)
  3. Nicholas Frosst (10 papers)
  4. Xuezhou Zhang (36 papers)
  5. Ben Lengerich (7 papers)
  6. Rich Caruana (42 papers)
  7. Geoffrey Hinton (38 papers)
Citations (359)

Summary

The paper presents Neural Additive Models (NAMs), a novel approach that aims to achieve both interpretability and accuracy by combining the flexibility of deep neural networks (DNNs) with the transparency of Generalized Additive Models (GAMs). To tackle the common "black-box" problem associated with DNNs, a NAM learns a linear combination of neural networks, each attending to a single input feature, so the model remains interpretable by construction. This is an ambitious attempt to leverage the strengths of DNNs while maintaining the interpretability essential for critical domains such as healthcare, finance, and criminal justice.

NAMs belong to the family of GAMs: each component function of the model is a neural network that learns the contribution of a single input feature independently. This structure provides a clear interpretability advantage: the influence of each feature on the output is separated, allowing feature impacts to be visualized and understood directly.
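
Concretely, a NAM keeps the standard GAM functional form, with each shape function parameterized by its own network. Here $g$ is a link function (e.g., the logistic function for binary classification) and $x = (x_1, \dots, x_K)$ is the input:

```latex
g\big(\mathbb{E}[y]\big) = \beta + f_1(x_1) + f_2(x_2) + \cdots + f_K(x_K)
```

Each $f_i$ is learned by a dedicated neural network, which is what distinguishes NAMs from tree-based GAMs.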

Key Contributions

  1. Model Design and Training: NAMs introduce an architecture in which each input feature is fed to its own neural network that learns a shape function; the shape-function outputs are summed to produce the prediction (see the sketch after this list). Because every component is differentiable, the networks are trained jointly via backpropagation and can fit arbitrarily complex shape functions.
  2. Comparative Performance: The paper reports that NAMs achieve accuracy comparable to existing state-of-the-art GAMs based on boosted trees, while being more flexible due to their neural-network parameterization. This is supported by experiments across regression and classification tasks, in which NAMs typically outperform simpler interpretable models such as logistic regression and shallow decision trees.
  3. Regularization Techniques: To ensure that the learned shape functions are neither overly smooth nor needlessly jumpy, several regularization techniques are applied: dropout, weight decay, an output penalty (an L2 penalty on each feature network's output), and feature dropout.
  4. Practical Application: The utility of NAMs is demonstrated in multitask learning and parameter generation problems, extending their usage to settings beyond traditional GAMs. For example, NAMs can be flexibly applied to multitask learning for datasets like COMPAS, elucidating separate relationships for different demographic groups.
  5. Interpretability and Flexibility: NAMs foster interpretability by making the exact decision-making process of the model transparent. Each shape plot serves as an exact representation of how predictions are made, which is critical in high-stakes applications.
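
As referenced in item 1, the following is a minimal PyTorch sketch of the architecture: one small MLP per feature, whose outputs are summed with a learned bias. The `FeatureNet` and `NAM` class names, layer sizes, and dropout rates are illustrative assumptions, not the paper's exact configuration (the paper also proposes ExU units, sketched later).

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Small MLP mapping one scalar feature to its shape-function value f_i(x_i)."""
    def __init__(self, hidden=64, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)

class NAM(nn.Module):
    """Sum of independently parameterized per-feature networks plus a bias."""
    def __init__(self, num_features, feature_dropout=0.05):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            [FeatureNet() for _ in range(num_features)]
        )
        # Feature dropout: randomly zeroes entire feature contributions.
        self.feature_dropout = nn.Dropout(feature_dropout)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, num_features)
        # Each network sees only its own feature column.
        contribs = torch.cat(
            [net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)],
            dim=1,
        )  # (batch, num_features)
        contribs = self.feature_dropout(contribs)
        logits = self.bias + contribs.sum(dim=1)
        return logits, contribs  # contribs are the plottable shape-function values
```

For binary classification, `logits` would pass through a sigmoid with a cross-entropy loss, and the paper's output penalty can be approximated by adding an L2 term such as `lam * contribs.pow(2).mean()` to the loss (`lam` is a hypothetical hyperparameter name). Per-example `contribs` are exactly what gets plotted as shape functions.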

Implications and Future Directions

The introduction of NAMs signals an important step towards making neural networks interpretable, thereby broadening their applicability in trust-critical fields. Their differentiable and modular architecture allows NAMs to serve as components in larger neural networks while retaining interpretability. Moreover, NAMs could potentially be integrated with other deep learning paradigms to create hybrid models that balance accuracy and intelligibility.

Future research could focus on enhancing the expressivity of NAMs by efficiently incorporating interactions between features, and on evaluating their performance on more complex datasets, especially in domains such as computer vision or natural language processing, where feature-level interpretability is more challenging. Exploring non-standard activation functions and initializations, such as the ExU units that have shown promise in learning jumpy functions, could also widen the applicability of NAMs.
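
For reference, the paper defines ExU ("exp-centered") units as $h(x) = f(e^w (x - b))$, where exponentiating the weight lets a unit change its output sharply for small input changes. Below is a minimal sketch, assuming the paper's ReLU-n activation and roughly its suggested weight initialization; treat the exact constants as tunable assumptions.

```python
import torch
import torch.nn as nn

class ExU(nn.Module):
    """Exp-centered unit: h(x) = f(exp(w) * (x - b)).

    The exponentiated weight produces large output changes for small
    input changes, which helps fit sharp ("jumpy") shape functions.
    """
    def __init__(self, in_features, out_features, n=1.0):
        super().__init__()
        self.w = nn.Parameter(torch.empty(in_features, out_features))
        self.b = nn.Parameter(torch.empty(in_features))
        self.n = n  # cap for the ReLU-n activation
        # The paper suggests initializing weights around N(4, 0.5),
        # so exp(w) starts large enough to model steep transitions.
        nn.init.normal_(self.w, mean=4.0, std=0.5)
        nn.init.normal_(self.b, std=0.5)

    def forward(self, x):  # x: (batch, in_features)
        out = (x - self.b) @ torch.exp(self.w)  # (batch, out_features)
        return torch.clamp(out, 0.0, self.n)    # ReLU-n activation
```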

In conclusion, NAMs provide a promising template for interpretable neural networks, blending flexibility, transparency, and competitive accuracy in a design that respects the needs of end users who demand interpretability without sacrificing the robust functionality of neural networks.
