
Neural Basis Models for Interpretability (2205.14120v4)

Published 27 May 2022 in cs.LG and cs.CV

Abstract: Due to the widespread use of complex machine learning models in real-world applications, it is becoming critical to explain model predictions. However, these models are typically black-box deep neural networks, explained post-hoc via methods with known faithfulness limitations. Generalized Additive Models (GAMs) are an inherently interpretable class of models that address this limitation by learning a non-linear shape function for each feature separately, followed by a linear model on top. However, these models are typically difficult to train, require numerous parameters, and are difficult to scale. We propose an entirely new subfamily of GAMs that utilizes basis decomposition of shape functions. A small number of basis functions are shared among all features, and are learned jointly for a given task, thus making our model scale much better to large-scale data with high-dimensional features, especially when features are sparse. We propose an architecture denoted as the Neural Basis Model (NBM) which uses a single neural network to learn these bases. On a variety of tabular and image datasets, we demonstrate that for interpretable machine learning, NBMs are the state-of-the-art in accuracy, model size, and throughput, and can easily model all higher-order feature interactions. Source code is available at https://github.com/facebookresearch/nbm-spam.

Authors (3)
  1. Filip Radenovic (20 papers)
  2. Abhimanyu Dubey (35 papers)
  3. Dhruv Mahajan (38 papers)
Citations (40)

Summary

  • The paper introduces NBMs, a novel scalable and interpretable model that reduces parameters and improves throughput in high-dimensional settings.
  • It employs basis function decomposition to share learned functions across features, ensuring stability and efficient training compared to traditional GAMs.
  • Empirical evaluations show NBMs match black-box accuracy while offering a 5×–50× reduction in parameters and 4×–7× faster throughput in practice.

An Evaluation of Neural Basis Models for Interpretability in Machine Learning

The paper "Neural Basis Models for Interpretability" addresses a significant challenge in machine learning: the interpretability of predictions from complex models such as black-box deep neural networks. Traditional methods to improve interpretability often suffer from instability and unfaithfulness. Generalized Additive Models (GAMs) have been proposed as an inherently interpretable alternative, but they face issues related to training complexity, substantial parameter requirements, and difficulties in scaling. This paper advances the field by introducing Neural Basis Models (NBMs), a novel subfamily of GAMs that utilize basis decomposition of shape functions. Through this approach, the research provides a scalable solution for interpretable machine learning while maintaining state-of-the-art accuracy, model size, and processing throughput.

Summary of Contributions

Introduction of NBMs: The core innovation presented in the paper is the development of NBMs, which transform existing GAMs by employing a small set of basis functions shared across all features. This basis decomposition allows NBMs to efficiently handle high-dimensional data, particularly when dealing with sparse features. The architecture relies on a single neural network that learns these basis functions concurrently for a given task, which is pivotal for achieving scalability without sacrificing interpretability.
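
To make the architecture concrete, here is a minimal sketch of the NBM idea, written against standard PyTorch. It is not the authors' implementation from facebookresearch/nbm-spam; the class name, layer sizes, and number of bases are illustrative assumptions. The key structure is a single shared basis network applied to each scalar feature, combined with per-feature coefficients.

```python
# Minimal sketch of the NBM idea (illustrative, not the released code):
# one shared MLP maps each scalar feature value to B basis outputs, and
# per-feature coefficients combine those bases into that feature's
# shape function. Hyperparameters are placeholders.
import torch
import torch.nn as nn


class NeuralBasisModel(nn.Module):
    def __init__(self, num_features: int, num_bases: int = 100,
                 hidden: int = 256, num_outputs: int = 1):
        super().__init__()
        # Shared basis network: scalar feature value -> B basis values.
        self.basis = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bases),
        )
        # Per-feature linear coefficients over the shared bases.
        self.coef = nn.Parameter(torch.empty(num_features, num_bases, num_outputs))
        self.bias = nn.Parameter(torch.zeros(num_outputs))
        nn.init.normal_(self.coef, std=0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); each feature is processed independently.
        b = self.basis(x.unsqueeze(-1))                      # (batch, D, B)
        # Shape function per feature: f_i(x_i) = <coef_i, basis(x_i)>.
        shape = torch.einsum("ndb,dbo->ndo", b, self.coef)   # (batch, D, out)
        return shape.sum(dim=1) + self.bias                  # additive output


# Usage: a forward pass on random tabular data.
model = NeuralBasisModel(num_features=20, num_bases=100)
print(model(torch.randn(8, 20)).shape)  # torch.Size([8, 1])
```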

Scalability: NBMs significantly reduce the number of parameters compared to traditional GAMs, especially in scenarios with a large number of features. This reduction is starkly evident in contrast with other neural-based GAMs such as Neural Additive Models (NAMs). In datasets with over ten features, NBMs achieve a parameter count reduction of between 5× and 50× compared to NAMs. Furthermore, NBMs provide 4× to 7× better throughput. For extremely large and sparse datasets, NBMs are the only interpretable models that effectively scale.
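
A back-of-the-envelope comparison illustrates why sharing bases shrinks the parameter count. The layer sizes below are chosen here for illustration (small per-feature MLPs for the NAM, one larger shared basis network for the NBM); the paper's exact configurations differ.

```python
# Illustrative parameter counts: a NAM trains one MLP per feature,
# while an NBM trains one shared MLP plus D x B coefficients.
def mlp_params(sizes):
    # Fully connected layers with biases.
    return sum(i * o + o for i, o in zip(sizes[:-1], sizes[1:]))

D, B = 2000, 100                            # features, shared bases
nam = D * mlp_params([1, 64, 64, 1])        # one small MLP per feature
nbm = mlp_params([1, 256, 256, B]) + D * B  # shared basis net + coefficients
print(f"NAM ~{nam:,} params, NBM ~{nbm:,} params, ratio ~{nam / nbm:.1f}x")
# NAM ~8,706,000 params, NBM ~292,004 params, ratio ~29.8x
```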

Integration of Higher-order Interactions: NBMs can incorporate pairwise feature interactions akin to GA²Ms, with only a linear increase in complexity. This contrasts with other models like EB²Ms and NA²Ms that suffer from quadratic growth in parameters, often necessitating feature selection heuristics.
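
A sketch of how such a pairwise extension can stay cheap, under the same illustrative conventions as the earlier code (this is not the released NB²M implementation): a second shared basis network takes a pair of feature values, so each feature pair contributes only B coefficients rather than its own MLP.

```python
# Pairwise-interaction sketch (illustrative): one shared two-input basis
# net, with B coefficients per feature pair instead of a full MLP per pair.
import itertools
import torch
import torch.nn as nn

D, B = 10, 50
pairs = list(itertools.combinations(range(D), 2))     # D*(D-1)/2 index pairs
pair_basis = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, B))
pair_coef = nn.Parameter(torch.zeros(len(pairs), B))  # B params per pair

x = torch.randn(4, D)
xp = torch.stack([x[:, [i, j]] for i, j in pairs], dim=1)   # (batch, P, 2)
interaction = torch.einsum("npb,pb->n", pair_basis(xp), pair_coef)
print(interaction.shape)  # torch.Size([4]); added to the additive output
```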

Empirical Evaluation: The paper extensively evaluates NBMs across various tasks including regression, binary classification, and multi-class classification on tabular, image, and sparse datasets. NBMs overall outperform existing GAM frameworks, providing significant computational benefits while matching black-box models on accuracy in many cases.

Interpretability and Stability: A key advantage of NBMs demonstrated in this paper is their stability. The basis functions shared among features contribute to this stability, providing consistent shape function outputs even with varying random initialization during training runs. This contrasts with NAMs where increased parameters can lead to more unstable outputs for features with low data density.
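
Because the model is additive, each feature's learned shape function can be read off directly. Reusing the NeuralBasisModel sketch from earlier (again illustrative, not the authors' tooling), a feature's curve is obtained by sweeping that feature alone through the shared bases and its coefficients:

```python
# Extracting one feature's shape function from the sketch model above.
import torch

def shape_function(model, feature_idx, grid):
    # grid: 1-D tensor of values at which to evaluate feature `feature_idx`.
    with torch.no_grad():
        b = model.basis(grid.unsqueeze(-1))        # (G, B) basis values
        return b @ model.coef[feature_idx, :, 0]   # (G,) shape function

grid = torch.linspace(-3, 3, steps=200)
curve = shape_function(model, feature_idx=0, grid=grid)
# Comparing such curves across training seeds is one way to gauge the
# stability the paper reports for NBMs versus NAMs.
```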

Implications and Future Directions

The implications of this research extend beyond theoretical contributions to practical applications. For high-risk domains like healthcare and finance, where model interpretability is critical, NBMs can replace or complement existing black-box models, enabling practitioners to understand and trust predictions. Furthermore, this approach opens up new avenues for scalable GAMs in scenarios where traditional methods falter, such as high-dimensional and sparse data environments.

The theoretical grounding using Reproducing Kernel Hilbert Spaces highlights the efficiency of NBMs, suggesting that as few as log D basis functions might suffice for a robust representation. This insight could guide future enhancements, ensuring models remain scalable while retaining accuracy.
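
For concreteness, the additive structure and basis decomposition behind this claim can be written as follows; the notation (D features, B shared bases, coefficients a_{i,k}) is chosen here rather than copied from the paper.

```latex
% Additive model over D features, with each shape function expressed
% in B shared learned bases h_1, ..., h_B (notation chosen here).
f(\mathbf{x}) = b_0 + \sum_{i=1}^{D} f_i(x_i),
\qquad
f_i(x_i) = \sum_{k=1}^{B} a_{i,k}\, h_k(x_i)
% The RKHS analysis referenced above suggests B on the order of \log D
% can suffice, which is what keeps the shared-basis family compact.
```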

Future research might explore synergy between NBMs and other interpretability techniques, especially those leveraging different machine learning paradigms for generating interpretable models. Moreover, exploring visual interpretability, potentially extending NBMs to pixel or feature spaces in computer vision, offers a promising avenue.

In conclusion, the paper makes a substantial contribution by reconciling the interpretability and scalability tensions in machine learning models, potentially catalyzing wider adoption of GAMs in large-scale, mission-critical applications.
