
How Interpretable and Trustworthy are GAMs? (2006.06466v2)

Published 11 Jun 2020 in cs.LG and stat.ML

Abstract: Generalized additive models (GAMs) have become a leading model class for interpretable machine learning. However, there are many algorithms for training GAMs, and these can learn different or even contradictory models, while being equally accurate. Which GAM should we trust? In this paper, we quantitatively and qualitatively investigate a variety of GAM algorithms on real and simulated datasets. We find that GAMs with high feature sparsity (only using a few variables to make predictions) can miss patterns in the data and be unfair to rare subpopulations. Our results suggest that inductive bias plays a crucial role in what interpretable models learn and that tree-based GAMs represent the best balance of sparsity, fidelity and accuracy and thus appear to be the most trustworthy GAM.
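A GAM makes predictions by adding up one-dimensional shape functions, one per feature, which is what makes each feature's effect directly plottable and interpretable. A minimal sketch in pure Python (the shape functions and feature names here are hypothetical, chosen only to illustrate the additive structure, not taken from the paper):

```python
import math

# A GAM models the (link-transformed) prediction as an intercept plus a
# sum of one-dimensional shape functions, one per feature:
#   g(E[y]) = beta0 + f1(x1) + f2(x2) + ...
# The shape functions below are hypothetical, for illustration only.

def shape_age(age):
    # Linear effect centered at age 50.
    return 0.02 * (age - 50)

def shape_bmi(bmi):
    # Hinge effect: contribution rises only past BMI 30.
    return 0.1 * max(bmi - 30.0, 0.0)

def gam_logit(age, bmi, intercept=-1.0):
    # Additivity: the total score is just the sum of per-feature terms,
    # so each term can be inspected (or plotted) in isolation.
    return intercept + shape_age(age) + shape_bmi(bmi)

def gam_prob(age, bmi):
    # Logistic link maps the additive score to a probability.
    return 1.0 / (1.0 + math.exp(-gam_logit(age, bmi)))
```

Different training algorithms (splines, boosted trees, etc.) produce different shape functions for the same data; the paper's comparison is about which of these learned shapes can be trusted.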

Authors (5)
  1. Chun-Hao Chang (14 papers)
  2. Sarah Tan (21 papers)
  3. Ben Lengerich (7 papers)
  4. Anna Goldenberg (41 papers)
  5. Rich Caruana (42 papers)
Citations (70)
