A Formal Framework to Characterize Interpretability of Procedures (1707.03886v1)
Published 12 Jul 2017 in cs.AI
Abstract: We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding. Our key insight is that interpretability is not an absolute concept, so we define it relative to a target model, which may or may not be a human. We define a framework that allows for comparing interpretable procedures by linking them to important practical aspects such as accuracy and robustness. We characterize many of the current state-of-the-art interpretable methods in our framework, demonstrating its general applicability.
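The relative notion described in the abstract can be illustrated concretely. The sketch below is a minimal illustration under assumed notation, not the paper's formal definition: it treats a procedure as δ-interpretable relative to a target model if the information it passes to that model reduces the model's error by at least a factor δ (the helpers `error` and `delta_interpretable` are hypothetical names for this sketch).

```python
# Illustrative sketch (assumed formulation, not the paper's exact definition):
# a procedure is treated as delta-interpretable relative to a target model if
# passing its information to the target reduces the target's error by a factor delta.

from typing import Callable, Sequence, Tuple

Dataset = Sequence[Tuple[Sequence[float], int]]

def error(model: Callable[[Sequence[float]], int], data: Dataset) -> float:
    """Fraction of examples the model misclassifies."""
    mistakes = sum(1 for x, y in data if model(x) != y)
    return mistakes / max(len(data), 1)

def delta_interpretable(
    target_model: Callable[[Sequence[float]], int],
    improved_target: Callable[[Sequence[float]], int],
    data: Dataset,
    delta: float,
) -> bool:
    """True if the target model, after incorporating the procedure's information
    (represented here as `improved_target`), reaches error <= delta * original error."""
    base_err = error(target_model, data)
    new_err = error(improved_target, data)
    return new_err <= delta * base_err

# Usage: compare the target model before and after it consumes the procedure's output.
if __name__ == "__main__":
    data = [([0.1], 0), ([0.9], 1), ([0.4], 0), ([0.6], 1)]
    before = lambda x: 0                    # naive target model
    after = lambda x: int(x[0] > 0.5)       # target model after using the procedure's information
    print(delta_interpretable(before, after, data, delta=0.5))
```

The same comparison could be instantiated with robustness instead of accuracy, which is how the framework links interpretability to practical performance measures.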
- Amit Dhurandhar
- Vijay Iyengar
- Ronny Luss
- Karthikeyan Shanmugam