
Posthoc Interpretability of Learning to Rank Models using Secondary Training Data (1806.11330v1)

Published 29 Jun 2018 in cs.IR

Abstract: Predictive models are omnipresent in automated and assisted decision-making scenarios, but they are mostly used as black boxes that output a prediction without exposing, partially or completely, how different features influence it, which undermines algorithmic transparency. Rankings are orderings over items that encode implicit comparisons and are typically learned over a family of features using learning-to-rank models. In this paper we focus on how best to understand the decisions made by a ranker in a post-hoc, model-agnostic manner. We operate on a notion of interpretability based on the explainability of rankings over an interpretable feature space. Specifically, we train an inherently interpretable tree-based model on labels produced by the ranker, called secondary training data, to provide explanations. We then study how well a subset of features, potentially interpretable, explains the full model under different training sizes and algorithms. We run experiments on learning-to-rank datasets with 30k queries and report results showing that in certain settings we can learn a faithful interpretable ranker.
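The secondary-training-data idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a gradient-boosted regressor stands in for the black-box learning-to-rank model, the dataset is synthetic, the "interpretable" feature subset is chosen arbitrarily for the example, and Kendall's tau serves as a simple rank-fidelity measure.

```python
# Sketch: (1) train a black-box ranker, (2) relabel the data with its
# scores (the "secondary training data"), (3) fit an interpretable tree
# on a feature subset, (4) measure how faithfully the tree reproduces
# the black box's ranking.
import numpy as np
from scipy.stats import kendalltau
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                  # 10 ranking features (synthetic)
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)  # relevance

# Stand-in for the black-box learning-to-rank model.
blackbox = GradientBoostingRegressor(random_state=0).fit(X, y)
secondary_labels = blackbox.predict(X)          # scores from the full model

# Hypothetical interpretable feature subset.
interpretable = [0, 1, 2]
tree = DecisionTreeRegressor(max_depth=4, random_state=0)
tree.fit(X[:, interpretable], secondary_labels)

# Fidelity: rank agreement between the tree and the black box.
tau, _ = kendalltau(tree.predict(X[:, interpretable]), secondary_labels)
print(f"Kendall tau fidelity: {tau:.2f}")
```

Varying `max_depth`, the size of the feature subset, and the amount of secondary training data corresponds to the trade-offs the paper studies between interpretability and faithfulness to the full ranker.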

Citations (44)
