
MoËT: Mixture of Expert Trees and its Application to Verifiable Reinforcement Learning (1906.06717v4)

Published 16 Jun 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Rapid advancements in deep learning have led to many recent breakthroughs. While deep learning models achieve superior performance, often statistically better than humans, their adoption into safety-critical settings, such as healthcare or self-driving cars, is hindered by their inability to provide safety guarantees or to expose the inner workings of the model in a human-understandable form. We present MoËT, a novel model based on Mixture of Experts, consisting of decision tree experts and a generalized linear model gating function. Thanks to such a gating function, the model is more expressive than the standard decision tree. To support non-differentiable decision trees as experts, we formulate a novel training procedure. In addition, we introduce a hard-thresholding version, MoËTh, in which predictions are made solely by a single expert chosen via the gating function. Thanks to that property, MoËTh allows each prediction to be decomposed into a set of logical rules in a form that can be easily verified. While MoËT is a general-use model, we illustrate its power in the reinforcement learning setting. By training MoËT models using an imitation learning procedure on deep RL agents, we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models. Moreover, we show that MoËT can also be used in real-world supervised problems, on which it outperforms other verifiable machine learning models.
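The abstract only sketches the architecture, so the following is a minimal illustrative sketch (not the authors' implementation) of how prediction could work in a gated mixture of tree experts: a linear softmax gate weights fitted decision trees, and a hard-routing flag mimics the MoËTh behavior of selecting a single expert per input. The class name `MoETSketch` and the parameters `W`, `b`, and `hard` are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor  # experts are fitted elsewhere


def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


class MoETSketch:
    """Illustrative mixture of decision-tree experts with a linear softmax gate."""

    def __init__(self, experts, W, b, hard=False):
        self.experts = experts  # list of fitted DecisionTreeRegressor instances
        self.W, self.b = W, b   # gate parameters: shapes (k, d) and (k,)
        self.hard = hard        # True -> hard thresholding: one expert per input

    def predict(self, X):
        gate = softmax(X @ self.W.T + self.b)                      # (n, k) weights
        expert_preds = np.stack([e.predict(X) for e in self.experts], axis=1)
        if self.hard:
            # A single expert handles each input, so every prediction can be
            # explained by that tree's root-to-leaf decision rules plus the
            # linear condition that selected the expert.
            return expert_preds[np.arange(len(X)), gate.argmax(axis=1)]
        return (gate * expert_preds).sum(axis=1)                   # soft mixture
```

In the hard-routing case, verifiability comes from the fact that each output is fully determined by one tree's decision path together with the gate's linear inequalities, which can be exported as logical rules; the soft case trades that property for a smoother, more expressive predictor.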

Citations (21)
