Policy Distillation with Selective Input Gradient Regularization for Efficient Interpretability (2205.08685v1)

Published 18 May 2022 in cs.LG and cs.AI

Abstract: Although deep Reinforcement Learning (RL) has proven successful in a wide range of tasks, one challenge it faces when applied to real-world problems is interpretability. Saliency maps are frequently used to provide interpretability for deep neural networks. However, in the RL domain, existing saliency map approaches are either too computationally expensive to satisfy the real-time requirements of real-world scenarios or unable to produce interpretable saliency maps for RL policies. In this work, we propose Distillation with selective Input Gradient Regularization (DIGR), an approach that uses policy distillation and input gradient regularization to produce new policies that achieve both high interpretability and computational efficiency in generating saliency maps. Our approach is also found to improve the robustness of RL policies to multiple adversarial attacks. We conduct experiments on three tasks, MiniGrid (Fetch Object), Atari (Breakout) and CARLA Autonomous Driving, to demonstrate the importance and effectiveness of our approach.
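
To make the abstract's idea concrete, below is a minimal PyTorch-style sketch of what a distillation loss with selective input gradient regularization could look like. The function name `digr_distillation_loss`, the `saliency_mask` input, and the weighting term `lambda_reg` are illustrative assumptions; the paper's exact masking criterion and loss formulation may differ.

```python
import torch
import torch.nn.functional as F

def digr_distillation_loss(student, teacher, obs, saliency_mask, lambda_reg=1.0):
    """Hypothetical sketch of a DIGR-style loss: distill a teacher policy into a
    student while penalizing input gradients outside a chosen saliency mask
    (selective input gradient regularization)."""
    # Track gradients with respect to the input observations.
    obs = obs.clone().requires_grad_(True)

    # Policy distillation: match the student's action distribution to the teacher's.
    with torch.no_grad():
        teacher_logits = teacher(obs)
    student_logits = student(obs)
    distill_loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )

    # Selective input gradient regularization: compute gradients of the student's
    # output with respect to the input and penalize them on regions the mask marks
    # as irrelevant (here, saliency_mask == 0 on irrelevant input locations).
    grads = torch.autograd.grad(student_logits.sum(), obs, create_graph=True)[0]
    irrelevant = 1.0 - saliency_mask
    grad_penalty = (grads * irrelevant).pow(2).mean()

    return distill_loss + lambda_reg * grad_penalty
```

Under these assumptions, the student's input gradients are pushed toward zero on task-irrelevant regions, so simple gradient-based saliency maps of the distilled policy become both cheap to compute and concentrated on relevant inputs.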

Authors (5)
  1. Jinwei Xing
  2. Takashi Nagata
  3. Xinyun Zou
  4. Emre Neftci
  5. Jeffrey L. Krichmar
Citations (4)
