Symbolic Imitation Learning: From Black-Box to Explainable Driving Policies (2309.16025v1)
Abstract: Current imitation learning (IL) methods, primarily based on deep neural networks, offer an efficient means of obtaining driving policies from real-world data but suffer from significant limitations in interpretability and generalizability. These shortcomings are particularly concerning in safety-critical applications such as autonomous driving. In this paper, we address these limitations by introducing Symbolic Imitation Learning (SIL), a method that employs Inductive Logic Programming (ILP) to learn transparent, explainable, and generalizable driving policies from available datasets. Using the real-world highD dataset, we subject our method to a rigorous comparative analysis against prevailing neural-network-based IL methods. Our results demonstrate that SIL not only enhances the interpretability of driving policies but also significantly improves their applicability across varied driving situations. This work therefore offers a novel pathway to more reliable and safer autonomous driving systems, underscoring the potential of integrating ILP into the domain of IL.
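To make the contrast with black-box policies concrete, the sketch below illustrates in Python the kind of transparent rule set an ILP learner could output for a highway lane-change decision. The predicate names (`slow_leader`, `left_lane_free`), thresholds, and feature schema are illustrative assumptions for this sketch, not the rules or features learned in the paper.

```python
# Illustrative sketch only: hypothetical predicates and thresholds,
# not the rules or feature schema reported in the paper.
from dataclasses import dataclass


@dataclass
class Scene:
    """Symbolic description of one highway frame (units: m, m/s)."""
    gap_front: float        # distance to the preceding vehicle in the ego lane
    rel_speed_front: float  # preceding-vehicle speed minus ego speed
    left_lane_free: bool    # no vehicle within a safety window in the left lane


def slow_leader(s: Scene) -> bool:
    # Body atom of a Horn-clause-style rule: the vehicle ahead is close and slower.
    return s.gap_front < 30.0 and s.rel_speed_front < -2.0


def change_left(s: Scene) -> bool:
    # Rule head: change_left(S) :- slow_leader(S), left_lane_free(S).
    # Every decision can be traced to the atoms that fired.
    return slow_leader(s) and s.left_lane_free


if __name__ == "__main__":
    demo = Scene(gap_front=22.0, rel_speed_front=-4.5, left_lane_free=True)
    print("change_left:", change_left(demo))  # -> change_left: True
```

Because such a policy is a conjunction of named predicates rather than learned weights, a disputed decision can be audited by checking which atom held or failed, which is the interpretability property the abstract emphasizes.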