Pantypes: Diverse Representatives for Self-Explainable Models (2403.09383v1)

Published 14 Mar 2024 in stat.ML and cs.LG

Abstract: Prototypical self-explainable classifiers have emerged to meet the growing demand for interpretable AI systems. These classifiers are designed to incorporate high transparency in their decisions by basing inference on similarity with learned prototypical objects. While these models are designed with diversity in mind, the learned prototypes often do not sufficiently represent all aspects of the input distribution, particularly those in low-density regions. This lack of sufficient data representation, known as representation bias, has been associated with various detrimental properties related to machine learning diversity and fairness. In light of this, we introduce pantypes, a new family of prototypical objects designed to capture the full diversity of the input distribution through a sparse set of objects. We show that pantypes can empower prototypical self-explainable models by occupying divergent regions of the latent space, thus fostering high diversity, interpretability, and fairness.
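The inference scheme the abstract describes, classifying by similarity to learned prototypes in a latent space, can be sketched minimally as below. This is an illustrative toy, not the paper's architecture: the prototype matrix, labels, and the Gaussian similarity kernel are all assumptions made for the example, and the actual pantype learning objective is not shown.

```python
import numpy as np

# Illustrative prototype-based inference (NOT the paper's model).
# `prototypes`, `proto_labels`, and the exp(-d^2) kernel are assumptions.
rng = np.random.default_rng(0)
latent_dim = 8
n_classes = 3
protos_per_class = 2

# Pretend-learned prototypes: one row per prototype, each with a class label.
prototypes = rng.normal(size=(n_classes * protos_per_class, latent_dim))
proto_labels = np.repeat(np.arange(n_classes), protos_per_class)

def classify(z):
    """Score each class by its most similar prototype in latent space."""
    d2 = np.sum((prototypes - z) ** 2, axis=1)  # squared distance to each prototype
    sim = np.exp(-d2)                           # similarity in (0, 1]
    scores = np.array([sim[proto_labels == c].max() for c in range(n_classes)])
    return int(np.argmax(scores)), scores

# A query embedded right next to the first prototype gets that prototype's class,
# and the winning prototype itself serves as the human-readable explanation.
z_query = prototypes[0] + 0.01 * rng.normal(size=latent_dim)
pred, scores = classify(z_query)
```

The interpretability claim rests on this structure: the prediction is justified by pointing at the specific prototype the input most resembles, which is why prototypes that cluster in a few high-density regions (the representation bias the paper targets) degrade both coverage and explanation quality.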
