Does Explainable AI Have Moral Value? (2311.14687v1)
Abstract: Explainable AI (XAI) aims to bridge the gap between complex algorithmic systems and human stakeholders. Current discourse often examines XAI in isolation, treating it as either a technological tool, a user interface, or a policy mechanism. This paper proposes a unifying ethical framework grounded in moral duties and the concept of reciprocity. We argue that XAI should be appreciated not merely as a right but as part of our moral duties, one that helps sustain a reciprocal relationship between the humans who deploy AI systems and those affected by them. This is because, we argue, explanations help sustain constitutive symmetry and agency in AI-led decision-making processes. We then assess leading XAI communities and reveal gaps between the ideal of reciprocity and practical feasibility. Machine learning offers useful techniques but overlooks evaluation and adoption challenges. Human-computer interaction provides preliminary insights but oversimplifies organizational contexts. Policies espouse accountability but lack technical nuance. Synthesizing these views exposes barriers to implementable, ethical XAI. Still, positioning XAI as a moral duty transcends rights-based discourse and captures a more robust and complete moral picture. This paper provides an accessible, detailed analysis elucidating the moral value of explainability.
Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. 
In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. 
May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). 
Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). 
Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. 
(2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). 
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. 
M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. 
Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). 
A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. 
and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. 
Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. 
Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. 
Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" 
explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. 
Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. 
M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lipton, Z. C. (2018). The mythos of model interpretability. Commun. ACM, 61(10):36–43. Lundberg et al., (2020) Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). 
Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. 
Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). 
A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. 
Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). 
Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. 
Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. 
M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. 
Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. 
L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. 
Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). 
A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. 
- Korsgaard, C. M. (1996a). Creating the kingdom of ends. Cambridge University Press.
- Korsgaard, C. M. (1996b). The Sources of Normativity. Cambridge University Press.
- Korsgaard, C. M. (2018). Fellow creatures: Our obligations to the other animals. Oxford University Press.
- Liao, Q. V., Gruen, D. M., and Miller, S. (2020). Questioning the AI: Informing design practices for explainable AI user experiences. In Bernhaupt, R., Mueller, F. F., Verweij, D., Andres, J., McGrenere, J., Cockburn, A., Avellino, I., Goguey, A., Bjørn, P., Zhao, S., Samson, B. P., and Kocielnik, R., editors, CHI ’20: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 25-30, 2020, pages 1–15. ACM.
- Liao, Q. V., Zhang, Y., Luss, R., Doshi-Velez, F., and Dhurandhar, A. (2022). Connecting algorithmic research and usage contexts: A perspective of contextualized evaluation for explainable AI. In Hsu, J. and Yin, M., editors, Proceedings of the Tenth AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2022, virtual, November 6-10, 2022, pages 147–159. AAAI Press.
- Lipton, Z. C. (2018). The mythos of model interpretability. Commun. ACM, 61(10):36–43.
- Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1):56–67.
- Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc.
- Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265.
- May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532.
- Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21.
- Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the EU, US, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM.
- Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the European Central Bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer.
- Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348.
- NIST - USA (2021). Artificial intelligence: AI fundamental research - explainability.
- Parliament, E. and Council (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.
- Parliament, E. and Council (2022). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts - general approach.
- Parliament, E. and Council (2023). Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)).
- Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. NISTIR 8312, National Institute of Standards and Technology.
- Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
- Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514.
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215.
- Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "Everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM.
- Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: Understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2.
- Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284.
- Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38.
- Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99.
- Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399.
- Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM.
- Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM.
- Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge.
- Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York.
- Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). 
Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Liao, Q. V., Zhang, Y., Luss, R., Doshi-Velez, F., and Dhurandhar, A. (2022). Connecting algorithmic research and usage contexts: A perspective of contextualized evaluation for explainable AI. In Hsu, J. and Yin, M., editors, Proceedings of the Tenth AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2022, virtual, November 6-10, 2022, pages 147–159. AAAI Press. Lipton, (2018) Lipton, Z. C. (2018). The mythos of model interpretability. Commun. ACM, 61(10):36–43. Lundberg et al., (2020) Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. 
In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). 
Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lipton, Z. C. (2018). The mythos of model interpretability. Commun. ACM, 61(10):36–43. Lundberg et al., (2020) Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). 
A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. 
Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). 
Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. 
P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. 
In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). 
Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. 
Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. 
and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). 
The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. 
(2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). 
Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). 
Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Liao, Q. V., Zhang, Y., Luss, R., Doshi-Velez, F., and Dhurandhar, A. (2022). Connecting algorithmic research and usage contexts: A perspective of contextualized evaluation for explainable AI. In Hsu, J. and Yin, M., editors, Proceedings of the Tenth AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2022, virtual, November 6-10, 2022, pages 147–159. AAAI Press. Lipton, (2018) Lipton, Z. C. (2018). The mythos of model interpretability. Commun. ACM, 61(10):36–43. Lundberg et al., (2020) Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). 
A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. 
Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lipton, Z. C. (2018). The mythos of model interpretability. Commun. ACM, 61(10):36–43. Lundberg et al., (2020) Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). 
Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. 
P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. 
J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. 
V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. 
In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). 
A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. 
Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" 
explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. 
Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). 
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. 
Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. 
P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lipton, Z. C. (2018). The mythos of model interpretability. Commun. ACM, 61(10):36–43. Lundberg et al., (2020) Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). 
Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. 
Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. 
In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. 
May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). 
Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). 
Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. 
(2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). 
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. 
M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. 
Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). 
A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. 
and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. 
Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. 
In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). 
Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). 
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). 
Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. 
Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. 
Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. 
Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. 
P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. 
M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). 
Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. 
Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. 
(61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). 
Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. 
Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. 
Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. 
Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. 
K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). 
Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. 
L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. 
S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. 
(2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. 
M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. 
Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. 
IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. 
CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. 
(2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. 
(60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. 
Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). 
Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
- Lipton, Z. C. (2018). The mythos of model interpretability. Commun. ACM, 61(10):36–43. Lundberg et al., (2020) Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. 
Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). 
Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. 
Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. 
Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). 
Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. 
Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. 
IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). 
Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). 
Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. 
Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. 
Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. 
K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). 
Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. 
L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. 
S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. 
- From local explanations to global understanding with explainable ai for trees. Nature Machine Intelligence, 2(1):2522–5839. Lundberg and Lee, (2017) Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). 
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc. Malgieri and Comandé, (2017) Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). 
Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. 
Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Malgieri, G. and Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265. May, (2015) May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). 
Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. 
P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532. Moor, (2006) Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. 
(2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). 
Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. 
L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). 
Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21. Nannini et al., (2023) Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. 
In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). 
Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the eu, us, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM. Navarro et al., (2021) Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. 
Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. 
In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. 
(61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. 
In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. 
T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). 
Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. 
Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). 
The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. 
Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. 
Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. 
(2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. 
L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). 
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. 
Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. 
In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
- Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7(4):243–265.
- May, S. C. (2015). Directed duties. Philosophy Compass, 10(8):523–532.
- Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21.
- Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the EU, US, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM.
- Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the European Central Bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer.
- Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348.
- NIST - USA (2021). Artificial intelligence: AI fundamental research - explainability.
- Parliament, E. and Council (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.
- Parliament, E. and Council (2022). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts - general approach.
- Parliament, E. and Council (2023). Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)).
- Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence.
- Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
- Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514.
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215.
- Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "Everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM.
- Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2.
- Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38.
- Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99.
- Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399.
- Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM.
- Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM.
- Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge.
- Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York.
- Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. 
Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. 
Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. 
ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). 
Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. 
In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. 
Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. 
P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). 
The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Weil, S. 
(1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
- Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21.
- Nannini, L., Balayn, A., and Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the EU, US, and UK. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, pages 1198–1212. ACM.
- Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the European Central Bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer.
- Nazar, M., Alam, M. M., Yafi, E., and Su'ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348.
- NIST (2021). Artificial intelligence: AI fundamental research - explainability. National Institute of Standards and Technology, USA.
- Parliament, E. and Council (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.
- Parliament, E. and Council (2022). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts - general approach.
- Parliament, E. and Council (2023). Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)).
- Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. NISTIR 8312, National Institute of Standards and Technology.
- Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
- Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514.
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215.
- Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "Everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI '21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM.
- Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: Understanding the impact of different explanation methods on non-expert users' perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2.
- Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284.
- Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38.
- Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99.
- Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399.
- Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM.
- Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O'Donovan, J., and Teale, P., editors, IUI '21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM.
- Watson, L. (2021). The Right to Know: Epistemic Rights and Why We Need Them. Routledge.
- Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York.
- Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. 
Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Navarro, C. M., Kanellos, G., and Gottron, T. (2021). Desiderata for explainable AI in statistical production systems of the european central bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer. Nazar et al., (2021) Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. 
ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348. - USA, 2021 (NIST)(NIST) - USA (2021). Artificial intelligence: Ai fundamental research - explainability. Parliament and Council, (2021) Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. 
Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2021). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Parliament and Council, (2022) Parliament, E. and Council (2022). Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. 
Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2022). 
Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. 
P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. 
International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). 
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? 
A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
- Desiderata for explainable AI in statistical production systems of the European Central Bank. In PKDD/ECML Workshops (1), volume 1524 of Communications in Computer and Information Science, pages 575–590. Springer.
- Nazar, M., Alam, M. M., Yafi, E., and Su’ud, M. M. (2021). A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 9:153316–153348.
- NIST, USA (2021). Artificial intelligence: AI fundamental research - explainability.
- Parliament, E. and Council (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.
- Parliament, E. and Council (2022). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts - general approach.
- Parliament, E. and Council (2023). Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)).
- Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence.
- Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
- Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514.
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215.
- Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "Everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM.
- Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2.
- Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284.
- Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38.
- Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99.
- Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399.
- Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM.
- Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM.
- Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge.
- Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York.
- Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. 
Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). 
Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? 
A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). 
Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. 
A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
- Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts - general approach. Parliament and Council, (2023) Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. 
P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Parliament, E. and Council (2023). Amendments adopted by the european parliament on 14 june 2023 on the proposal for a regulation of the european parliament and of the council on laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (com(2021)0206 – c9-0146/2021 – 2021/0106(cod))1. Phillips et al., (2021) Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. 
(2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence. Ribeiro et al., (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. 
P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. 
Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. 
M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). 
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. 
Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284.
- Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38.
- Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99.
- Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399.
- Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM.
- Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM.
- Watson, L. (2021). The Right to Know: Epistemic Rights and Why We Need Them. Routledge.
- Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York.
- Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
- Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)).
- Phillips, P. J., Hahn, C., Fontana, P., Yates, A., Greene, K. K., Broniatowski, D., and Przybocki, M. A. (2021). Four principles of explainable artificial intelligence.
- " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Robbins, (2019) Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). 
"everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. 
Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. 
L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). 
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. 
Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. 
In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
- Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds Mach., 29(4):495–514. Rudin, (2019) Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. 
Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). 
Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. 
S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. 
(61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. 
Philosophy of Science, 89(1):1–19. Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215. Sambasivan et al., (2021) Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. K., and Aroyo, L. (2021). "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). 
Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. 
P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. 
In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. 
L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
- "everyone wants to do the model work, not the data work": Data cascades in high-stakes AI. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S. M., editors, CHI ’21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, pages 39:1–39:15. ACM. Shulner-Tal et al., (2022) Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2. Unceta et al., (2020) Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. 
Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284. Vasconcelos et al., (2023) Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38. (60) Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. 
CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99. (61) Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399. Wang et al., (2019) Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). 
The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM. Wang and Yin, (2021) Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM. Watson, (2021) Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge. Weil, (1952) Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York. Zerilli, (2022) Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19. Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.
- Shulner-Tal, A., Kuflik, T., and Kliger, D. (2022). Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol., 24(1):2.
- Unceta, I., Nin, J., and Pujol, O. (2020). Copying machine learning classifiers. IEEE Access, 8:160268–160284.
- Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., and Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum. Comput. Interact., 7(CSCW1):1–38.
- Wachter, S., Mittelstadt, B., and Floridi, L. (2017a). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99.
- Wachter, S., Mittelstadt, B. D., and Russell, C. (2017b). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. CoRR, abs/1711.00399.
- Wang, D., Yang, Q., Abdul, A. M., and Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Brewster, S. A., Fitzpatrick, G., Cox, A. L., and Kostakos, V., editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM.
- Wang, X. and Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in ai-assisted decision-making. In Hammond, T., Verbert, K., Parra, D., Knijnenburg, B. P., O’Donovan, J., and Teale, P., editors, IUI ’21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021, pages 318–328. ACM.
- Watson, L. (2021). The right to know: Epistemic rights and why we need them. Routledge.
- Weil, S. (1952). The Need for Roots: Prelude to a Declaration of Duties Towards Mankind. Routledge, New York.
- Zerilli, J. (2022). Explaining machine learning decisions. Philosophy of Science, 89(1):1–19.