
Contestable AI needs Computational Argumentation (2405.10729v2)

Published 17 May 2024 in cs.AI

Abstract: AI has become pervasive in recent years, but state-of-the-art approaches predominantly neglect the need for AI systems to be contestable. Yet contestability is advocated by AI guidelines (e.g. by the OECD) and regulation of automated decision-making (e.g. GDPR). In this position paper we explore how contestability can be achieved computationally in and for AI. We argue that contestable AI requires dynamic (human-machine and/or machine-machine) explainability and decision-making processes, whereby machines can (i) interact with humans and/or other machines to progressively explain their outputs and/or their reasoning, as well as assess grounds for contestation provided by these humans and/or other machines, and (ii) revise their decision-making processes to redress any issues successfully raised during contestation. Given that much of the current AI landscape is tailored to static AIs, the need to accommodate contestability will require a radical rethinking that, we argue, computational argumentation is ideally suited to support.
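The contestation loop the abstract describes (a machine defends an output, a human raises grounds for contestation, the machine either revises its decision or rebuts) can be made concrete with abstract argumentation. The sketch below is illustrative only, not from the paper: it computes the grounded extension of a Dung-style argumentation framework and shows how adding a contesting argument changes which conclusions are accepted. All argument names (`D`, `C`, `R`) are hypothetical.

```python
def grounded_extension(arguments, attacks):
    """Grounded semantics as a least fixed point: an argument is accepted
    iff every one of its attackers is attacked by an already-accepted
    argument (vacuously true for unattacked arguments)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a not in accepted and all(
                any((d, b) in attacks for d in accepted) for b in attackers[a]
            ):
                accepted.add(a)
                changed = True
    return accepted

# Initially the system's decision D stands unchallenged.
args = {"D"}
atts = set()
print(grounded_extension(args, atts))  # {'D'}

# A human contests D with counter-argument C: D is no longer accepted.
print(grounded_extension({"D", "C"}, {("C", "D")}))  # {'C'}

# The system rebuts C with R, reinstating D; had it found no rebuttal,
# it would instead revise its decision, as the paper's point (ii) requires.
print(grounded_extension({"D", "C", "R"}, {("C", "D"), ("R", "C")}))
# {'D', 'R'}
```

The design choice here, grounded (sceptical) semantics, is one of several studied in the computational argumentation literature; the paper argues that this family of formalisms, not this particular toy, is what contestable AI needs.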

