
Explainable agency: human preferences for simple or complex explanations (2403.12321v1)

Published 18 Mar 2024 in cs.HC

Abstract: Research in cognitive psychology has established that whether people prefer simpler explanations to complex ones is context-dependent, but the question of 'simple vs. complex' becomes critical when an artificial agent seeks to explain its decisions or predictions to humans. We present a model for abstracting causal reasoning chains for the purpose of explanation. This model uses a set of rules to progressively abstract different types of causal information in causal proof traces. We conducted online studies with 123 Amazon MTurk participants and five industry experts across two domains: maritime patrol and weather prediction. We found that participants' satisfaction with generated explanations depended on the consistency of relationships among the causes explaining an event (coherence), and that the important question is not whether people prefer simple or complex explanations, but what types of causal information are relevant to individuals in specific contexts.
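The paper does not include an implementation, but the core idea of rules that progressively abstract a causal proof trace can be sketched. Below is a minimal, hypothetical Python illustration: the Link record, the kind labels, and the collapse_kind rule are invented here for illustration and are not the authors' actual rule set.

```python
# Hypothetical sketch: rule-based abstraction over a causal proof trace,
# modeled as a chain of typed cause->effect links. All names and labels
# here are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    cause: str
    effect: str
    kind: str  # e.g. "mechanistic", "definitional", "observational"

def collapse_kind(trace: list[Link], kind: str) -> list[Link]:
    """Abstraction rule: splice out links of the given kind by connecting
    each removed link's cause directly to the effect of the link that
    follows it, preserving the end-to-end causal story."""
    out: list[Link] = []
    i = 0
    while i < len(trace):
        link = trace[i]
        if link.kind == kind and i + 1 < len(trace):
            nxt = trace[i + 1]
            out.append(Link(link.cause, nxt.effect, nxt.kind))
            i += 2
        else:
            out.append(link)
            i += 1
    return out

trace = [
    Link("vessel off course", "entered exclusion zone", "observational"),
    Link("entered exclusion zone", "zone breach event", "definitional"),
    Link("zone breach event", "patrol alert raised", "mechanistic"),
]

# Progressively abstract: remove definitional detail first, so the
# explanation shortens while the causal chain stays connected.
for link in collapse_kind(trace, "definitional"):
    print(f"{link.cause} -> {link.effect} [{link.kind}]")
```

Running the sketch collapses the definitional step, shortening the three-link trace to two links while keeping the chain connected; applying further rules (e.g., collapsing observational links next) would continue the progressive abstraction the abstract describes.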
