
How Human-Centered Explainable AI Interfaces Are Designed and Evaluated: A Systematic Survey (2403.14496v1)

Published 21 Mar 2024 in cs.HC and cs.AI

Abstract: Despite its technological breakthroughs, eXplainable Artificial Intelligence (XAI) research has had limited success in producing the "effective explanations" needed by users. In order to improve the usability, practical interpretability, and efficacy of XAI systems for real users, the emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI. This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development. It is among the first systematic surveys of EI research.
