Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain (2402.06673v1)

Published 7 Feb 2024 in cs.AI

Abstract: The intersection of AI and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes. This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches, and delves into their applications in diverse domains, including healthcare and finance. The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed. The paper further investigates the potential convergence of XAI with cognitive sciences, the development of emotionally intelligent AI, and the quest for Human-Like Intelligence (HLI) in AI systems. As AI progresses towards artificial general intelligence (AGI), considerations of consciousness, ethics, and societal impact become paramount. The ongoing pursuit of deciphering the mysteries of the brain with AI and the quest for HLI represent transformative endeavors, bridging technical advancements with multidisciplinary explorations of human cognition.
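
The abstract's distinction between "feature-based" and "human-centric" XAI is easier to see with a concrete feature-based method. Below is a minimal sketch of occlusion-style attribution, one of the simplest feature-based techniques: each input feature is scored by how much the model's output drops when that feature is removed. This is illustrative only, not the paper's own method; `predict` and `occlusion_attribution` are hypothetical names, and the linear scorer inside `predict` is an assumption made to keep the example self-contained.

```python
import numpy as np

def predict(x: np.ndarray) -> float:
    """Stand-in model: a fixed linear scorer. Any black-box callable
    (e.g., a trained classifier's scoring function) works here."""
    weights = np.array([0.8, -0.5, 0.1, 0.6])  # illustrative weights
    return float(weights @ x)

def occlusion_attribution(x: np.ndarray) -> np.ndarray:
    """Score each feature by the change in model output when that
    feature is occluded (zeroed out), holding the rest fixed."""
    base = predict(x)
    scores = np.zeros_like(x)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = 0.0               # remove one feature at a time
        scores[i] = base - predict(perturbed)
    return scores

x = np.array([1.0, 2.0, 3.0, 4.0])
print(occlusion_attribution(x))          # [ 0.8 -1.   0.3  2.4]
```

For a linear model the occlusion score recovers each weight-times-input product exactly; for deep networks, gradient-based relatives such as saliency maps and Grad-CAM play the same role at scale. Human-centric approaches, by contrast, shape such raw attributions into explanations matched to a user's goals and cognitive load.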
