Determinants of LLM-assisted Decision-Making (2402.17385v1)

Published 27 Feb 2024 in cs.AI and cs.HC

Abstract: Decision-making is a fundamental capability in everyday life. LLMs provide multifaceted support in enhancing human decision-making processes. However, understanding the factors that influence LLM-assisted decision-making is crucial for enabling individuals to leverage the advantages LLMs provide and to minimize the associated risks, so as to make more informed and better decisions. This study presents the results of a comprehensive literature analysis, providing a structured overview and detailed analysis of the determinants impacting decision-making with LLM support. In particular, we explore the effects of technological aspects of LLMs, including transparency and prompt engineering; psychological factors, such as emotions and decision-making styles; and decision-specific determinants, such as task difficulty and accountability. In addition, the impact of these determinants on the decision-making process is illustrated via multiple application scenarios. Drawing from our analysis, we develop a dependency framework that systematizes the possible interactions among these determinants in terms of reciprocal interdependencies. Our research reveals that, owing to their multifaceted interactions with other determinants, factors such as trust in or reliance on LLMs, the user's mental model, and the characteristics of information processing significantly influence LLM-assisted decision-making processes. Our findings are crucial for improving decision quality in human-AI collaboration, for empowering both users and organizations, and for designing more effective LLM interfaces. Additionally, our work provides a foundation for future empirical investigations into the determinants of decision-making assisted by LLMs.
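
To make the notion of a dependency framework concrete, the sketch below models determinants as nodes in a directed graph, grouped by the paper's three categories (technological, psychological, and decision-specific), with edges for influence relations and a query for reciprocal interdependencies. This is a minimal illustration, not the paper's actual framework: the determinant names, categories, and edges shown are assumptions drawn from the abstract.

```python
from dataclasses import dataclass, field


@dataclass
class DependencyFramework:
    # Hypothetical representation of the paper's dependency framework:
    # determinants are nodes tagged with a category, influence relations
    # are directed edges, and reciprocal interdependencies are pairs of
    # determinants connected by edges in both directions.
    categories: dict = field(default_factory=dict)  # determinant -> category
    edges: set = field(default_factory=set)         # (source, target) pairs

    def add_determinant(self, name: str, category: str) -> None:
        self.categories[name] = category

    def add_dependency(self, source: str, target: str) -> None:
        if source not in self.categories or target not in self.categories:
            raise KeyError("both determinants must be registered first")
        self.edges.add((source, target))

    def reciprocal_pairs(self) -> set:
        # A pair is reciprocal exactly when both directed edges are present.
        return {frozenset((a, b)) for (a, b) in self.edges if (b, a) in self.edges}


# Illustrative determinants and edges only (assumed, not from the paper).
framework = DependencyFramework()
framework.add_determinant("transparency", "technological")
framework.add_determinant("trust", "psychological")
framework.add_determinant("task difficulty", "decision-specific")
framework.add_dependency("transparency", "trust")  # transparency shapes trust
framework.add_dependency("trust", "transparency")  # trust shapes demand for transparency
framework.add_dependency("task difficulty", "trust")

print(framework.reciprocal_pairs())  # one reciprocal pair: transparency <-> trust
```

Representing the framework as a plain edge set keeps the reciprocity query trivial, which matches the abstract's emphasis on reciprocal interdependencies rather than one-directional effects.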

Authors (2)
  1. Eva Eigner (1 paper)
  2. Thorsten Händler (2 papers)
Citations (26)