Exploring Conversational Agents as an Effective Tool for Measuring Cognitive Biases in Decision-Making (2401.06686v1)

Published 8 Jan 2024 in cs.HC and cs.AI

Abstract: Heuristics and cognitive biases are an integral part of human decision-making. Automatically detecting a particular cognitive bias could enable intelligent tools to provide better decision support. Detecting the presence of a cognitive bias currently requires a hand-crafted experiment and human interpretation. Our research aims to explore conversational agents as an effective tool to measure various cognitive biases in different domains. Our proposed conversational agent incorporates a bias measurement mechanism informed by existing experimental designs and the various experimental tasks identified in the literature. Our initial experiments on framing and loss-aversion biases indicate that conversational agents can be used effectively to measure these biases.
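The paper does not publish its agent's code; the following is a minimal illustrative sketch of how a conversational agent could measure framing bias using the classic gain/loss "disease problem" task from the experimental literature. All prompts, function names, and response data here are hypothetical, not the authors' implementation; a framing effect shows up as a shift toward the risky option when the same choice is framed as a loss.

```python
# Illustrative sketch only (hypothetical, not the paper's implementation):
# a conversational agent presents two logically equivalent framings of the
# same choice and estimates framing bias from the shift in risky choices.

GAIN_FRAME = (
    "Program A saves 200 of 600 people for certain.\n"
    "Program B saves all 600 with probability 1/3, and none with 2/3.\n"
    "Which do you choose, A or B?"
)
LOSS_FRAME = (
    "Under Program A, 400 of 600 people die for certain.\n"
    "Under Program B, nobody dies with probability 1/3, all die with 2/3.\n"
    "Which do you choose, A or B?"
)

def framing_effect(gain_choices, loss_choices):
    """Estimate framing bias as the shift in the risky option's (B) share
    between the gain-framed and loss-framed versions of the task."""
    risky_gain = gain_choices.count("B") / len(gain_choices)
    risky_loss = loss_choices.count("B") / len(loss_choices)
    # Positive values indicate risk-seeking under the loss framing,
    # the classic framing-bias signature.
    return risky_loss - risky_gain

# Hypothetical responses collected by the agent from two participant groups:
print(framing_effect(["A", "A", "B", "A"], ["B", "B", "A", "B"]))  # 0.5
```

In a full agent, the two framings would be assigned to separate participant groups (a between-subjects design) so that seeing one version does not contaminate responses to the other.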

Authors (1)
  1. Stephen Pilli (2 papers)