Generative AI Triggers Welfare-Reducing Decisions in Humans (2401.12773v1)

Published 23 Jan 2024 in econ.GN and q-fin.EC

Abstract: Generative AI is poised to reshape the way individuals communicate and interact. While this form of AI has the potential to efficiently make numerous human decisions, there is limited understanding of how individuals respond to its use in social interaction. In particular, it remains unclear how individuals engage with algorithms when the interaction entails consequences for other people. Here, we report the results of a large-scale pre-registered online experiment (N = 3,552) indicating diminished fairness, trust, trustworthiness, cooperation, and coordination by human players in economic two-player games when the decision of the interaction partner is taken over by ChatGPT. In contrast, we observe no adverse welfare effects when individuals are uncertain about whether they are interacting with a human or generative AI. Therefore, the promotion of AI transparency, often suggested as a solution to mitigate the negative impacts of generative AI on society, shows a detrimental effect on welfare in our study. Concurrently, participants frequently delegate decisions to ChatGPT, particularly when the AI's involvement is undisclosed, and individuals struggle to discern between AI and human decisions.
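The welfare measures named in the abstract (fairness, trust, trustworthiness, cooperation, coordination) are the standard outcomes of classic two-player games such as the ultimatum game, trust game, prisoner's dilemma, and coordination game. As a rough illustration of how welfare can be operationalized in such settings, here is a minimal Python sketch of the canonical trust game; the endowment of 10 and multiplier of 3 are common textbook values, not the paper's actual parameters, which the abstract does not state.

```python
# Minimal sketch of the canonical two-player trust game.
# Parameter values (endowment, multiplier) are illustrative textbook
# defaults, NOT taken from the paper.

def trust_game(endowment, sent, multiplier, returned):
    """Return (trustor_payoff, trustee_payoff).

    The trustor sends `sent` of their endowment; the experimenter
    multiplies it; the trustee returns `returned` from the multiplied pot.
    """
    assert 0 <= sent <= endowment
    pot = sent * multiplier
    assert 0 <= returned <= pot
    trustor_payoff = endowment - sent + returned
    trustee_payoff = pot - returned
    return trustor_payoff, trustee_payoff

# Full trust with an even split maximizes joint welfare (total payoff 30);
# sending nothing, the self-interested prediction, yields only 10 in total.
print(trust_game(endowment=10, sent=10, multiplier=3, returned=15))  # (15, 15)
print(trust_game(endowment=10, sent=0, multiplier=3, returned=0))    # (10, 0)
```

In this framing, "welfare-reducing" behavior corresponds to lower amounts sent and returned, which shrink the joint payoff; the experiment reports exactly this pattern when a partner's decision is known to be delegated to ChatGPT.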

[34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. 
Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. 
& Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. 
[38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. 
Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. 
Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. 
Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. 
M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. 
& Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. 
Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. 
Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. 
Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. 
[28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. 
Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. 
Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). 
[37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. 
Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. 
The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. 
The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). 
URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). 
URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). 
Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. 
Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
  2. Lin, Z. et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science 379, 1123–1130 (2023). [3] Peng, C. et al. A study of generative large language model for medical research and healthcare. npj Digital Medicine 6, 210 (2023). [4] Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023). [5] Yan, M., Cerri, G. G. & Moraes, F. Y. Chatgpt and medicine: how ai language models are shaping the future and health related careers. Nature Biotechnology 41, 1657–1658 (2023). [6] Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. 
& Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Peng, C. et al. A study of generative large language model for medical research and healthcare. npj Digital Medicine 6, 210 (2023). [4] Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023). [5] Yan, M., Cerri, G. G. & Moraes, F. Y. Chatgpt and medicine: how ai language models are shaping the future and health related careers. Nature Biotechnology 41, 1657–1658 (2023). [6] Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). 
[8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. 
Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023). [5] Yan, M., Cerri, G. G. & Moraes, F. Y. Chatgpt and medicine: how ai language models are shaping the future and health related careers. Nature Biotechnology 41, 1657–1658 (2023). [6] Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. 
Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). 
[35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Yan, M., Cerri, G. G. & Moraes, F. Y. Chatgpt and medicine: how ai language models are shaping the future and health related careers. Nature Biotechnology 41, 1657–1658 (2023). [6] Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. 
Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. 
Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). 
URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. 
Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. 
B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. 
Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. 
Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). 
[32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. 
The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. 
[38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. 
[38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. 
Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. 
The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
  3. Peng, C. et al. A study of generative large language model for medical research and healthcare. npj Digital Medicine 6, 210 (2023). [4] Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023). [5] Yan, M., Cerri, G. G. & Moraes, F. Y. Chatgpt and medicine: how ai language models are shaping the future and health related careers. Nature Biotechnology 41, 1657–1658 (2023). [6] Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. 
Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023). [5] Yan, M., Cerri, G. G. & Moraes, F. Y. Chatgpt and medicine: how ai language models are shaping the future and health related careers. Nature Biotechnology 41, 1657–1658 (2023). [6] Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. 
Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. 
An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Yan, M., Cerri, G. G. & Moraes, F. Y. Chatgpt and medicine: how ai language models are shaping the future and health related careers. Nature Biotechnology 41, 1657–1658 (2023). [6] Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). 
M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. 
Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. 
Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. 
Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. 
Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). 
[25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. 
Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. 
The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. 
& Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. 
Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. 
Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. 
The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. 
[36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). 
URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. 
The American Economic Review 84, 658–673 (1994).
  4. Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023). [5] Yan, M., Cerri, G. G. & Moraes, F. Y. Chatgpt and medicine: how ai language models are shaping the future and health related careers. Nature Biotechnology 41, 1657–1658 (2023). [6] Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. 
Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Yan, M., Cerri, G. G. & Moraes, F. Y. Chatgpt and medicine: how ai language models are shaping the future and health related careers. Nature Biotechnology 41, 1657–1658 (2023). [6] Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). 
[11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. 
Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. 
SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? 
How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. 
& Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. 
& Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). 
[35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. 
Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. 
Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. 
Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). 
URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. 
Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). 
[35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. 
Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). 
URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. 
The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. 
  5. Chatgpt and medicine: how ai language models are shaping the future and health related careers. Nature Biotechnology 41, 1657–1658 (2023). [6] Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. 
[26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). 
[14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. 
Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. 
arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Abramoff, M. D. et al. 
Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). 
[29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. 
Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. 
M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. 
Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). 
URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. 
The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. 
An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). 
[22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. 
  6. Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 0, eadi2336 (forthcoming). [7] Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. 
Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Bi, K. et al. Accurate medium-range global weather forecasting with 3d neural networks. Nature 619, 533–538 (2023). [8] Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). 
[15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. 
Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Abramoff, M. D. et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. npj Digital Medicine 6, 184 (2023). [9] Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. 
Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. 
Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. 
Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). 
[22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). 
[13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. 
The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? 
a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). 
URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). 
URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. 
An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. 
Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). 
URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Noy, S. & Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 381, 187–192 (2023). [10] Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). 
[25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Tricomi, E., Rangel, A., Camerer, C. F. & O’Doherty, J. P. Neural evidence for inequality-averse social preferences. Nature 463, 1089–1091 (2010). [11] Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. 
Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). 
[37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003). [12] Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. 
Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends in Cognitive Sciences 8, 185–190 (2004). [13] Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. 
GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. 
Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. 
An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. 
Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. 
Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). 
[37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. 
The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). 
URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). 
URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. 
An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Henrich, J. et al. In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review 91, 73–78 (2001). [14] Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). 
[22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Pataranutaporn, P., Liu, R., Finn, E. & Maes, P. Influencing human-ai interaction by priming beliefs about ai can increase perceived trustworthiness, empathy and effectiveness. 
Nature Machine Intelligence 5, 1076–1086 (2023). [15] Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. 
A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Bauer, K., Liebich, L., Hinz, O. & Kosfeld, M. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. 
Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? 
a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). 
[20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. 
& Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. 
Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. 
A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). 
[35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. 
Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). 
[37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. 
M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. 
Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. 
[32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). 
URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. 
Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. 
Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. 
Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. 
Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. 
An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. 
Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. 
[39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. 
Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. 
Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. 
& Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. 
Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). 
When will workers follow an algorithm? A field experiment with a retail business. Management Science 67, 1670–1695 (2021).
[24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019).
[25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf.
[26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021).
[27] OpenAI. ChatGPT Jan 30 version (2023). URL https://chat.openai.com.
[28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982).
[29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995).
[30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma game. The Quarterly Journal of Economics 76, 424–436 (1962).
[31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011).
[32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004).
[33] Mengel, F. Risk and temptation: A meta-study on Prisoner’s Dilemma games. The Economic Journal 128, 3182–3209 (2017).
[34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021).
[35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. AI decisions in human interaction (2023). URL https://osf.io/fvk2c/.
[36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979).
[37] Similarweb. ChatGPT usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/.
[38] Nerdynav. ChatGPT usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/.
[39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5.
[40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by algorithms? How AI-generated and human-written advice shape (dis)honesty. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056.
[41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic games: An introduction and guide for research. Collabra: Psychology 7, 19004 (2021).
[42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. 
An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. 
Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). 
[37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. 
Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). 
[37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. 
The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
[29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. 
Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). 
URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). 
URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. 
Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. 
B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. 
Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. 
Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). 
[32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. 
The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. 
[38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. 
[38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. 
Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. 
The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
[38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. 
A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. 
Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
[39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. 
Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. 
Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. 
& Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. 
Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). 
URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? 
How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
& Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. 
Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. 
The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. 
Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. 
[40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. 
The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. 
Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). 
[35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). 
URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. 
[40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
  15. Decoding GPT’s hidden “rationality” of cooperation. SSRN Electronic Journal (2023). URL https://www.ssrn.com/abstract=4576036. [16] Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). 
[37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The Emergence of Economic Rationality of GPT (2023). URL http://arxiv.org/abs/2305.12763. ArXiv:2305.12763 [econ, q-fin]. [17] Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. 
The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). 
URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). 
URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. 
Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. 
B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. 
Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. 
Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). 
[32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. 
The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. 
[38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. 
[38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. 
Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. 
The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
  17. Guo, F. GPT Agents in Game Theory Experiments (2023). URL http://arxiv.org/abs/2305.05516. ArXiv:2305.05516 [econ, q-fin]. [18] Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. 
Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dargnies, M.-P., Hakimov, R. & Kübler, D. F. Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence. SSRN Electronic Journal (2022). URL https://www.ssrn.com/abstract=4238275. [19] Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. 
Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Chugunova, M. & Sele, D. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. 
Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. 
The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. 
The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). 
URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). 
URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. 
[38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. 
A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. 
Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
  19. We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022). [20] Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). 
URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Köbis, N. et al. Artificial intelligence can facilitate selfish decisions by altering the appearance of interaction partners (2023). URL https://doi.org/10.48550/arXiv.2306.04484. arXiv:2306.04484. [21] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. 
& Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? 
How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. 
Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). 
[25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. 
Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
  21. Rise of the machines: Delegating decisions to autonomous ai. Computers in Human Behavior 134, 107308 (2022). [22] von Schenk, A., Klockmann, V. & Köbis, N. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). von Schenk, A., Klockmann, V. & Köbis, N. 
Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. 
Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). 
URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. 
The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. 
The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). 
[34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. 
The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. 
& Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). 
[35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. 
The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. 
The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
  22. Social preferences toward humans and machines: A systematic experiment on the role of machine payoffs. Perspectives on Psychological Science 0, 17456916231194949 (forthcoming). URL https://doi.org/10.1177/17456916231194949. PMID: 37751604. [23] Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. 
Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019).
[25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf.
[26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021).
[27] OpenAI. ChatGPT Jan 30 version (2023). URL https://chat.openai.com.
[28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982).
[29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995).
[30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game. The Quarterly Journal of Economics 76, 424–436 (1962).
[31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011).
[32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004).
[33] Mengel, F. Risk and Temptation: A Meta-study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017).
[34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021).
[35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. AI decisions in human interaction (2023). URL https://osf.io/fvk2c/.
[36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979).
[37] Similarweb. ChatGPT usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/.
[38] Nerdynav. ChatGPT usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/.
[39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5.
[40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056.
[41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021).
[42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
  23. Kawaguchi, K. When will workers follow an algorithm? a field experiment with a retail business. Management Science 67, 1670–1695 (2021). [24] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019). [25] Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. 
& Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf. [26] Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. 
Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Zhou, W., Lin, M., Xiao, M. & Fang, L. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021). [27] OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). 
URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). OpenAI. Chatgpt jan 30 version (2023). URL https://chat.openai.com. [28] Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982). [29] Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. 
Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995). [30] Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). 
[42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Lave, L. B. An empirical approach to the Prisoners’ Dilemma Game*. The Quarterly Journal of Economics 76, 424–436 (1962). [31] Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011). [32] Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. 
The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004). [33] Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mengel, F. Risk and Temptation: A Meta‐study on Prisoner’s Dilemma Games. The Economic Journal 128, 3182–3209 (2017). [34] Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. 
The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021). [35] Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. Ai decisions in human interaction. (2023). URL https://osf.io/fvk2c/. [36] Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979). [37] Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). 
URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Similarweb: Chatgpt usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/. [38] Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Nerdynav: Chatgpt usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/. [39] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5. [40] Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty*. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056. [41] Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). 
Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021). [42] Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994). Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).
24. Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019).
25. Greiner, B., Grunwald, P., Lindner, T., Lintner, G. & Wiernsperger, M. Incentives, framing, and trust in algorithmic advice: An experimental study (2022). URL https://www.uibk.ac.at/smt/international-management/research/greiner-et-al_algorithmic-advice.pdf.
26. Exploitation and exploration: Improving search precision on e-commerce platforms. Available at SSRN 3762144 (2021).
27. OpenAI. ChatGPT Jan 30 version (2023). URL https://chat.openai.com.
28. Güth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3, 367–388 (1982).
29. Berg, J., Dickhaut, J. & McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142 (1995).
30. Lave, L. B. An empirical approach to the Prisoners' Dilemma Game. The Quarterly Journal of Economics 76, 424–436 (1962).
31. Johnson, N. D. & Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology 32, 865–889 (2011).
32. Oosterbeek, H., Sloof, R. & van de Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics 7, 171–188 (2004).
  33. Mengel, F. Risk and Temptation: A Meta-study on Prisoner's Dilemma Games. The Economic Journal 128, 3182–3209 (2017).
  34. Dal Bó, P., Fréchette, G. R. & Kim, J. The determinants of efficient behavior in coordination games. Games and Economic Behavior 130, 352–368 (2021).
  35. Dvorak, F., Fischbacher, U., Fehrler, S. & Stumpf, R. AI decisions in human interaction. (2023). URL https://osf.io/fvk2c/.
  36. Holm, S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70 (1979).
  37. Similarweb: ChatGPT usage statistics (retrieved 2023-12-08). URL https://www.similarweb.com/website/chat.openai.com/.
  38. Nerdynav: ChatGPT usage statistics (retrieved 2023-12-08). URL https://nerdynav.com/chatgpt-statistics/.
  39. Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517–521 (2019). URL https://doi.org/10.1038/s42256-019-0113-5.
  40. Leib, M., Köbis, N., Rilke, R. M., Hagens, M. & Irlenbusch, B. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty. The Economic Journal uead056 (2023). URL https://doi.org/10.1093/ej/uead056.
  41. Thielmann, I., Böhm, R., Ott, M. & Hilbig, B. E. Economic Games: An Introduction and Guide for Research. Collabra: Psychology 7, 19004 (2021).
  42. Mehta, J., Starmer, C. & Sugden, R. The nature of salience: An experimental investigation of pure coordination games. The American Economic Review 84, 658–673 (1994).