
Human-machine social systems (2402.14410v2)

Published 22 Feb 2024 in cs.SI, cs.CY, cs.HC, and physics.soc-ph

Abstract: From fake social media accounts and generative-AI chatbots to financial trading algorithms and self-driving vehicles, robots, bots, and algorithms are proliferating and permeating our communication channels, social interactions, economic transactions, and transportation arteries. Networks of multiple interdependent and interacting humans and autonomous machines constitute complex social systems where the collective outcomes cannot be deduced from either human or machine behavior alone. Under this paradigm, we review recent research from across a range of disciplines and identify general dynamics and patterns in situations of competition, coordination, cooperation, contagion, and collective decision-making, with context-rich examples from high-frequency trading markets, a social media platform, an open-collaboration community, and a discussion forum. To ensure more robust and resilient human-machine communities, researchers should study them using complex-system methods, engineers should explicitly design AI for human-machine and machine-machine interactions, and regulators should govern the ecological diversity and social co-evolution of humans and machines.

Dynamics and Patterns in Human-Machine Social Systems

Introduction

With the proliferation of autonomous machines—ranging from social bots to self-driving vehicles—in human societies, there is a pressing need to understand the dynamics of human-machine interactions. The resulting networks form complex adaptive social systems whose collective behaviors cannot be deduced from the characteristics of individual humans or machines alone. This survey examines research across multiple disciplines, identifying common patterns and dynamics in human-machine communities in contexts of competition, coordination, cooperation, contagion, and collective decision-making, and discusses their implications for future developments in AI.

Human-Machine Interactions

The survey offers an extensive review of interactions within human-machine systems, highlighting significant differences in human reactions to bots and machines based on awareness, intentions, and the nature of the task at hand. Notable findings include:

  • Machines' behavior and decision-making processes differ markedly from those of humans, often leading to unexpected collective outcomes.
  • Awareness of interaction with a machine alters human behavior, with evidence suggesting that humans tend to act more rationally and selfishly in such scenarios.
  • The implications of these interactions vary across contexts, shaping outcomes in competition, cooperation, and other modes of collective behavior.

Collective Dynamics in Human-Machine Social Systems

This section synthesizes insights into the collective dynamics arising from human-machine interactions. The key areas explored include:

  • Competition, where algorithmic participants in markets impact efficiency, liquidity, and volatility, demonstrating both stabilizing and destabilizing effects.
  • Coordination, showing how bots can help break deadlocks by injecting non-humanlike randomness into the system.
  • Cooperation, where simulations and experiments suggest that strategic placement and behavior of machines can foster human cooperation under certain conditions.
  • Contagion, relating to how information, opinions, and behaviors spread in networks, with bots being able to significantly influence human actions indirectly.
  • Collective decision-making, emphasizing the potential of hybrid human-machine systems to leverage machine diversity for improving decision accuracy and innovation.
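
The last point—that diversity improves collective accuracy—can be made concrete with Page's diversity prediction theorem: a crowd's squared error equals the average individual squared error minus the variance (diversity) of the estimates, so a more diverse pool of human and machine estimators mechanically reduces collective error. A minimal sketch with hypothetical numbers (not data from the paper):

```python
# Diversity prediction theorem:
#   collective_error = avg_individual_error - diversity
def crowd_decomposition(estimates, truth):
    """Decompose a crowd's squared error into its two components."""
    n = len(estimates)
    crowd = sum(estimates) / n  # collective (mean) estimate
    collective_error = (crowd - truth) ** 2
    avg_individual_error = sum((x - truth) ** 2 for x in estimates) / n
    diversity = sum((x - crowd) ** 2 for x in estimates) / n
    return collective_error, avg_individual_error, diversity

# Hypothetical estimates of a quantity whose true value is 100.0,
# e.g. from a mixed pool of human and machine forecasters.
estimates = [80.0, 95.0, 110.0, 125.0]
ce, aie, div = crowd_decomposition(estimates, truth=100.0)
assert abs(ce - (aie - div)) < 1e-9  # the identity holds exactly
```

Because the identity is exact, the crowd can only do worse than its average member when diversity is low—one argument for maintaining ecological diversity among machine agents rather than deploying many copies of one model.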

Case Studies of Human-Machine Communities

The paper explores specific human-machine communities to contextualize general dynamics within particular settings, including:

  • High-frequency trading markets, where algorithmic trading shapes market efficiency and stability.
  • Social media platforms, particularly Twitter (X), exploring how bots influence information dissemination, public opinion, and polarization.
  • Wikipedia, highlighting the positive contributions of bots to content management and the platform's resilience.
  • Reddit, examining bots' roles in content moderation, community interaction, and their impact on user engagement.

Implications and Future Directions

The survey concludes with a comprehensive discussion of implications for research, AI design, and policy. Key points include the need for a nuanced understanding of human-machine social systems, advocacy for AI diversity to prevent systemic failures, and the importance of designing algorithms with explicit consideration for the types of interactions they will participate in. The authors call for future research to adopt a relational sociology of humans and machines, urging a systemic perspective on AI ethics and governance to better navigate the emerging challenges of increasingly integrated human-machine social systems.

Conclusion

The survey underscores the intricate dynamics and unpredictable outcomes of human-machine social systems, advocating for interdisciplinary research and a holistic approach to AI design and governance. By understanding the complex interplay between humans and machines, society can better prepare for the evolving landscape of these interactions, shaping a future where both human and machine agents contribute positively to collective outcomes.

  204. You followed my bot! Transforming robots into influential users in Twitter. First Monday 18, 1–14 (2013).
  205. Botivist: Calling volunteers to action using online bots. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW ’16, 813–822 (Association for Computing Machinery, New York, NY, USA, 2016).
  206. An experimental study of cryptocurrency market dynamics. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, 1–13 (Association for Computing Machinery, New York, NY, USA, 2018).
  207. Bail, C. A. et al. Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences 115, 9216–9221 (2018).
  208. The impact of automated investment on peer-to-peer lending: Investment behavior and platform efficiency. Journal of Global Information Management (JGIM) 29, 1–22 (2021).
  209. Lorenz, T. Welcome to the age of automated dating. Washington Post (2023).
  210. Salganik, M. J. Bit by Bit: Social Research in the Digital Age (Princeton University Press, 2019).
  211. Chang, D. Texas drivers are furious as 20 Cruise self-driving cars cause jam. https://www.dailymail.co.uk/news/article-12550179/Texas-drivers-furious-20-Cruises-gridlock-Austin.html (2023).
  212. Asimov, I. I, Robot, vol. 1 (Spectra, [1950] 2004).
  213. Do socialbots dream of popping the filter bubble? The role of socialbots in promoting deliberative democracy in social media. In Gehl, R. W. & Bakardjieva, M. (eds.) Socialbots and Their Friends: Digital Media and the Automation of Sociality, 187–206 (Routledge, United States of America, 2017).
  214. Awad, E. et al. The Moral Machine experiment. Nature 563, 59–64 (2018).
  215. Kenway, E. ‘Care bots’: A dream for carers or a dangerous fantasy? The Observer (2023).
  216. Brinkmann, L. et al. Machine culture. Nature Human Behaviour 7, 1855–1868 (2023).
  217. Chapter 9: Methods for literature reviews. In Lau, F. & Kuziemsky, C. (eds.) Handbook of eHealth Evaluation: An Evidence-based Approach, 157–178 (University of Victoria, Victoria, British Columbia, 2017).
  218. Beyond synthesis: Re-presenting heterogeneous research literature. Behaviour & Information Technology 32, 1199–1215 (2013).
  219. The integrative review: Updated methodology. Journal of Advanced Nursing 52, 546–553 (2005).
Authors
  1. Milena Tsvetkova
  2. Taha Yasseri
  3. Niccolo Pescetelli
  4. Tobias Werner