A Study of Three Influencer Archetypes for the Control of Opinion Spread in Time-Varying Social Networks (2403.18163v1)

Published 27 Mar 2024 in cs.SI, cs.SY, eess.SY, and physics.soc-ph

Abstract: In this work we consider the impact of information spread in time-varying social networks, where agents request to follow other agents with aligned opinions while dropping ties to neighbors whose posts are too dissimilar to their own views. Opinion control and rhetorical influence have a very long history, employing various methods including education, persuasion, propaganda, marketing, and manipulation through mis-, dis-, and mal-information. The automation of opinion controllers, however, has only recently become easily deployable at wide scale, with the advent of LLMs and generative AI that can translate the quantified commands from opinion controllers into actual content with the appropriate nuance. Automated agents in social networks can be deployed for various purposes, such as breaking up echo chambers, bridging valuable new connections between agents, or shaping the opinions of a target population -- and all of these raise important ethical concerns that deserve serious attention and thoughtful discussion and debate. This paper attempts to contribute to this discussion by considering three archetypal influencing styles observed among human influencers in these settings, comparing and contrasting the impact of these different control methods on the opinions of agents in the network. We demonstrate the efficacy of current generative AI for generating nuanced content consistent with the command signal from automatic opinion controllers like these, and we report on frameworks for approaching the relevant ethical considerations.
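The network mechanism the abstract describes — agents following like-minded agents and dropping ties to dissimilar neighbors — resembles a bounded-confidence opinion model on a time-varying graph. The following is a minimal illustrative sketch of such dynamics, not the paper's actual model; the confidence bound `eps`, step size `mu`, and the specific rewiring rule are assumptions chosen for illustration:

```python
import random

def step(opinions, edges, eps=0.3, mu=0.2, rng=random):
    """One synchronous update of a bounded-confidence opinion model on a
    time-varying directed graph. Each agent i:
      1. averages its opinion toward neighbors within the confidence bound eps,
      2. keeps ties only to neighbors whose opinions lie within eps (drops the rest),
      3. requests one new tie to a randomly chosen like-minded agent.
    """
    n = len(opinions)
    new_opinions = list(opinions)
    new_edges = set()
    for i in range(n):
        # neighbors i currently follows whose opinions are close enough
        close = [j for j in range(n)
                 if (i, j) in edges and abs(opinions[i] - opinions[j]) <= eps]
        if close:
            avg = sum(opinions[j] for j in close) / len(close)
            new_opinions[i] += mu * (avg - opinions[i])
        # keep only ties to sufficiently similar neighbors
        for j in close:
            new_edges.add((i, j))
        # follow request: link to one random agent with an aligned opinion
        candidates = [j for j in range(n)
                      if j != i and abs(opinions[i] - opinions[j]) <= eps]
        if candidates:
            new_edges.add((i, rng.choice(candidates)))
    return new_opinions, new_edges
```

Iterating `step` from random initial opinions typically fragments the graph into clusters of mutually similar agents, which is the echo-chamber behavior an automated influencer (the paper's controller) would then try to steer or break up.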

Authors (2)
  1. Michael DeBuse (1 paper)
  2. Sean Warnick (9 papers)