Trust AI Regulation? Discerning users are vital to build trust and effective AI regulation (2403.09510v1)

Published 14 Mar 2024 in cs.AI, cs.CY, cs.GT, cs.MA, and math.DS

Abstract: There is general agreement that some form of regulation is necessary both for AI creators to be incentivised to develop trustworthy systems, and for users to actually trust those systems. But there is much debate about what form these regulations should take and how they should be implemented. Most work in this area has been qualitative, and has not been able to make formal predictions. Here, we propose that evolutionary game theory can be used to quantitatively model the dilemmas faced by users, AI creators, and regulators, and provide insights into the possible effects of different regulatory regimes. We show that creating trustworthy AI and user trust requires regulators to be incentivised to regulate effectively. We demonstrate the effectiveness of two mechanisms that can achieve this. The first is where governments can recognise and reward regulators that do a good job. In that case, if the AI system is not too risky for users then some level of trustworthy development and user trust evolves. We then consider an alternative solution, where users can condition their trust decision on the effectiveness of the regulators. This leads to effective regulation, and consequently the development of trustworthy AI and user trust, provided that the cost of implementing regulations is not too high. Our findings highlight the importance of considering the effect of different regulatory regimes from an evolutionary game theoretic perspective.
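To make the evolutionary game theoretic framing concrete, the sketch below simulates a toy two-population replicator dynamic between users (trust vs. don't trust) and AI creators (safe vs. unsafe development), with the regulator folded into two parameters: the probability q that unsafe development is caught and the fine F it incurs. All payoff values and the two-strategy reduction are illustrative assumptions for exposition, not the paper's actual model, which involves three populations (users, AI creators, and regulators).

```python
# Toy two-population replicator dynamics: users vs. AI creators.
# Illustrative parameters only; the regulator is abstracted into (q, F).
b, c = 4.0, 3.0   # user's benefit from trustworthy AI / loss from unsafe AI
r, k = 5.0, 2.0   # creator's revenue when trusted / extra cost of safe development
q, F = 0.5, 6.0   # probability regulation catches unsafe development / fine imposed

x, y = 0.5, 0.5   # shares of trusting users and safe-developing creators
dt = 0.05
for _ in range(2000):
    # Expected payoffs for each strategy in each population.
    f_trust, f_not = y * b - (1 - y) * c, 0.0   # user: trust vs. don't trust
    g_safe = x * r - k                          # creator: develop safely
    g_unsafe = x * r - q * F                    # creator: cut corners, risk the fine
    # Standard two-population replicator updates.
    x += dt * x * (1 - x) * (f_trust - f_not)
    y += dt * y * (1 - y) * (g_safe - g_unsafe)

print(f"trusting users: {x:.3f}, safe creators: {y:.3f}")
```

In this toy model, effective regulation (q * F > k) drives both shares toward 1, while q = 0 collapses both trust and trustworthy development, mirroring the abstract's claim that regulators must be incentivised to regulate effectively before user trust can evolve.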
