Views on AI aren't binary -- they're plural

Published 21 Dec 2023 in cs.CY (arXiv:2312.14230v2)

Abstract: Recent developments in AI have brought broader attention to tensions between two overlapping communities, "AI Ethics" and "AI Safety." In this article we (i) characterize this perceived schism as a false binary, (ii) argue that a simple binary is not an accurate model of AI discourse, and (iii) provide concrete suggestions for how individuals can help avoid the emergence of us-vs-them conflict in the broad community of people working on AI development and governance. While we focus on "AI Ethics" and "AI Safety," the general lessons apply to related tensions, including those between accelerationist ("e/acc") and cautious stances on AI development.

