Views on AI aren't binary -- they're plural
Abstract: Recent developments in AI have brought broader attention to tensions between two overlapping communities, "AI Ethics" and "AI Safety." In this article we (i) characterize this perceived binary, (ii) argue that a simple binary is not an accurate model of AI discourse, and (iii) provide concrete suggestions for how individuals can help avoid the emergence of us-vs-them conflict in the broad community of people working on AI development and governance. While we focus on "AI Ethics" and "AI Safety," the general lessons apply to related tensions, including those between accelerationist ("e/acc") and cautious stances on AI development.