AI Gossip (2508.08143v1)
Abstract: Generative AI chatbots like OpenAI's ChatGPT and Google's Gemini routinely make things up. They "hallucinate" historical events and figures, legal cases, academic papers, non-existent tech products and features, biographies, and news articles. Recently, some have argued that these hallucinations are better understood as bullshit. Chatbots produce rich streams of text that look truth-apt without any concern for the truthfulness of what this text says. But can they also gossip? We argue that they can. After some definitions and scene-setting, we focus on a recent example to clarify what AI gossip looks like before considering some distinct harms -- what we call "technosocial harms" -- that follow from it.