ChatGPT and Bard Responses to Polarizing Questions (2307.12402v1)

Published 13 Jul 2023 in cs.CL

Abstract: Recent developments in natural language processing have demonstrated the potential of LLMs to improve a range of educational and learning outcomes. Of recent chatbots based on LLMs, ChatGPT and Bard have made it clear that AI technology will have significant implications for the way we obtain and search for information. However, these tools sometimes produce text that is convincing but incorrect, known as hallucinations. As such, their use can distort scientific facts and spread misinformation. To counter polarizing responses from these tools, it is critical to provide an overview of such responses so stakeholders can determine which topics tend to produce more contentious responses -- key to developing targeted regulatory policy and interventions. In addition, there currently exists no annotated dataset of ChatGPT and Bard responses around possibly polarizing topics, which is central to the above aims. We address these issues through the following contribution: focusing on highly polarizing topics in the US, we created and described a dataset of ChatGPT and Bard responses. Broadly, our results indicated a left-leaning bias for both ChatGPT and Bard, with Bard more likely to provide responses around polarizing topics. Bard seemed to have fewer guardrails around controversial topics and appeared more willing to provide comprehensive, somewhat human-like responses. Bard may thus be more likely to be abused by malicious actors. Stakeholders may utilize our findings to mitigate misinformative and/or polarizing responses from LLMs.

Authors (12)
  1. Abhay Goyal (9 papers)
  2. Muhammad Siddique (3 papers)
  3. Nimay Parekh (4 papers)
  4. Zach Schwitzky (1 paper)
  5. Clara Broekaert (1 paper)
  6. Connor Michelotti (1 paper)
  7. Allie Wong (1 paper)
  8. Lam Yin Cheung (6 papers)
  9. Robin O Hanlon (1 paper)
  10. Munmun De Choudhury (42 papers)
  11. Roy Ka-Wei Lee (68 papers)
  12. Navin Kumar (14 papers)
Citations (2)