
Effects of Persuasive Dialogues: Testing Bot Identities and Inquiry Strategies (2001.04564v2)

Published 13 Jan 2020 in cs.HC

Abstract: Intelligent conversational agents, or chatbots, can take on various identities and are increasingly engaging in more human-centered conversations with persuasive goals. However, little is known about how identities and inquiry strategies influence the conversation's effectiveness. We conducted an online study involving 790 participants to be persuaded by a chatbot for charity donation. We designed a two by four factorial experiment (two chatbot identities and four inquiry strategies) where participants were randomly assigned to different conditions. Findings showed that the perceived identity of the chatbot had significant effects on the persuasion outcome (i.e., donation) and interpersonal perceptions (i.e., competence, confidence, warmth, and sincerity). Further, we identified interaction effects among perceived identities and inquiry strategies. We discuss the findings for theoretical and practical implications for developing ethical and effective persuasive chatbots. Our published data, codes, and analyses serve as the first step towards building competent ethical persuasive chatbots.

Analyzing the Effects of Bot Identity and Inquiry Strategies on Persuasive Dialogues

The paper "Effects of Persuasive Dialogues: Testing Bot Identities and Inquiry Strategies" presents a meticulously designed empirical study of how chatbot identities and inquiry strategies interact in a persuasive setting. Specifically, it examines these factors in the context of convincing users to donate to a charity through a constrained conversation with an intelligent conversational agent.

Experimental Design

The research involved an online study in which 790 participants were randomly assigned to conditions in a two-by-four factorial design. Participants interacted with a chatbot designed to persuade them to donate to a charity. The experiment manipulated two primary factors: perceived chatbot identity (human-like name vs. clear bot identity) and inquiry strategy (combinations of personal and non-personal inquiries).
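The random assignment described above can be sketched in a few lines. This is a minimal illustration of a 2x4 between-subjects assignment; the condition labels below are hypothetical stand-ins, not the paper's exact condition names.

```python
import random

# Hypothetical factor levels mirroring the two-by-four factorial design:
# 2 perceived identities x 4 inquiry strategies = 8 cells.
IDENTITIES = ["human_name", "bot_identity"]
INQUIRY_STRATEGIES = [
    "personal_only",
    "nonpersonal_only",
    "personal_then_nonpersonal",
    "nonpersonal_then_personal",
]

def assign_condition(rng: random.Random) -> tuple[str, str]:
    """Randomly assign one participant to one of the 8 cells."""
    return rng.choice(IDENTITIES), rng.choice(INQUIRY_STRATEGIES)

rng = random.Random(0)  # seeded for reproducibility
assignments = [assign_condition(rng) for _ in range(790)]
```

With 790 participants and 8 cells, each cell receives roughly 99 participants in expectation, enough to estimate main effects and the identity-by-strategy interaction.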

Key Findings

The paper reports significant findings regarding both the main effects and interactions between the bot identity and inquiry strategies:

  1. Main Effect of Perceived Identity: Participants who perceived the chatbot as human were more likely to donate. This finding complicates the Computers Are Social Actors (CASA) paradigm, which posits that users apply social norms to computers much as they do to humans; here, it was specifically the human-like perception that boosted persuasion effectiveness.
  2. Inquiry Strategy Effect: Personal inquiries led to better outcomes when users perceived the chatbot as human. This supports the hypothesis that personalized interaction increases engagement and persuasive power.
  3. Interaction Effect: An intriguing interaction emerged between perceived identity and inquiry type. Participants who perceived the agent labeled "Jessie (bot)" as human nonetheless reported discomfort and showed reduced willingness to donate when asked personal inquiries, a pattern consistent with the Uncanny Valley effect.

These findings highlight the complexity of human perceptions in human-computer interaction, especially in persuasive contexts. Perception inconsistencies, where participants misidentified the bot's intended identity, further complicated these effects, suggesting that the disclosed identity influences outcomes only in some contexts.

Implications for AI Development

The paper's outcomes have practical implications for the design and deployment of persuasive chatbots. In contexts where persuasion is desirable, such as fundraising or health behavior change, employing human-like conversational styles with careful identity management could yield better engagement and persuasion results. However, ethical concerns arise around transparency and user autonomy, emphasizing the importance of clear identity disclosure in maintaining ethical standards. This aligns with regulations such as California's Autobot Law, which advocates clear disclosure of bot identities in interactions.

Future Directions

The research opens several avenues for future exploration. Enhancing chatbot capabilities with more sophisticated natural response algorithms would likely improve participant impressions and engagement. Further studies could examine longer conversation lengths and broader contexts to assess how these factors play out across different interaction environments. Additionally, exploring the ethical boundaries in persuasive bot design, especially with the increasing capabilities of AI systems, represents a crucial area of investigation.

Conclusion

In summary, this paper provides valuable insights into chatbot design, emphasizing how perceived human-like identities and inquiry personalization can be optimized for effective persuasion in digital communication channels. However, ethical considerations and clear identity disclosures remain paramount in leveraging these findings responsibly. As AI continues to evolve, future research must carefully balance technological advancement with user trust and ethical transparency.

Authors (6)
  1. Weiyan Shi
  2. Xuewei Wang
  3. Yoo Jung Oh
  4. Jingwen Zhang
  5. Saurav Sahay
  6. Zhou Yu
Citations (72)