ComPeer: A Generative Conversational Agent for Proactive Peer Support (2407.18064v2)

Published 25 Jul 2024 in cs.HC

Abstract: Conversational Agents (CAs) acting as peer supporters have been widely studied and demonstrated beneficial for people's mental health. However, previous peer support CAs either are user-initiated or follow predefined rules to initiate the conversations, which may discourage users to engage and build relationships with the CAs for long-term benefits. In this paper, we develop ComPeer, a generative CA that can proactively offer adaptive peer support to users. ComPeer leverages LLMs to detect and reflect significant events in the dialogue, enabling it to strategically plan the timing and content of proactive care. In addition, ComPeer incorporates peer support strategies, conversation history, and its persona into the generative messages. Our one-week between-subjects study (N=24) demonstrates ComPeer's strength in providing peer support over time and boosting users' engagement compared to a baseline user-initiated CA.

Citations (1)

Summary

  • The paper introduces a proactive, generative CA that autonomously detects significant dialogue events to initiate empathetic peer support.
  • It integrates long-term memory, event detection, and a generative dialogue architecture to adapt responses based on conversation history and support strategies.
  • A one-week study with 24 participants showed increased engagement and stress relief, despite challenges with tone consistency and contextual relevance.

An Evaluation of Proactive Conversational Agents for Peer Support

The paper "ComPeer: A Generative Conversational Agent for Proactive Peer Support" presents a novel approach to designing conversational agents (CAs) that can provide adaptive peer support. The paper focuses on the challenges faced by traditional peer support systems, whose agents engage only reactively or according to predefined conversation rules. These traditional methods often limit the long-term engagement and relational depth essential for effective mental health support. The authors propose a model named ComPeer, which leverages LLMs to create a proactive conversational agent capable of initiating dialogue in a way that mirrors the dynamic and empathetic interactions provided by human counselors.

Key Contributions and Methodology

The researchers acknowledge the limitations of user-initiated conversational agents and introduce ComPeer, a system that autonomously detects significant events in dialogues and uses these insights to plan and generate proactive interactions. ComPeer integrates peer support strategies, assimilates conversation history, and adapts potential responses in accordance with its programmed persona. A one-week between-subjects study involving 24 participants demonstrated that this proactive approach enhances user engagement compared to a baseline, user-initiated CA.

The practical implications of this research are substantial. Traditional CAs require user initiation or follow static rules, whereas ComPeer dynamically interprets user inputs to maximize relational and supportive engagement. Its design includes several advanced components:

  1. Memory and Event Detection: Embedding long- and short-term memory allows ComPeer to retrieve relevant past interactions, enhancing its ability to conduct coherent, contextually aware conversations. The event detection mechanism identifies user experiences that warrant follow-up, enabling proactive dialogical engagement.
  2. Generative Dialogue Architecture: The adoption of LLMs facilitates the generation of adaptive, empathetic responses. The Dialogue Generation Module focuses on crafting messages that are consistent with the agent's persona, leveraging a subset of peer support strategies tailored for proactive care, such as self-disclosure and inquiry.
  3. Scheduling and Reflective Analysis: The Schedule Module orchestrates the timing and nature of sent messages, balancing user response with pre-generated plans derived from reflective assessments of user interactions.
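The interplay of these three components can be sketched as a toy pipeline. This is an illustrative assumption, not the authors' implementation: ComPeer uses LLM prompting for event detection and message generation, whereas the detector and generator below are keyword and template stand-ins, and all class and function names are invented for this sketch.

```python
from dataclasses import dataclass, field
import datetime as dt

@dataclass
class Event:
    """A significant user experience flagged for proactive follow-up."""
    description: str
    follow_up_at: dt.datetime

@dataclass
class Memory:
    history: list = field(default_factory=list)  # short-term: recent turns
    events: list = field(default_factory=list)   # long-term: detected events

    def add_turn(self, speaker: str, text: str) -> None:
        self.history.append((speaker, text))

def detect_event(turn_text: str, now: dt.datetime):
    # Stand-in for the LLM-based event detector: a keyword heuristic
    # flags stress-related utterances and schedules a next-day check-in.
    if any(k in turn_text.lower() for k in ("exam", "deadline", "stressed")):
        return Event(description=turn_text,
                     follow_up_at=now + dt.timedelta(days=1))
    return None

def due_follow_ups(memory: Memory, now: dt.datetime) -> list:
    # Schedule Module sketch: return events whose follow-up time has arrived.
    return [e for e in memory.events if e.follow_up_at <= now]

def generate_proactive_message(event: Event) -> str:
    # Stand-in for the LLM Dialogue Generation Module; a real system would
    # condition on persona, conversation history, and support strategies.
    return f"Hey, how did things go with: {event.description!r}?"

# Usage: detect an event, then fire the follow-up once its time arrives.
mem = Memory()
now = dt.datetime(2024, 7, 25, 9, 0)
mem.add_turn("user", "I'm stressed about my exam tomorrow.")
ev = detect_event(mem.history[-1][1], now)
if ev:
    mem.events.append(ev)
later = now + dt.timedelta(days=1, hours=1)
for e in due_follow_ups(mem, later):
    print(generate_proactive_message(e))
```

The point of the sketch is the separation of concerns the paper describes: detection decides *whether* to follow up, scheduling decides *when*, and generation decides *what* to say.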

Experimental Results and Findings

The paper reveals that participants interacting with ComPeer reported more frequent engagement and a sustained perception of relief from stress. The proactive nature of ComPeer was found to effectively provide social support without overwhelming users, fostering a sense of companionship and encouragement over the interaction period.

However, the evaluation also highlighted challenges regarding the consistency of message tone and relevance, with some users noting redundancy or perceived mechanical responses. Additionally, concerns about user privacy and unwarranted reliance on a synthetic agent were raised.

Implications and Future Directions

The research underlines the potential of LLMs in crafting proactive agents that provide subtle yet effective mental health support. By simulating the nuances of human interaction, such systems can potentially reach broader audiences, offering scalable solutions for those otherwise devoid of timely social support.

Looking forward, the authors propose integrating content moderation mechanisms to mitigate misinformation risks, improving event detection accuracy, and enhancing context-adaptive dialog generation. Future research could focus on long-term user studies to better understand the evolving dynamics of user-agent interactions and their psychological impact over extended real-world deployments.

In conclusion, this paper enriches the field of Human-AI Interaction by advancing the capabilities of conversational agents in offering meaningful and proactive support, marking a significant step toward bridging the gap between artificial and human-centric support systems.
