Mitigating hallucinations and misinformation in persuasive language agents

Develop robust, empirically validated methods to mitigate hallucinations, misinformation, and other unintended consequences when deploying LLM-based persuasive language agents, and evaluate their effectiveness in real-world automated marketing settings such as AI-generated real estate listing descriptions.

Background

The paper proposes an agentic framework (AI Realtor) for grounded persuasive language generation in automated marketing, using real estate listings as the testbed. While the system performs strongly in human-subject evaluations and includes a fact-checking stage to minimize hallucinations, the authors emphasize the ethical risks of deploying persuasive language agents.

Specifically, they highlight ongoing concerns around LLM hallucinations and misinformation. Although their fine-grained fact-checking analysis finds minimal hallucination in the agent's outputs, they explicitly note that mitigating such risks, along with broader unintended consequences, remains an open challenge and call for further investigation.
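As one concrete starting point for work on this problem, the sketch below illustrates the general kind of grounding check such an agent needs: it extracts simple, checkable claims from generated listing copy and compares them against the structured property record, flagging anything unsupported. The claim patterns, field names, and PropertyRecord schema are hypothetical and are not taken from the paper's fact-checking pipeline.

```python
import re
from dataclasses import dataclass


@dataclass
class PropertyRecord:
    """Hypothetical structured ground truth for a single listing."""
    bedrooms: int
    bathrooms: float
    sqft: int
    has_pool: bool


def extract_claims(listing_text: str) -> dict:
    """Pull simple, checkable claims out of generated listing copy."""
    claims = {}
    if m := re.search(r"(\d+)[\s-]*(?:bed(?:room)?s?)\b", listing_text, re.I):
        claims["bedrooms"] = int(m.group(1))
    if m := re.search(r"(\d+(?:\.\d+)?)[\s-]*(?:bath(?:room)?s?)\b", listing_text, re.I):
        claims["bathrooms"] = float(m.group(1))
    if m := re.search(r"([\d,]+)\s*(?:sq\.?\s*ft\.?|square feet)", listing_text, re.I):
        claims["sqft"] = int(m.group(1).replace(",", ""))
    if re.search(r"\bpool\b", listing_text, re.I):
        claims["has_pool"] = True  # naive: ignores negations like "no pool"
    return claims


def fact_check(listing_text: str, record: PropertyRecord) -> list[str]:
    """Return descriptions of claims that contradict the property record."""
    issues = []
    for field, claimed in extract_claims(listing_text).items():
        actual = getattr(record, field)
        if claimed != actual:
            issues.append(f"claimed {field}={claimed!r}, record says {actual!r}")
    return issues


if __name__ == "__main__":
    record = PropertyRecord(bedrooms=3, bathrooms=2.0, sqft=1450, has_pool=False)
    listing = (
        "Stunning 4-bedroom, 2-bath home offering 1,450 sq ft of light-filled "
        "living space and a sparkling backyard pool."
    )
    for issue in fact_check(listing, record):
        print("Unsupported claim:", issue)
```

Surface-level checks like this would, in practice, need to be paired with semantic verification of free-form claims against the listing's source data and with human-subject evaluation of whether mitigation degrades persuasiveness, which is the open empirical question this problem targets.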

References

From an ethical standpoint, we recognize the potential risks of deploying persuasive language agents, particularly regarding LLM hallucinations and misinformation. To address this, we conduct a fine-grained fact-checking analysis (see \cref{sec: exp_hallucination_verification}) and find no substantial hallucination risks in our designed agents. However, we acknowledge that this remains an open challenge and encourage further investigations into mitigating potential unintended consequences.

Grounded Persuasive Language Generation for Automated Marketing (2502.16810 - Wu et al., 24 Feb 2025) in Impact Statement