Mitigating hallucinations and misinformation in persuasive language agents
Develop robust, empirically validated methods to mitigate hallucinations, misinformation, and other unintended consequences when deploying LLM-based persuasive language agents, and evaluate their effectiveness in real-world automated marketing contexts such as AI-generated real estate listing descriptions.
From an ethical standpoint, we recognize the potential risks of deploying persuasive language agents, particularly regarding LLM hallucinations and misinformation. To address these risks, we conduct a fine-grained fact-checking analysis (see \cref{sec: exp_hallucination_verification}) and find no substantial hallucination risks in the agents we design. However, we acknowledge that this remains an open challenge, and we encourage further investigation into mitigating potential unintended consequences.
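To make this kind of verification concrete, the following is a minimal sketch in Python, assuming the generated listing can be compared against a structured source record; the function names, attribute set, and regex patterns are illustrative assumptions and do not reproduce the paper's fact-checking pipeline. It flags numeric claims in a generated real estate listing that contradict ground-truth attributes.

\begin{verbatim}
# Illustrative sketch (not the paper's implementation): verify numeric claims
# in an AI-generated real estate listing against a structured source record.
import re

def extract_numeric_claims(listing_text):
    """Pull (attribute, value) claims such as '3 bedrooms' or '1,250 sq ft'
    from generated text using simple, hypothetical regex patterns."""
    patterns = {
        "bedrooms": r"(\d+)\s*(?:bed(?:room)?s?)\b",
        "bathrooms": r"(\d+(?:\.\d+)?)\s*(?:bath(?:room)?s?)\b",
        "square_feet": r"([\d,]+)\s*(?:sq\.?\s*ft|square\s+feet)",
    }
    claims = {}
    for attr, pat in patterns.items():
        match = re.search(pat, listing_text, flags=re.IGNORECASE)
        if match:
            claims[attr] = float(match.group(1).replace(",", ""))
    return claims

def check_hallucinations(listing_text, source_record):
    """Compare each extracted claim with the ground-truth record and
    return the attributes whose claimed values disagree."""
    mismatches = []
    for attr, claimed in extract_numeric_claims(listing_text).items():
        truth = source_record.get(attr)
        if truth is not None and abs(claimed - float(truth)) > 1e-6:
            mismatches.append((attr, claimed, truth))
    return mismatches

if __name__ == "__main__":
    record = {"bedrooms": 3, "bathrooms": 2, "square_feet": 1250}
    generated = ("Charming home with 4 bedrooms, 2 bathrooms, "
                 "and 1,250 sq ft of living space.")
    print(check_hallucinations(generated, record))
    # [('bedrooms', 4.0, 3)] -> the bedroom count is a potential hallucination
\end{verbatim}

Rule-based extraction of this kind covers only explicitly stated numeric attributes; qualitative free-text claims would require a separate entailment or retrieval-based check.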