Cultivation of exploitable trust by LLM agents in romance-baiting
Establish whether chat agents driven by large language models (LLMs) can cultivate the exploitable emotional trust on which romance-baiting scams depend during prolonged online conversations with human partners.
References
However, important uncertainties remain; specifically, it remains unclear (1) whether an LLM agent can sustain a long-term, incognito relationship, posing as human, while preserving coherence throughout the interaction (RQ3), and (2) whether LLMs can cultivate the exploitable trust on which such fraud schemes depend (RQ4).
— Love, Lies, and Language Models: Investigating AI's Role in Romance-Baiting Scams
(2512.16280 - Gressel et al., 18 Dec 2025) in Section 4 (Threat Validation: LLM Automation), opening paragraphs