Cultivation of exploitable trust by LLM agents in romance-baiting scams

Establish whether chat agents driven by large language models (LLMs) can, over prolonged online conversations with human partners, cultivate the exploitable emotional trust on which romance-baiting scams depend.

Background

A central risk explored by the paper is that LLM agents may surpass human operators in building the emotional trust required for romance-baiting scams, potentially increasing victim compliance during later exploitation stages.

Prior to reporting empirical results, the authors explicitly state uncertainty about whether LLMs can generate the level of exploitable trust on which these scams depend, motivating their long-horizon study.

References

However, important uncertainties remain: specifically, (1) whether an LLM agent can sustain a long-term, incognito relationship, posing as human, while preserving coherence throughout the interaction (RQ3), and (2) whether LLMs can cultivate the exploitable trust on which such fraud schemes depend (RQ4).

Love, Lies, and Language Models: Investigating AI's Role in Romance-Baiting Scams (arXiv:2512.16280, Gressel et al., 18 Dec 2025), Section 4 (Threat Validation: LLM Automation), opening paragraphs