Long-term incognito human impersonation by LLM agents

Determine whether an autonomous large language model (LLM)-driven chat agent can sustain a long-term, incognito, text-based relationship while posing as a human and maintaining a coherent persona and dialogue across multi-day interactions.

Background

The paper investigates whether organized romance-baiting scams can be automated with LLMs, and whether such automation can match human operators during the emotionally intensive trust-building phase.

Before presenting their seven-day conversation study, the authors explicitly note that an unresolved question is whether an LLM-driven agent can plausibly maintain a human persona and coherent, continuous engagement over an extended incognito interaction.

References

However, important uncertainties remain; specifically, it remains unclear (1) whether an LLM agent can sustain a long-term, incognito relationship, posing as a human, while preserving coherence throughout the interaction (RQ3), and (2) whether LLMs can cultivate the exploitable trust on which such fraud schemes depend (RQ4).

Love, Lies, and Language Models: Investigating AI's Role in Romance-Baiting Scams (2512.16280 - Gressel et al., 18 Dec 2025), Section 4 (Threat Validation: LLM Automation), opening paragraphs