Emma

Summary:

  • Large Language Models (LLMs) are vulnerable to indirect prompt injection attacks, which can exploit their integration into various applications.
  • Attackers can remotely manipulate LLM-integrated applications, leading to data theft, worming, information ecosystem contamination, and other security risks.

Tags:

Research