MAiDE-up: Multilingual Deception Detection of GPT-generated Hotel Reviews (2404.12938v2)

Published 19 Apr 2024 in cs.CL and cs.AI

Abstract: Deceptive reviews are becoming increasingly common, especially given the increase in performance and the prevalence of LLMs. While work to date has addressed the development of models to differentiate between truthful and deceptive human reviews, much less is known about the distinction between real reviews and AI-authored fake reviews. Moreover, most of the research so far has focused primarily on English, with very little work dedicated to other languages. In this paper, we compile and make publicly available the MAiDE-up dataset, consisting of 10,000 real and 10,000 AI-generated fake hotel reviews, balanced across ten languages. Using this dataset, we conduct extensive linguistic analyses to (1) compare the AI fake hotel reviews to real hotel reviews, and (2) identify the factors that influence the deception detection model performance. We explore the effectiveness of several models for deception detection in hotel reviews across three main dimensions: sentiment, location, and language. We find that these dimensions influence how well we can detect AI-generated fake reviews.

Summary

  • The paper introduces a novel MAiDE-up dataset of 20,000 hotel reviews across ten languages to benchmark deception detection.
  • The paper employs linguistic analysis tools like LIWC and XLM-RoBERTa to reveal that AI-generated reviews exhibit higher descriptive complexity and lower readability.
  • The paper finds that detection efficacy varies by language, underscoring the need for tailored multilingual models to counter deceptive online content.

Overview of the MAiDE-up Study on Multilingual Deception Detection

The research paper "MAiDE-up: Multilingual Deception Detection of GPT-generated Hotel Reviews" provides a comprehensive examination of AI-generated deceptive texts, focusing specifically on hotel reviews across ten languages. The paper addresses the increasing prevalence of AI-generated deceptive content, catalyzed by advancements in LLMs such as GPT-4. This research serves as a critical evaluation of how these AI-generated reviews compare linguistically with genuine reviews and how models can be utilized to detect such deception effectively.

Methodology and Dataset

The authors compile a novel dataset called MAiDE-up, consisting of 20,000 hotel reviews (10,000 real and 10,000 GPT-generated) across ten languages, balanced by location, sentiment, and language to support a comprehensive analysis. Real reviews are collected from Booking.com with checks for linguistic quality and authenticity, while the fake reviews are generated with GPT-4 using a detailed prompt design that simulates the writing style of genuine guest reviews.
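The balanced design (language × sentiment × location) can be illustrated with a small sketch. The prompt wording, the location list, and the `build_prompts` helper below are hypothetical, not the authors' actual prompts; only four of the ten languages (those named in the paper's results) are listed.

```python
from itertools import product

# Four of the paper's ten languages (the ones named in its results);
# the rest are omitted here. Locations are illustrative placeholders.
LANGUAGES = ["English", "German", "Romanian", "Korean"]
SENTIMENTS = ["positive", "negative"]
LOCATIONS = ["Bucharest", "New York", "Seoul"]

def build_prompts(languages, sentiments, locations, n_per_cell=1):
    """Return one generation prompt per (language, sentiment, location) cell,
    so every cell of the grid is equally represented."""
    prompts = []
    for lang, sent, loc in product(languages, sentiments, locations):
        for _ in range(n_per_cell):
            prompts.append(
                f"Write a realistic {sent} hotel review in {lang} "
                f"for a hotel in {loc}, in the style of a genuine guest."
            )
    return prompts

prompts = build_prompts(LANGUAGES, SENTIMENTS, LOCATIONS)
print(len(prompts))  # 4 languages x 2 sentiments x 3 locations = 24
```

Scaling `n_per_cell` up and using all ten languages would yield a grid of the size the paper reports, with every language/sentiment/location combination equally represented.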

Linguistic Analysis

The paper offers extensive linguistic analyses comparing syntactic and lexical elements of AI-generated reviews with real ones. Key areas of investigation include analytic writing, descriptiveness, readability, and topic modeling. Notably, AI-generated texts tend to exhibit a higher level of complexity, more frequent use of descriptive adjectives, and lower readability compared to real reviews. These attributes are systematically analyzed using the Linguistic Inquiry and Word Count (LIWC) tool for certain languages and other multilingual libraries for additional linguistic metrics.
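As a rough illustration of the kind of readability comparison involved, the sketch below implements the classic Flesch Reading Ease formula for English from scratch. This is not the paper's tooling (the authors use LIWC and other multilingual libraries), and the example review sentences are invented; the syllable counter is a crude vowel-group heuristic.

```python
import re

def count_syllables(word):
    """Rough English syllable count: number of contiguous vowel groups."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores indicate easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Invented examples: a plain review sentence vs. an ornate, adjective-heavy one.
plain = "The room was clean. The staff was nice."
ornate = ("The accommodation exemplified extraordinary sophistication, "
          "presenting meticulously curated amenities.")
print(flesch_reading_ease(plain) > flesch_reading_ease(ornate))  # True
```

Under a metric like this, the longer words and sentences typical of the AI-generated reviews in the study would translate directly into lower readability scores.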

Deception Detection Models

To investigate the feasibility of detecting AI-generated deception, the researchers evaluate several models:

  • Random Classifier as a baseline.
  • Naive Bayes Classifier for a simple interpretable model.
  • XLM-RoBERTa, a more robust and accurate model for multilingual text classification.

Of the three, XLM-RoBERTa performs best, achieving high accuracy in distinguishing AI-generated content from real reviews by leveraging nuanced differences in linguistic style and structure.
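The Naive Bayes baseline idea can be sketched from scratch over bag-of-words counts. The paper does not specify its feature set or implementation, so everything below, including the toy corpus, is illustrative only.

```python
import math
from collections import Counter

class MultinomialNB:
    """Minimal multinomial Naive Bayes over whitespace bag-of-words,
    with Laplace (add-alpha) smoothing."""
    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.priors = {c: math.log(labels.count(c) / len(labels))
                       for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        self.totals = {c: sum(self.word_counts[c].values())
                       for c in self.classes}
        return self

    def predict(self, text):
        scores = {}
        v = len(self.vocab)
        for c in self.classes:
            score = self.priors[c]
            for w in text.lower().split():
                count = self.word_counts[c][w]  # Counter returns 0 if unseen
                score += math.log((count + self.alpha)
                                  / (self.totals[c] + self.alpha * v))
            scores[c] = score
        return max(scores, key=scores.get)

# Invented toy corpus: colloquial "real" reviews vs. ornate "ai" reviews.
texts = ["great stay friendly staff",
         "room was ok bed comfy",
         "exquisite ambiance impeccable decor",
         "impeccable service exquisite amenities"]
labels = ["real", "real", "ai", "ai"]
clf = MultinomialNB().fit(texts, labels)
print(clf.predict("friendly staff comfy bed"))   # real
print(clf.predict("exquisite impeccable decor"))  # ai
```

A transformer model like XLM-RoBERTa replaces these independent word counts with contextual subword representations shared across languages, which is what lets it pick up the subtler stylistic cues the counts miss.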

Experimental Results and Implications

The findings reveal that language influences the detectability of AI-generated content, with AI proving most adept at generating deceptive English and Korean reviews while struggling with German and Romanian. The paper highlights that GPT-4’s efficacy is not uniform across languages and is influenced by factors such as the geographical location of hotels and the sentiment polarity of the reviews.

The research holds significant practical implications. It underscores the necessity for multilingual models capable of distinguishing AI-generated from genuine content, thus safeguarding the integrity of online platforms that rely on user-generated reviews. By showing that fine-tuned models can accurately detect AI-generated deception, the paper charts a path toward more sophisticated and reliable AI detection systems.

Future Research Directions

The paper opens multiple avenues for future work, including refining models to improve robustness across varied contexts and languages, exploring the role of cultural and contextual nuances in deception detection, and further understanding the interplay between review sentiment and detection efficacy. There is also potential for expanding this research beyond hotels to other sectors where trust in user-generated content is paramount.

In conclusion, the "MAiDE-up" paper provides a rigorous analysis and insightful contributions to the field of AI-generated text detection, emphasizing the importance of multilingual research and development in combating potential misuse of LLMs. As the capabilities of LLMs continue to evolve, research such as this will be crucial in ensuring these technologies are utilized ethically and transparently.