
A framework for fake review detection in online consumer electronics retailers (1903.12452v1)

Published 29 Mar 2019 in cs.CL

Abstract: The impact of online reviews on businesses has grown significantly in recent years, becoming crucial to business success in a wide array of sectors, ranging from restaurants and hotels to e-commerce. Unfortunately, some users resort to unethical means to improve their online reputation, writing fake reviews of their own businesses or of competitors. Previous research has addressed fake review detection in a number of domains, such as product or business reviews in restaurants and hotels. However, in spite of its economic interest, the domain of consumer electronics businesses has not yet been thoroughly studied. This article proposes a feature framework for detecting fake reviews that has been evaluated in the consumer electronics domain. The contributions are fourfold: (i) construction of a dataset for classifying fake reviews in the consumer electronics domain in four different cities based on scraping techniques; (ii) definition of a feature framework for fake review detection; (iii) development of a fake review classification method based on the proposed framework; and (iv) evaluation and analysis of the results for each of the cities under study. We have reached an 82% F-Score on the classification task, and the AdaBoost classifier has been proven to be the best one by statistical means according to the Friedman test.

Citations (207)

Summary

  • The paper introduces the F3 framework to detect fake reviews by analyzing user and review features, addressing a gap in the consumer electronics domain.
  • It employs a unique dataset from Yelp across four major U.S. cities and compares multiple machine learning models, with AdaBoost achieving an 82% F-Score.
  • The study demonstrates that combining user-centric features like Reviewing Activity and Trust with social data significantly enhances fake review detection accuracy.

A Framework for Fake Review Detection in Online Consumer Electronics Retailers

The paper "A Framework for Fake Review Detection in Online Consumer Electronics Retailers" addresses the significant issue of fraudulent online reviews in the consumer electronics domain. Previous research has extensively focused on fake reviews in the hospitality industry, particularly hotels and restaurants, but this work identifies a gap in the exploration of the consumer electronics sector, which is economically impactful and heavily reliant on word-of-mouth (WOM) information.

The authors propose the Fake Feature Framework (F3), a robust methodology for fake review detection grounded in user-centric and review-centric feature analysis. The framework introduces four primary categories of user-centric features: Personal, Social, Reviewing Activity, and Trust. Personal features pertain to user profile information, while the Social category captures interactions and network activity. Reviewing Activity covers metrics on user reviewing behavior, and Trust features aim to identify inconsistencies or abnormal user behavior. Review-centric features focus on the textual content of the reviews themselves.
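The four user-centric categories can be pictured as groups of numeric signals flattened into a single model-ready vector. The sketch below is illustrative only: the field names are hypothetical stand-ins, not the paper's exact F3 feature set.

```python
# Hypothetical sketch of F3-style user-centric features; the concrete
# feature names here are assumptions, not the paper's exact definitions.
from dataclasses import dataclass, asdict


@dataclass
class UserFeatures:
    # Personal: user profile information
    account_age_days: int
    has_profile_photo: bool
    # Social: interactions and network activity
    friend_count: int
    compliments_received: int
    # Reviewing Activity: metrics on reviewing behavior
    review_count: int
    reviews_per_day: float
    # Trust: inconsistency / abnormality signals
    rating_deviation: float  # distance from the business's mean rating


def to_feature_vector(user: UserFeatures) -> list:
    """Flatten the user-centric features into a numeric vector for a classifier."""
    return [float(v) for v in asdict(user).values()]


vec = to_feature_vector(
    UserFeatures(
        account_age_days=730,
        has_profile_photo=True,
        friend_count=25,
        compliments_received=3,
        review_count=48,
        reviews_per_day=0.066,
        rating_deviation=1.2,
    )
)
```

In practice such a vector would be concatenated with review-centric (textual) features before training, which is how the framework lets each category's contribution be analyzed separately.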

A key contribution of this paper is the construction of a dataset specific to the consumer electronics industry, gathered from the Yelp platform. The authors employed web scraping techniques to compile reviews across four major U.S. cities: New York, Los Angeles, Miami, and San Francisco. The dataset was balanced between trustworthy and fake reviews, a necessary condition for reliable training and testing of machine learning models in fraud detection.
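Class balancing of this kind is commonly done by undersampling the majority class. A minimal sketch, assuming reviews are represented as dicts with a boolean `fake` label (the field name is an assumption for illustration):

```python
import random


def balance(reviews, label_key="fake", seed=0):
    """Undersample the majority class so fake and genuine reviews
    are equally represented in the resulting dataset."""
    fake = [r for r in reviews if r[label_key]]
    genuine = [r for r in reviews if not r[label_key]]
    n = min(len(fake), len(genuine))
    rng = random.Random(seed)  # fixed seed for reproducibility
    return rng.sample(fake, n) + rng.sample(genuine, n)


# Toy example: 10 fake reviews out of 30 total.
data = [{"fake": i % 3 == 0} for i in range(30)]
balanced = balance(data)
```

After balancing, a classifier's accuracy and F-Score are directly comparable to a 50% random baseline, which is why balancing matters for evaluation.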

The paper evaluates several machine learning algorithms, including Logistic Regression, Decision Tree, Random Forest, Gaussian Naive Bayes, and AdaBoost, to determine the most effective approach for classifying fake reviews. The AdaBoost classifier demonstrated superior performance, achieving an 82% F-Score, and was found to be statistically the best among the classifiers according to the Friedman test.
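The evaluation protocol described above, cross-validating several classifiers on the same folds and then applying the Friedman test to their per-fold scores, can be sketched with scikit-learn and SciPy. The synthetic data and the exact model settings here are assumptions for illustration; the paper's actual features and hyperparameters may differ.

```python
# Sketch of the evaluation protocol: same folds for every model,
# then a Friedman test over the per-fold F-Scores. Synthetic data only.
from scipy.stats import friedmanchisquare
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}

# Per-fold F-Scores for each classifier (5-fold cross-validation).
scores = {
    name: cross_val_score(model, X, y, cv=5, scoring="f1")
    for name, model in models.items()
}

# Friedman test: do the classifiers' per-fold scores differ significantly?
stat, p = friedmanchisquare(*scores.values())
```

A small p-value would indicate that at least one classifier's per-fold performance differs significantly from the others, which is the statistical grounding behind the claim that AdaBoost is the best performer.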

Notably, the research highlights the limited utility of textual features alone, a finding that aligns with results from other domains and points to the sophistication of fake reviewers in disguising fraudulent content. User-centric features showed significant promise, particularly the Reviewing Activity and Trust categories. The integration of social features contributed additional discriminatory power, highlighting the potential advantages of incorporating network data into fraud detection models.

The implications of this research extend beyond the consumer electronics industry, offering methodological insights that could be applied to other domains susceptible to review manipulation. As online consumer reviews continue to influence purchasing decisions across sectors, the ability to accurately detect and mitigate fake reviews becomes increasingly vital. Future research could explore the augmentation of this framework with advanced natural language processing techniques or extend it into related fields, such as social media platforms, where user interactions can be equally indicative of deceptive behaviors.

Overall, this paper provides a substantial contribution to the literature on digital fraud detection and offers a comprehensive, feature-driven approach to addressing the challenge of fake reviews in the consumer electronics retail sector.