Poisoning Attacks to Graph-Based Recommender Systems (1809.04127v1)

Published 11 Sep 2018 in cs.IR, cs.CR, cs.LG, and stat.ML

Abstract: Recommender system is an important component of many web services to help users locate items that match their interests. Several studies showed that recommender systems are vulnerable to poisoning attacks, in which an attacker injects fake data to a given system such that the system makes recommendations as the attacker desires. However, these poisoning attacks are either agnostic to recommendation algorithms or optimized to recommender systems that are not graph-based. Like association-rule-based and matrix-factorization-based recommender systems, graph-based recommender system is also deployed in practice, e.g., eBay, Huawei App Store. However, how to design optimized poisoning attacks for graph-based recommender systems is still an open problem. In this work, we perform a systematic study on poisoning attacks to graph-based recommender systems. Due to limited resources and to avoid detection, we assume the number of fake users that can be injected into the system is bounded. The key challenge is how to assign rating scores to the fake users such that the target item is recommended to as many normal users as possible. To address the challenge, we formulate the poisoning attacks as an optimization problem, solving which determines the rating scores for the fake users. We also propose techniques to solve the optimization problem. We evaluate our attacks and compare them with existing attacks under white-box (recommendation algorithm and its parameters are known), gray-box (recommendation algorithm is known but its parameters are unknown), and black-box (recommendation algorithm is unknown) settings using two real-world datasets. Our results show that our attack is effective and outperforms existing attacks for graph-based recommender systems. For instance, when 1% fake users are injected, our attack can make a target item recommended to 580 times more normal users in certain scenarios.

Authors (4)
  1. Minghong Fang (34 papers)
  2. Guolei Yang (1 paper)
  3. Neil Zhenqiang Gong (117 papers)
  4. Jia Liu (369 papers)
Citations (190)

Summary

Poisoning Attacks to Graph-Based Recommender Systems

The paper "Poisoning Attacks to Graph-Based Recommender Systems" explores the vulnerabilities of graph-based recommender systems, which are increasingly utilized by web services for recommending items like products, videos, and news. The authors present a systematic approach to devising optimized poisoning attacks specifically tailored to graph-based recommender systems, addressing an open problem in the field of adversarial machine learning.

Graph-based recommender systems, distinct from matrix-factorization-based or association-rule-based systems, employ a user preference graph to model the relationships between users and items based on rating scores. Recommendations are derived from the stationary probabilities of a random walk on this graph. An attacker can exploit this design by introducing fake users with strategically crafted rating scores to manipulate the recommendation outcomes, promoting a target item to a broader user base.
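To make the mechanism concrete, the random-walk scoring can be sketched as follows. This is a minimal, illustrative implementation of random walk with restart on a bipartite user-item graph; the function name, the restart parameter `alpha`, and the dense-matrix representation are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def recommend_scores(ratings, user, alpha=0.15, iters=50):
    """Stationary probabilities of a random walk restarting at `user`.

    ratings: (n_users, n_items) array; ratings[u, i] > 0 means u rated i.
    Edge weights are the rating scores; items already rated by `user`
    are masked out so only new items can be recommended.
    """
    n_users, n_items = ratings.shape
    n = n_users + n_items
    # Symmetric adjacency matrix of the bipartite user-item graph.
    A = np.zeros((n, n))
    A[:n_users, n_users:] = ratings
    A[n_users:, :n_users] = ratings.T
    # Row-normalize into a transition matrix (guarding against zero degree).
    deg = A.sum(axis=1, keepdims=True)
    P = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
    # Restart vector concentrated on the target user.
    r = np.zeros(n)
    r[user] = 1.0
    p = r.copy()
    for _ in range(iters):
        p = (1 - alpha) * P.T @ p + alpha * r
    # Item portion of the stationary distribution; mask already-rated items.
    scores = p[n_users:].copy()
    scores[ratings[user] > 0] = -np.inf
    return scores
```

The top-N items by these scores form the user's recommendation list, which is exactly the quantity a poisoning attack tries to steer.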

The core challenge in implementing poisoning attacks lies in determining the assignment of rating scores to fake users such that a target item achieves maximum exposure among normal users. To tackle this, the authors conceptualize the problem as an optimization task, aiming to maximize the hit ratio — the proportion of normal users whose top recommended items include the target item. However, solving it exactly is computationally prohibitive due to its complexity and the discrete nature of rating scores. Thus, the authors propose an approximate method using continuous relaxations coupled with projected gradient descent.
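The relaxation-and-projection idea can be sketched generically. Assuming `grad_fn` returns the gradient of some differentiable surrogate of the hit ratio with respect to a fake user's rating vector (the surrogate itself is the paper's contribution and is not reproduced here), the optimization loop is standard projected gradient ascent followed by discretization; all names and hyperparameters below are illustrative.

```python
import numpy as np

def optimize_fake_ratings(grad_fn, n_items, steps=100, lr=0.1, r_max=5.0):
    """Projected gradient ascent on relaxed (continuous) fake-user ratings.

    grad_fn(w): gradient of a differentiable surrogate of the hit ratio
    with respect to the fake user's rating vector w (a modeling assumption).
    """
    w = np.random.default_rng(0).uniform(0, r_max, n_items)
    for _ in range(steps):
        w = w + lr * grad_fn(w)       # ascent step on the surrogate objective
        w = np.clip(w, 0.0, r_max)    # project onto the feasible rating box
    return np.rint(w)                 # discretize to valid rating scores
```

The projection keeps the relaxed ratings feasible at every step, and the final rounding maps them back to the discrete rating scale the recommender accepts.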

Empirical evaluations on real-world datasets, MovieLens and Amazon Instant Video, demonstrate the efficacy of the proposed attacks under various knowledge settings: white-box, gray-box, and black-box. Their attack methodology significantly outperforms traditional methods such as random, average, bandwagon, and co-visitation attacks, making a target item recommended to substantially more users—by up to 580 times for unpopular items under particular configurations.

The paper also investigates the prospect of detecting fake users using machine learning classifiers trained on features extracted from user rating profiles. Although a substantial fraction of fake users can be detected, detection is not fully reliable: considerable false negatives allow many fake users to evade detection and continue to influence recommendation outcomes.
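A detector of this kind typically starts from hand-crafted statistics over each user's rating profile. The features below (rating count, mean, variance, fraction of extreme ratings) are a plausible illustrative set, not necessarily the ones used in the paper; a standard classifier such as logistic regression would then be trained on them.

```python
import numpy as np

def profile_features(ratings_row):
    """Illustrative features over one user's rating profile.

    ratings_row: 1-D array of a user's ratings; 0 means unrated.
    Returns [count, mean, variance, fraction of maximum (5-star) ratings].
    """
    rated = ratings_row[ratings_row > 0]
    if rated.size == 0:
        return np.zeros(4)
    return np.array([
        rated.size,          # how many items the user rated
        rated.mean(),        # average rating score
        rated.var(),         # spread of the ratings
        (rated == 5).mean(), # fraction of extreme ratings
    ])
```

Fake users crafted to push one target item tend to show unusual values on such features (e.g., many extreme ratings), which is what makes partial detection possible.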

The implications of this paper are profound both practically and theoretically. Practically, services employing graph-based recommender systems need to devise robust defenses against such poisoning attacks, potentially integrating stringent detection mechanisms and anomaly analyses. Theoretically, the findings stimulate advancement in the field of adversarial attacks against recommender systems, indicating a need for further research in optimizing attack strategies and improving defense mechanisms.

In future developments, the authors suggest extending the work to other graph-based systems such as those utilizing graph convolutional networks or exploring poisoning attacks on neural network-based recommender systems. Additionally, devising effective defenses against these sophisticated adversarial threats remains a vital area of ongoing research, crucial for safeguarding the integrity and reliability of recommender systems in practical applications.