Feature Inference Attack on Model Predictions in Vertical Federated Learning (2010.10152v3)

Published 20 Oct 2020 in cs.LG and cs.DB

Abstract: Federated learning (FL) is an emerging paradigm for facilitating multiple organizations' data collaboration without revealing their private data to each other. Recently, vertical FL, where the participating organizations hold the same set of samples but with disjoint features and only one organization owns the labels, has received increased attention. This paper presents several feature inference attack methods to investigate the potential privacy leakages in the model prediction stage of vertical FL. The attack methods consider the most stringent setting that the adversary controls only the trained vertical FL model and the model predictions, relying on no background information. We first propose two specific attacks on the logistic regression (LR) and decision tree (DT) models, according to individual prediction output. We further design a general attack method based on multiple prediction outputs accumulated by the adversary to handle complex models, such as neural networks (NN) and random forest (RF) models. Experimental evaluations demonstrate the effectiveness of the proposed attacks and highlight the need for designing private mechanisms to protect the prediction outputs in vertical FL.

Citations (192)

Summary

  • The paper analyzes feature inference attacks in vertical federated learning (VFL) where an adversary infers private features from model predictions, highlighting significant privacy risks during the prediction phase.
  • It introduces specific attack methodologies (ESA, PRA, GRNA) tailored for different VFL model types like logistic regression, decision trees, and neural networks, demonstrating their varying efficacy based on model structure and data conditions.
  • Experimental evaluations confirm the effectiveness of these attacks, emphasizing the urgent need for developing and implementing more robust privacy-preserving mechanisms beyond existing techniques to protect sensitive features in VFL systems.

Feature Inference Attack on Model Predictions in Vertical Federated Learning

The paper presents a comprehensive analysis of feature inference attacks in the context of vertical federated learning (VFL), where multiple organizations collaboratively build machine learning models using data partitioned by features. In such settings, data privacy remains a primary concern, particularly during the model prediction phase. The authors focus on investigating privacy leakages that arise when an adversary, with access solely to the trained VFL model and the model predictions, aims to infer the feature values of a passive party.

Attack Methodologies

The paper introduces several attack methods, distinguishing between individual and multiple prediction scenarios, each tailored to specific model types such as logistic regression (LR), decision trees (DT), neural networks (NN), and random forest (RF) models.

  1. Equality Solving Attack (ESA): Applicable to logistic regression models, ESA exploits the deterministic mapping from features to LR prediction outputs to set up a system of linear equations in the passive party's unknown features, using the adversary's known features and the released prediction. When the number of unknown features is at most the number of classes minus one, the system is exactly determined and ESA recovers the unknown features precisely (a minimal sketch follows this list).
  2. Path Restriction Attack (PRA): Designed for decision tree models, PRA uses the predicted class together with the adversary's own feature values to prune the set of root-to-leaf paths the input could have taken. The surviving paths impose threshold constraints on the passive party's features, sharply narrowing the candidate values (also sketched below).
  3. Generative Regression Network Attack (GRNA): Developed for complex models such as neural networks and random forests, GRNA accumulates multiple prediction outputs and trains a generative regression network to map the adversary's features, plus random noise, to estimates of the unknown features, so that the fixed VFL model's predictions on the generated features match the observed ones. The cross-party feature correlations learned this way make the reconstruction effective (a training-loop sketch appears under Experimental Evaluation below).
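
For concreteness, here is a minimal ESA-style sketch in NumPy. It assumes a multiclass LR model whose log-odds are linear in the features, and that the adversary has already subtracted the contribution of its own known features from the observed log-odds; `W_target` and `logit_residual` are illustrative names for this sketch, not the paper's notation.

```python
import numpy as np

def equality_solving_attack(W_target, logit_residual):
    """Recover the passive party's features from one LR prediction.

    W_target:       (c-1, d_target) weight block acting on the unknown
                    features, expressed as differences against a
                    reference class.
    logit_residual: (c-1,) observed log-odds minus the contribution of
                    the adversary's own known features.

    Solves W_target @ x = logit_residual (hypothetical names; the
    paper's notation differs).
    """
    # Exact when d_target <= c - 1 and the block has full column rank;
    # falls back to a least-squares estimate otherwise.
    x_target, *_ = np.linalg.lstsq(W_target, logit_residual, rcond=None)
    return x_target

# Toy usage: 3 classes (c - 1 = 2 equations), 2 unknown features.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))
x_true = np.array([0.3, -1.2])
assert np.allclose(equality_solving_attack(W, W @ x_true), x_true)
```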

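The path-restriction idea can be sketched against a scikit-learn tree. This is an illustrative simplification, not the paper's implementation: known features force a unique branch at their split nodes, unknown features keep both branches alive, and only leaves predicting the observed class survive.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def feasible_paths(tree, known, predicted_class):
    """Enumerate root-to-leaf paths consistent with the adversary's
    known feature values and the observed predicted class.

    `known` maps feature index -> value for the adversary's features;
    all other features are treated as unknown. Returns, per surviving
    leaf, the (feature, threshold, went_left) tests on unknown features,
    i.e. the constraints PRA uses to narrow the passive party's values.
    """
    t = tree.tree_
    results = []

    def walk(node, constraints):
        if t.children_left[node] == -1:  # leaf node
            if np.argmax(t.value[node]) == predicted_class:
                results.append(constraints)
            return
        f, thr = t.feature[node], t.threshold[node]
        if f in known:  # known feature: the split outcome is forced
            child = t.children_left[node] if known[f] <= thr else t.children_right[node]
            walk(child, constraints)
        else:  # unknown feature: both branches remain feasible
            walk(t.children_left[node], constraints + [(f, thr, True)])
            walk(t.children_right[node], constraints + [(f, thr, False)])

    walk(0, [])
    return results

# Toy usage: fit a tree on XOR-like data, then restrict paths given
# knowledge of feature 0 and an observed prediction of class 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(feasible_paths(clf, known={0: 0.0}, predicted_class=1))
```
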
Experimental Evaluation

The authors evaluate these attacks on real-world datasets, demonstrating their efficacy under varying conditions. Some notable observations include:

  • ESA achieves zero mean squared error (MSE) whenever $d_{\text{target}} \leq c - 1$, where $d_{\text{target}}$ is the number of unknown target features and $c$ the number of classes; in this regime the linear system is exactly determined and the recovery is precise.
  • PRA provides high accuracy in path prediction within decision trees by successfully narrowing down potential branches.
  • GRNA outperforms baseline methods, especially on neural network models, by learning cross-party feature correlations from a series of predictions and using them to reconstruct the unknown features (sketched below).
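
A minimal GRNA-style training loop, shown here in PyTorch for the neural-network case where the fixed VFL model is differentiable in its inputs; `vfl_model`, the `Generator` architecture, and all hyperparameters are assumptions of this sketch rather than the paper's code.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps the adversary's features plus random noise to estimates of
    the passive party's d_target unknown features."""
    def __init__(self, d_adv, d_noise, d_target):
        super().__init__()
        self.d_noise = d_noise
        self.net = nn.Sequential(
            nn.Linear(d_adv + d_noise, 64), nn.ReLU(),
            nn.Linear(64, d_target),
        )

    def forward(self, x_adv):
        z = torch.randn(x_adv.shape[0], self.d_noise)
        return self.net(torch.cat([x_adv, z], dim=1))

def grna_attack(vfl_model, x_adv, y_pred, d_target, steps=2000):
    """Train the generator so that the fixed VFL model's predictions on
    the generated features match the observed prediction vectors.

    x_adv:  (n, d_adv) adversary-known features.
    y_pred: (n, c) prediction vectors accumulated from the VFL model.
    """
    gen = Generator(x_adv.shape[1], 16, d_target)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    for _ in range(steps):
        x_hat = gen(x_adv)  # candidate unknown features
        loss = nn.functional.mse_loss(vfl_model(x_adv, x_hat), y_pred)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return gen(x_adv)  # final feature estimates
```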

Implications and Future Directions

The paper highlights significant privacy concerns in VFL, emphasizing the necessity for robust privacy-preserving mechanisms. Key implications include:

  • Cryptographic techniques applied during training and prediction must be supplemented with mechanisms that obfuscate the prediction outputs themselves, since the attacks need nothing beyond the released predictions and the trained model.
  • The findings motivate new defensive strategies, such as rounding prediction confidences, applying dropout in neural networks, and adding secure post-processing verification before outputs are released; a sketch of the rounding defense follows this list.
  • The analysis also points toward integrating differential privacy with federated systems to obtain stronger privacy guarantees without unduly degrading model utility.
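
As a flavor of the rounding defense above, a minimal sketch that coarsens confidences before release (function and parameter names are hypothetical):

```python
import numpy as np

def round_confidences(probs, decimals=1):
    """Coarsen prediction confidences before releasing them. Rounding
    removes the fine-grained information that equality-solving or
    generative attacks exploit, at some cost in output fidelity; the
    vector is re-normalized so it remains a distribution."""
    rounded = np.round(probs, decimals=decimals)
    return rounded / rounded.sum(axis=-1, keepdims=True)

print(round_confidences(np.array([0.6234, 0.2511, 0.1255])))
# -> [0.6 0.3 0.1]
```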

This research underscores the difficulty of safeguarding data privacy in federated learning while preserving its collaborative benefits for participating organizations. Refining privacy-preserving frameworks remains a pressing challenge as federated learning gains traction across applications.