
Fairness-Aware Meta-Learning via Nash Bargaining (2406.07029v1)

Published 11 Jun 2024 in cs.LG

Abstract: To address issues of group-level fairness in machine learning, it is natural to adjust model parameters based on specific fairness objectives over a sensitive-attributed validation set. Such an adjustment procedure can be cast within a meta-learning framework. However, naive integration of fairness goals via meta-learning can cause hypergradient conflicts for subgroups, resulting in unstable convergence and compromising model performance and fairness. To navigate this issue, we frame the resolution of hypergradient conflicts as a multi-player cooperative bargaining game. We introduce a two-stage meta-learning framework in which the first stage uses a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model toward the Pareto front, and the second stage optimizes with respect to specific fairness goals. Our method is supported by theoretical results, notably a derivation of the NBS for gradient aggregation that is free from linear independence assumptions, a proof of Pareto improvement, and a proof of monotonic improvement in validation loss. We also demonstrate empirical effectiveness across various fairness objectives on six key fairness datasets and two image classification tasks.
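The bargaining idea in the first stage can be illustrated with a minimal sketch: treat each subgroup's hypergradient as a player's utility direction and search for a shared update direction that maximizes the Nash product of the players' utilities. This is not the paper's exact algorithm (which derives the NBS weights analytically); it is a simplified projected-gradient-ascent illustration, and the function name, step size, and iteration count are all assumptions for the example.

```python
import numpy as np

def nash_bargaining_direction(grads, n_steps=500, lr=0.05):
    """Illustrative sketch: find an update direction d in the unit ball
    maximizing sum_i log(g_i . d), the log of the Nash bargaining product,
    where each g_i is one subgroup's hypergradient ('player')."""
    G = np.stack(grads)                        # shape (k, dim)
    # Start from the mean of the normalized gradients; this is feasible
    # (all utilities positive) when the gradients are not fully antagonistic.
    d = (G / np.linalg.norm(G, axis=1, keepdims=True)).mean(axis=0)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        u = G @ d                              # per-player utilities g_i . d
        if np.any(u <= 0):                     # left the feasible cone; stop
            break
        grad_d = (G / u[:, None]).sum(axis=0)  # gradient of sum_i log(u_i)
        d = d + lr * grad_d
        norm = np.linalg.norm(d)
        if norm > 1.0:                         # project back onto unit ball
            d /= norm
    return d

# Two mildly conflicting subgroup gradients: the bargained direction
# should yield a strictly positive utility for both players, i.e. a
# Pareto-improving step for both subgroups.
g1 = np.array([1.0, 0.2])
g2 = np.array([-0.2, 1.0])
d = nash_bargaining_direction([g1, g2])
```

Because the log-product objective diverges as any single utility approaches zero, the resulting direction balances the players rather than favoring the subgroup with the largest gradient, which mirrors why the NBS avoids the instability that naive gradient averaging exhibits under conflicts.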

Authors (7)
  1. Yi Zeng (153 papers)
  2. Xuelin Yang (5 papers)
  3. Li Chen (590 papers)
  4. Cristian Canton Ferrer (32 papers)
  5. Ming Jin (130 papers)
  6. Michael I. Jordan (438 papers)
  7. Ruoxi Jia (88 papers)
Citations (2)