
Promoting Fairness in GNNs: A Characterization of Stability (2309.03648v3)

Published 7 Sep 2023 in cs.LG, cs.AI, and cs.CY

Abstract: The Lipschitz bound, a technique from robust statistics, limits the maximum change in a model's output with respect to its input, accounting for associated irrelevant biased factors. It is an efficient and provable method for examining the output stability of machine learning models without incurring additional computational cost. Graph Neural Networks (GNNs), which operate on non-Euclidean data, have recently gained significant attention; however, no previous research has investigated GNN Lipschitz bounds to shed light on stabilizing model outputs, especially when working on non-Euclidean data with inherent biases. Given the inherent biases in common graph data used for GNN training, constraining the GNN output perturbations induced by input biases, and thereby safeguarding fairness during training, poses a serious challenge. Although the Lipschitz constant has been used to control the stability of Euclidean neural networks, computing the precise Lipschitz constant remains elusive for non-Euclidean neural networks such as GNNs, especially in fairness contexts. To narrow this gap, we begin with general GNNs operating on an attributed graph and formulate a Lipschitz bound that limits the changes in the output with respect to biases associated with the input. Additionally, we theoretically analyze how the Lipschitz constant of a GNN model can constrain the output perturbations induced by biases learned from data for fairness training. We experimentally validate the Lipschitz bound's effectiveness in limiting biases in the model output. Finally, from a training-dynamics perspective, we demonstrate why the theoretical Lipschitz bound can effectively guide GNN training toward a better trade-off between accuracy and fairness.
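The core idea, bounding how much a GNN layer's output can move when its input is perturbed, can be illustrated with a minimal NumPy sketch. This is not the paper's construction: it assumes a single hypothetical GCN-style layer `relu(A_hat @ X @ W)` and uses the classical upper bound given by the product of spectral norms (ReLU being 1-Lipschitz), then checks it empirically against random input perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-layer GCN-style model: H = relu(A_hat @ X @ W), where
# A_hat is a symmetrically normalized adjacency matrix with self-loops.
# All names (A_hat, W, layer) are illustrative, not the paper's notation.
n, d_in, d_out = 6, 4, 3
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T + np.eye(n)                         # symmetric adjacency + self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt             # symmetric normalization
W = rng.normal(size=(d_in, d_out))

def layer(X):
    # ReLU is 1-Lipschitz, so it does not increase the bound below.
    return np.maximum(A_hat @ X @ W, 0.0)

# Upper bound on the layer's Lipschitz constant w.r.t. the Frobenius norm:
# the linear part X -> A_hat X W has operator norm ||A_hat||_2 * ||W||_2.
L_bound = np.linalg.norm(A_hat, 2) * np.linalg.norm(W, 2)

# Empirical check: the output perturbation never exceeds
# L_bound times the input perturbation.
X = rng.normal(size=(n, d_in))
ratio = 0.0
for _ in range(100):
    dX = rng.normal(size=(n, d_in)) * 1e-3
    ratio = np.linalg.norm(layer(X + dX) - layer(X)) / np.linalg.norm(dX)
    assert ratio <= L_bound + 1e-9
```

If the perturbation `dX` is interpreted as an input bias (e.g. a change in a sensitive attribute), `L_bound` caps the resulting output shift, which is the mechanism the paper exploits for fairness-aware training; the sketch only demonstrates the bound itself, not the paper's training procedure.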

