Online Algorithmic Recourse by Collective Action (2401.00055v1)
Abstract: Research on algorithmic recourse typically considers how an individual can reasonably change an unfavorable automated decision when interacting with a fixed decision-making system. This paper focuses instead on the online setting, where system parameters are updated dynamically according to interactions with data subjects. Beyond the typical individual-level recourse, the online setting opens up new ways for groups to shape system decisions by leveraging the parameter update rule. We show empirically that recourse can be improved when users coordinate by jointly computing their feature perturbations, underscoring the importance of collective action in mitigating adverse automated decisions.
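The dynamic the abstract describes can be illustrated with a toy sketch: a logistic decision rule updated by online SGD, where a denied group interacts repeatedly so that the update rule itself moves the boundary, reducing the feature change later members need. This is not the paper's actual experimental setup; the model, the update rule, the learning rate, and the assumption that each interaction is recorded with a favorable label `y=1` are all illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_update(w, x, y, lr=0.5):
    # One online logistic-regression gradient step on a single labeled point.
    return w - lr * (sigmoid(w @ x) - y) * x

w = np.array([1.0, 0.0])         # decision rule: accept iff w @ x > 0
x_group = np.array([-1.0, 0.0])  # shared features of a denied group (w @ x = -1)

# Individual recourse against the *fixed* model: a user must move their
# features across the boundary, at cost |w @ x| / ||w|| = 1 here.
indiv_cost = abs(w @ x_group) / np.linalg.norm(w)

# Collective action in the *online* setting: group members interact in turn,
# and each interaction triggers a parameter update (illustrative assumption:
# the system records each interaction with the favorable label y = 1).
w_t = w.copy()
scores = []
for _ in range(5):
    w_t = sgd_update(w_t, x_group, y=1.0)
    scores.append(w_t @ x_group)

print(scores)  # scores rise with each interaction and eventually turn positive
```

After a few coordinated interactions the group's score `w @ x` crosses zero, so later members are accepted with no individual feature change at all, whereas against the fixed model each would have paid the full recourse cost.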