
Online Algorithmic Recourse by Collective Action (2401.00055v1)

Published 29 Dec 2023 in cs.LG

Abstract: Research on algorithmic recourse typically considers how an individual can reasonably change an unfavorable automated decision when interacting with a fixed decision-making system. This paper focuses instead on the online setting, where system parameters are updated dynamically according to interactions with data subjects. Beyond the typical individual-level recourse, the online setting opens up new ways for groups to shape system decisions by leveraging the parameter update rule. We show empirically that recourse can be improved when users coordinate by jointly computing their feature perturbations, underscoring the importance of collective action in mitigating adverse automated decisions.
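The mechanism the abstract describes can be illustrated with a toy sketch. The following is not the paper's actual method, only a minimal illustration under assumed choices: the decision system is an online logistic-regression model updated by SGD on each batch of interactions, and a group of low-scored users jointly perturbs their features along the current weight vector before the system retrains on their data. All function names, the perturbation rule, and the update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_update(w, X, y, lr=0.5):
    """One online update of a logistic-regression system on a batch (X, y)."""
    preds = sigmoid(X @ w)
    grad = X.T @ (preds - y) / len(y)
    return w - lr * grad

def collective_perturb(X, w, step=0.3):
    """Hypothetical joint recourse action: every member of the collective
    moves by the same step along the current weight direction, the direction
    that most increases the model's score."""
    direction = w / (np.linalg.norm(w) + 1e-12)
    return X + step * direction

w = np.array([1.0, -1.0])                      # initial decision rule
group = rng.normal(-0.5, 0.2, size=(20, 2))    # members currently scored low

before = sigmoid(group @ w).mean()

# Each round: the collective perturbs its features jointly, then the online
# system updates its parameters on the perturbed data labelled favourably.
for _ in range(10):
    Xp = collective_perturb(group, w)
    w = sgd_update(w, Xp, np.ones(len(Xp)))

after = sigmoid(group @ w).mean()
print(f"mean favourable score: {before:.3f} -> {after:.3f}")
```

Because the parameter update is driven by the perturbed batch, the coordinated group shifts the decision rule itself, so the members' *unperturbed* features score higher over time, which is the qualitative point the abstract makes about collective versus individual recourse.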
