Bayesian Strategic Classification
Abstract: In strategic classification, agents modify their features, at a cost, in the hope of obtaining a positive classification from the learner's classifier. The learner's typical response is to carefully modify their classifier to be robust to such strategic behavior. When reasoning about agent manipulations, most papers that study strategic classification rely on the following strong assumption: agents fully know the exact parameters of the classifier deployed by the learner. This is often an unrealistic assumption when complex or proprietary machine learning techniques are used in real-world prediction tasks. We initiate the study of partial information release by the learner in strategic classification. We move away from the traditional assumption that agents have full knowledge of the classifier. Instead, we consider agents that have a common distributional prior over which classifier the learner is using. The learner in our model can reveal truthful, yet not necessarily complete, information about the deployed classifier to the agents. The learner's goal is to release just enough information about the classifier to maximize accuracy. We show how such partial information release can, counter-intuitively, benefit the learner's accuracy, despite increasing agents' abilities to manipulate. We show that while it is intractable to compute the best response of an agent in the general case, there exist oracle-efficient algorithms that can compute the best response of the agents when the learner's hypothesis class is the class of linear classifiers, or when the agents' cost function satisfies a natural notion of submodularity that we define. We then turn our attention to the learner's optimization problem and provide both positive and negative results on the algorithmic problem of how much information the learner should release about the classifier to maximize their expected accuracy.
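The abstract's central object is an agent who best-responds under a distributional prior over which classifier is deployed. The following is a minimal illustrative sketch of that setting, under assumptions not taken from the paper: linear classifiers, a quadratic manipulation cost, and a candidate set consisting of the agent's current position plus the cheapest point that flips each classifier in the prior's support. The function names and the `cost_weight` parameter are hypothetical.

```python
import numpy as np

def classify(w, b, x):
    """Linear classifier: positive iff w.x + b >= 0."""
    return float(np.dot(w, x) + b >= 0)

def best_response(x, prior, cost_weight=1.0):
    """Sketch of an agent's Bayesian best response.

    The agent holds a prior as a list of (prob, w, b) triples over which
    linear classifier is deployed. Utility of moving from x to x' is
    P[positive classification under the prior] - cost_weight * ||x' - x||^2.
    Candidates: stay put, or project onto each classifier's decision
    boundary (the cheapest point that flips that single classifier).
    This ignores moves that flip several classifiers at once, so it is
    only a heuristic illustration, not the paper's algorithm.
    """
    candidates = [x]
    for _, w, b in prior:
        margin = np.dot(w, x) + b
        if margin < 0:  # currently classified negative by this classifier
            # Closest point on the hyperplane w.x' + b = 0
            candidates.append(x - margin * w / np.dot(w, w))

    def utility(xp):
        p_pos = sum(p * classify(w, b, xp) for p, w, b in prior)
        return p_pos - cost_weight * np.dot(xp - x, xp - x)

    return max(candidates, key=utility)
```

For example, an agent at the origin with a uniform prior over "positive iff x1 >= 1" and "positive iff x2 >= 1" and a small enough cost weight moves to the nearest single boundary, accepting a 50% chance of a positive label rather than paying to flip both classifiers.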