
Zero-Regret Performative Prediction Under Inequality Constraints (2309.12618v1)

Published 22 Sep 2023 in cs.LG

Abstract: Performative prediction is a recently proposed framework where predictions guide decision-making and hence influence future data distributions. Such performative phenomena are ubiquitous in areas such as transportation, finance, public policy, and recommendation systems. To date, work on performative prediction has focused only on unconstrained scenarios, neglecting the fact that many real-world learning problems are subject to constraints. This paper bridges this gap by studying performative prediction under inequality constraints. Unlike most existing work, which provides only performatively stable points, we aim to find the optimal solutions. Anticipating performative gradients is challenging, due to the agnostic performative effect on data distributions. To address this issue, we first develop a robust primal-dual framework that requires only approximate gradients up to a certain accuracy, yet delivers the same order of performance as the stochastic primal-dual algorithm without performativity. Based on this framework, we then propose an adaptive primal-dual algorithm for location families. Our analysis demonstrates that the proposed adaptive primal-dual algorithm attains $\mathcal{O}(\sqrt{T})$ regret and constraint violations, using only $\sqrt{T} + 2T$ samples, where $T$ is the time horizon. To the best of our knowledge, this is the first study and analysis of the optimality of the performative prediction problem under inequality constraints. Finally, we validate the effectiveness of our algorithm and theoretical results through numerical simulations.

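The sketch below illustrates the general mechanism the abstract alludes to, and is not the authors' algorithm: a projected stochastic primal-dual loop run against a synthetic location-family distribution map (data of the form z = z0 + A @ theta). Every concrete detail here is a hypothetical placeholder chosen only to make the loop runnable: the quadratic loss, the single linear constraint, the shift matrix A, the dimensions, and the step size. In particular, the true shift matrix is passed to the gradient oracle for brevity, whereas the paper's adaptive algorithm would maintain an online estimate of it from the observed samples.

```python
# Minimal sketch (assumptions noted below), not the paper's implementation:
# a stochastic primal-dual loop for a decision-dependent (performative)
# problem with one inequality constraint.
import numpy as np

rng = np.random.default_rng(0)
d = 3                       # decision dimension (hypothetical)
T = 2000                    # time horizon
A = 0.3 * np.eye(d)         # hypothetical location-family shift matrix
R = 5.0                     # radius of the box the decision is projected onto

def sample_data(theta, n=2):
    """Location family: the deployed theta shifts the base noise, z = z0 + A @ theta."""
    return rng.normal(size=(n, d)) + theta @ A.T

def perf_grad(theta, z, A_hat):
    """Approximate performative gradient of the loss 0.5*||theta - z||^2.
    Under z = z0 + A @ theta the chain rule gives (I - A)^T (theta - E[z]);
    A_hat stands in for a running estimate of A."""
    return (np.eye(d) - A_hat).T @ (theta - z.mean(axis=0))

def g(theta):
    """One inequality constraint g(theta) <= 0 (hypothetical): sum(theta) <= 1."""
    return np.array([theta.sum() - 1.0])

def g_grad(theta):
    return np.ones((1, d))

theta = np.zeros(d)          # primal variable (the deployed decision)
lam = np.zeros(1)            # dual variable for the constraint
eta = 1.0 / np.sqrt(T)       # step size on the usual O(1/sqrt(T)) schedule

for t in range(T):
    z = sample_data(theta)                      # deploy theta; the data reacts to it
    # True A used for brevity; the adaptive algorithm would use an estimate.
    grad_L = perf_grad(theta, z, A) + g_grad(theta).T @ lam   # primal gradient of the Lagrangian
    theta = np.clip(theta - eta * grad_L, -R, R)              # projected primal descent
    lam = np.maximum(0.0, lam + eta * g(theta))               # dual ascent on the constraint violation

print("final decision:", np.round(theta, 3), " constraint value:", g(theta))
```

The point of the sketch is the structure of the update: an approximate performative gradient drives the primal step, the dual variable accumulates constraint violations, and both use a diminishing step size, which is the shape of argument behind the $\mathcal{O}(\sqrt{T})$ regret and violation bounds stated in the abstract.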