
The Best Decisions Are Not the Best Advice: Making Adherence-Aware Recommendations (2209.01874v4)

Published 5 Sep 2022 in cs.HC and cs.AI

Abstract: Many high-stakes decisions follow an expert-in-the-loop structure: a human operator receives recommendations from an algorithm but remains the ultimate decision maker. Hence, the algorithm's recommendation may differ from the decision actually implemented in practice. However, most algorithmic recommendations are obtained by solving an optimization problem that assumes recommendations will be implemented perfectly. We propose an adherence-aware optimization framework that captures the dichotomy between the recommended and the implemented policy, and we analyze the impact of partial adherence on the optimal recommendation. We show that overlooking partial adherence, as most current recommendation engines do, can lead to arbitrarily severe performance deterioration relative to both the current human baseline and the performance the recommendation algorithm expects. Our framework also provides tools to analyze the structure of, and to compute, optimal recommendation policies that are naturally immune to such human deviations and are guaranteed to improve upon the baseline policy.
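The core idea can be made concrete with a small numerical sketch. Below is a minimal, illustrative value-iteration routine for a tabular MDP in which, at each step, the operator follows the recommended action with a fixed probability theta and otherwise falls back to a known baseline policy. The function name, the variable names (pi_base, theta), and the fixed-probability adherence model are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def adherence_aware_value_iteration(P, r, pi_base, theta, gamma=0.95,
                                    tol=1e-8, max_iter=10_000):
    """Value iteration for an adherence-aware tabular MDP (illustrative).

    P        transition tensor, shape (S, A, S): P[s, a, s1] = Pr(s1 | s, a)
    r        reward matrix, shape (S, A)
    pi_base  baseline (human) policy, shape (S,): pi_base[s] is the action
             taken when the recommendation is ignored
    theta    adherence level in [0, 1]: probability the recommendation
             is followed at each step
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    Q_eff = np.zeros((S, A))
    for _ in range(max_iter):
        # One-step value of each action under the current value estimate...
        Q = r + gamma * (P @ V)                      # shape (S, A)
        # ...blended with the baseline action, taken with prob. 1 - theta.
        Q_base = Q[np.arange(S), pi_base]            # shape (S,)
        Q_eff = theta * Q + (1.0 - theta) * Q_base[:, None]
        V_new = Q_eff.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    # Recommendation that is optimal *given* partial adherence.
    pi_rec = Q_eff.argmax(axis=1)
    return pi_rec, V
```

Setting theta = 1 recovers standard value iteration, i.e., the usual perfect-implementation assumption, while theta = 0 reduces to evaluating the baseline policy. The failure mode the abstract warns about corresponds to computing the recommendation at theta = 1 and then deploying it in an environment where the true adherence level is smaller.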
