
Feature-Based Interpretable Surrogates for Optimization (2409.01869v2)

Published 3 Sep 2024 in math.OC and cs.LG

Abstract: For optimization models to be used in practice, it is crucial that users trust the results. A key factor in establishing this trust is the interpretability of the solution process. A previous framework for inherently interpretable optimization models used decision trees to map instances to solutions of the underlying optimization model. Building on this work, we investigate how more general optimization rules can further increase interpretability while, at the same time, giving the decision-maker more freedom. The proposed rules do not map to a concrete solution but to a set of solutions characterized by common features. To find such optimization rules, we present an exact methodology based on mixed-integer programming formulations as well as heuristics, and we outline the challenges and opportunities that these methods present. In particular, we demonstrate the improvement in solution quality that our approach offers compared to existing interpretable surrogates for optimization, and we discuss the relationship between interpretability and performance. These findings are supported by experiments on both synthetic and real-world data.
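To illustrate the core idea of the abstract — a rule that prescribes a *feature* shared by a set of solutions rather than one concrete solution, leaving the decision-maker free to choose within that set — the following sketch uses a toy 0/1 knapsack. The threshold rule and the "must-pack" feature are invented for illustration only; the paper's actual methodology learns such rules via mixed-integer programming, which is not reproduced here.

```python
import itertools

def solve_knapsack(values, weights, cap):
    """Exact brute-force 0/1 knapsack over all item subsets."""
    best, best_val = set(), 0
    for r in range(len(values) + 1):
        for s in itertools.combinations(range(len(values)), r):
            s = set(s)
            if sum(weights[i] for i in s) <= cap:
                v = sum(values[i] for i in s)
                if v > best_val:
                    best, best_val = s, v
    return best, best_val

def rule_surrogate(cap, threshold):
    # Hypothetical interpretable rule: instead of returning a full
    # solution, it returns a feature that all admissible solutions
    # must share (here: item 0 must be packed once capacity is large
    # enough), leaving the remaining choices to the decision-maker.
    return {"must_pack": {0}} if cap >= threshold else {"must_pack": set()}

def best_solution_with_feature(values, weights, cap, feature):
    """Decision-maker's freedom: best solution among those that
    satisfy the common feature prescribed by the rule."""
    best, best_val = set(), 0
    for r in range(len(values) + 1):
        for s in itertools.combinations(range(len(values)), r):
            s = set(s)
            if not feature["must_pack"] <= s:  # feature violated
                continue
            if sum(weights[i] for i in s) <= cap:
                v = sum(values[i] for i in s)
                if v > best_val:
                    best, best_val = s, v
    return best, best_val

if __name__ == "__main__":
    values, weights = [6, 5, 4], [3, 2, 2]
    # Compare the true optimum against the best solution reachable
    # under the rule, across a few capacity instances.
    for cap in [2, 4, 5]:
        _, opt = solve_knapsack(values, weights, cap)
        feat = rule_surrogate(cap, threshold=3)
        _, sur = best_solution_with_feature(values, weights, cap, feat)
        print(f"cap={cap}: optimum={opt}, rule-constrained best={sur}")
```

Running the comparison shows the interpretability/performance trade-off the abstract mentions: for some instances the rule's solution set still contains an optimal solution, while for others restricting to the prescribed feature costs objective value.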

