
Choosing a Classical Planner with Graph Neural Networks (2402.04874v1)

Published 25 Jan 2024 in cs.AI and cs.LG

Abstract: Online planner selection is the task of choosing a solver out of a predefined set for a given planning problem. As planning is computationally hard, the performance of solvers varies greatly on planning problems. Thus, the ability to predict their performance on a given problem is of great importance. While a variety of learning methods have been employed, for classical cost-optimal planning the prevailing approach uses Graph Neural Networks (GNNs). In this work, we continue the line of work on using GNNs for online planner selection. We perform a thorough investigation of the impact of the chosen GNN model, graph representation and node features, as well as prediction task. Going further, we propose using the graph representation obtained by a GNN as an input to the Extreme Gradient Boosting (XGBoost) model, resulting in a more resource-efficient yet accurate approach. We show the effectiveness of a variety of GNN-based online planner selection methods, opening up new exciting avenues for research on online planner selection.
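The pipeline the abstract describes, encoding a planning task as a graph, pooling node embeddings from a GNN into a fixed-size vector, and handing that vector to an XGBoost classifier that picks a planner, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the GCN-style layer, the mean pooling, and all weights and graph data are illustrative assumptions, and in practice the embedding would come from a trained model (e.g. PyTorch) with the final selector being `xgboost.XGBClassifier`.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style propagation step (Kipf & Welling normalization), ReLU activation."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

def graph_embedding(A, X, weights):
    """Stack GCN layers, then mean-pool node embeddings into one graph vector."""
    H = X
    for W in weights:
        H = gcn_layer(A, H, W)
    return H.mean(axis=0)                    # fixed-size vector for the whole graph

# Toy "planning task" graph: 4 nodes on a path, random 3-dim node features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
weights = [rng.normal(size=(3, 8)), rng.normal(size=(8, 16))]  # illustrative, untrained

z = graph_embedding(A, X, weights)
print(z.shape)  # (16,) — this vector would be the input row for the XGBoost selector
```

The key design point from the abstract is that the GNN is used as a feature extractor only: once graph embeddings like `z` are computed for the training set, a gradient-boosted tree model is trained on them, which is cheaper at inference time than running a full end-to-end GNN classifier.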

