
Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints (2312.17605v1)

Published 29 Dec 2023 in cs.RO and cs.AI

Abstract: In task and motion planning (TAMP), the ambiguity and underdetermination of the abstract descriptions used by task planners make it difficult to characterize the physical constraints a task must satisfy to be executed successfully. The usual approach is to ignore such constraints at the task-planning level and to rely on expensive sub-symbolic geometric reasoning that repeatedly evaluates infeasible actions, corrects plans, and re-plans until a feasible solution is found. We propose an alternative TAMP approach that unifies task and motion planning in a single heuristic search. Our approach is based on an object-centric abstraction of motion constraints that leverages the computational efficiency of off-the-shelf AI heuristic search to yield physically feasible plans. These plans can be transformed directly into object and motion parameters for task execution without intensive sub-symbolic geometric reasoning.
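The core idea in the abstract, checking object-centric motion constraints inside the symbolic search itself rather than through a separate geometric-reasoning loop, can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: states pair logical facts with a simple object-centric geometry (free placement slots per surface), and the feasibility check prunes infeasible moves during an A*-style search.

```python
import heapq
import itertools

def successors(facts, slots):
    """Yield (action, new_facts, new_slots) for each applicable move.
    The object-centric motion constraint -- the target surface must have a
    free placement slot -- is checked here, at the symbolic level, so
    infeasible actions never enter the search."""
    for fact in facts:
        if fact[0] != "on":
            continue
        _, obj, src = fact
        for dst in slots:
            if dst != src and slots[dst] > 0:
                new_facts = (facts - {("on", obj, src)}) | {("on", obj, dst)}
                new_slots = dict(slots)
                new_slots[src] += 1   # source frees a slot
                new_slots[dst] -= 1   # destination consumes one
                yield ("move", obj, src, dst), frozenset(new_facts), new_slots

def heuristic(facts, goal):
    # number of unsatisfied goal facts (admissible for unit-cost actions)
    return len(goal - facts)

def plan(init_facts, init_slots, goal):
    """A*-style best-first search over symbolic states."""
    tie = itertools.count()  # tiebreaker so heapq never compares states
    frontier = [(heuristic(frozenset(init_facts), goal), 0, next(tie),
                 frozenset(init_facts), dict(init_slots), [])]
    seen = set()
    while frontier:
        _, g, _, facts, slots, path = heapq.heappop(frontier)
        if goal <= facts:
            return path
        key = (facts, tuple(sorted(slots.items())))
        if key in seen:
            continue
        seen.add(key)
        for action, nf, ns in successors(facts, slots):
            heapq.heappush(frontier, (g + 1 + heuristic(nf, goal), g + 1,
                                      next(tie), nf, ns, path + [action]))
    return None  # no physically feasible plan exists
```

Because every state the search expands already satisfies the abstracted motion constraints, a returned plan needs no geometric backtracking; an infeasible goal (e.g., no free slot on the target surface) simply yields no plan.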

References (22)
  1. N. T. Dantam, Z. K. Kingston, S. Chaudhuri, and L. E. Kavraki, “An incremental constraint-based framework for task and motion planning,” The International Journal of Robotics Research, vol. 37, no. 10, pp. 1134–1151, 2018.
  2. C. R. Garrett, T. Lozano-Pérez, and L. P. Kaelbling, “PDDLStream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning,” in Proceedings of the International Conference on Automated Planning and Scheduling, vol. 30, 2020, pp. 440–448.
  3. C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez, “Integrated task and motion planning,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 4, pp. 265–293, 2021.
  4. L. P. Kaelbling and T. Lozano-Pérez, “Hierarchical task and motion planning in the now,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 1470–1477.
  5. R. Lallement, L. De Silva, and R. Alami, “HATP: An HTN planner for robotics,” arXiv preprint arXiv:1405.5345, 2014.
  6. M. Colledanchise and P. Ögren, “How behavior trees modularize hybrid control systems and generalize sequential behavior compositions, the subsumption architecture, and decision trees,” IEEE Transactions on Robotics, vol. 33, no. 2, pp. 372–389, 2017.
  7. A. M. Wells, N. T. Dantam, A. Shrivastava, and L. E. Kavraki, “Learning feasibility for task and motion planning in tabletop environments,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1255–1262, 2019.
  8. J. Bidot, L. Karlsson, F. Lagriffoul, and A. Saffiotti, “Geometric backtracking for combined task and motion planning in robotic systems,” Artificial Intelligence, vol. 247, pp. 229–265, 2017.
  9. F. Lagriffoul and B. Andres, “Combining task and motion planning: A culprit detection problem,” The International Journal of Robotics Research, vol. 35, no. 8, pp. 890–927, 2016.
  10. M. Toussaint, “Logic-geometric programming: An optimization-based approach to combined task and motion planning.” in Proc. of the International Joint Conference on Artificial Intelligence (IJCAI), 2015, pp. 1930–1936.
  11. E. Fernandez-Gonzalez, E. Karpas, and B. Williams, “Mixed discrete-continuous planning with convex optimization,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, 2017.
  12. N. Castaman, E. Pagello, E. Menegatti, and A. Pretto, “Receding horizon task and motion planning in changing environments,” Robotics and Autonomous Systems, vol. 145, p. 103863, 2021.
  13. O. Kroemer, S. Niekum, and G. Konidaris, “A review of robot learning for manipulation: Challenges, representations, and algorithms,” arXiv preprint arXiv:1907.03146, 2019.
  14. C. Devin, P. Abbeel, T. Darrell, and S. Levine, “Deep object-centric representations for generalizable robot learning,” in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 7111–7118.
  15. J. Wang, C. Hu, Y. Wang, and Y. Zhu, “Dynamics learning with object-centric interaction networks for robot manipulation,” IEEE Access, vol. 9, pp. 68277–68288, 2021.
  16. R. Veerapaneni, J. D. Co-Reyes, M. Chang, M. Janner, C. Finn, J. Wu, J. B. Tenenbaum, and S. Levine, “Entity abstraction in visual model-based reinforcement learning,” arXiv preprint arXiv:1910.12827, 2019.
  17. J. E. King, M. Cognetti, and S. S. Srinivasa, “Rearrangement planning using object-centric and robot-centric action spaces,” in 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 3940–3947.
  18. A. Agostini and D. Lee, “Efficient state abstraction using object-centered predicates for manipulation planning,” arXiv preprint arXiv:2007.08251, 2020.
  19. A. Agostini, M. Saveriano, D. Lee, and J. Piater, “Manipulation planning using object-centered predicates and hierarchical decomposition of contextual actions,” IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5629–5636, 2020.
  20. D. McDermott, M. Ghallab, A. Howe, C. Knoblock, A. Ram, M. Veloso, D. Weld, and D. Wilkins, “PDDL: The Planning Domain Definition Language,” CVC TR-98-003/DCS TR-1165, Yale Center for Computational Vision and Control, Tech. Rep., 1998.
  21. C. R. Garrett, T. Lozano-Pérez, and L. P. Kaelbling, “FFRob: Leveraging symbolic planning for efficient task and motion planning,” The International Journal of Robotics Research, vol. 37, no. 1, pp. 104–136, 2018.
  22. M. Helmert, “The Fast Downward planning system,” Journal of Artificial Intelligence Research (JAIR), vol. 26, pp. 191–246, 2006.
Citations (3)
