Recursive Distillation for Open-Set Distributed Robot Localization (2312.15897v2)
Abstract: A typical assumption in state-of-the-art self-localization models is that an annotated training dataset is available for the target workspace. However, this is not necessarily true when a robot travels around the general open world. This work introduces a novel training scheme for open-world distributed robot systems. In our scheme, a robot ("student") can ask the other robots it meets at unfamiliar places ("teachers") for guidance. Specifically, a pseudo-training dataset is reconstructed from the teacher model and then used for continual learning of the student model under a domain-, class-, and vocabulary-incremental setup. Unlike typical knowledge transfer schemes, our scheme makes only minimal assumptions about the teacher model, so it can handle various types of open-set teachers, including uncooperative teachers, untrainable teachers (e.g., image retrieval engines), and black-box teachers (e.g., for data privacy). In this paper, we investigate a ranking function as an instance of such generic models in a challenging data-free recursive distillation scenario, where a student, once trained, can recursively join the next-generation open teacher set.
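The pipeline the abstract describes can be illustrated with a minimal sketch: a teacher exposing only a ranking interface, pseudo-label reconstruction from that interface, and a student trained on the result. All class and function names below (`RankingTeacher`, `build_pseudo_dataset`, `CentroidStudent`) are hypothetical illustrations, not the paper's implementation, and the nearest-centroid student is a stand-in for whatever classifier the student robot actually carries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box teacher: only rank() is exposed to the student.
# Internally it is a nearest-neighbour retrieval engine, but its database
# and parameters stay private (the "data privacy" / untrainable case).
class RankingTeacher:
    def __init__(self, db_features, db_place_ids):
        self.db = db_features    # (N, D) private feature database
        self.ids = db_place_ids  # place class ID per database entry

    def rank(self, query):
        """Return place IDs sorted by similarity to the query (best first)."""
        dists = np.linalg.norm(self.db - query, axis=1)
        return self.ids[np.argsort(dists)]

# Pseudo-training-dataset reconstruction: label each unannotated query
# with the teacher's top-ranked place.
def build_pseudo_dataset(teacher, queries):
    pseudo_labels = np.array([teacher.rank(q)[0] for q in queries])
    return queries, pseudo_labels

# Student: a nearest-centroid place classifier fit on the pseudo-labels.
class CentroidStudent:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.stack([X[y == c].mean(axis=0) for c in self.classes])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None], axis=2)
        return self.classes[np.argmin(d, axis=1)]

# Toy workspace: two well-separated places in a 2-D feature space.
db = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
ids = np.array([0] * 20 + [1] * 20)
teacher = RankingTeacher(db, ids)

# The student meets the teacher at an unfamiliar place and queries it.
queries = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
X, y_pseudo = build_pseudo_dataset(teacher, queries)
student = CentroidStudent().fit(X, y_pseudo)
```

Because the trained student can itself answer ranking queries (e.g., by sorting classes by centroid distance), it can join the next-generation open teacher set, which is the recursive-distillation loop the paper studies.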