
Online Supervised Training of Spaceborne Vision during Proximity Operations using Adaptive Kalman Filtering (2309.11645v2)

Published 20 Sep 2023 in cs.RO

Abstract: This work presents an Online Supervised Training (OST) method to enable robust vision-based navigation about a non-cooperative spacecraft. Spaceborne Neural Networks (NN) are susceptible to the domain gap because they are trained primarily on synthetic images, owing to the inaccessibility of space. OST aims to close this gap by training a pose estimation NN online on incoming flight images during Rendezvous and Proximity Operations (RPO). The pseudo-labels are provided by an adaptive unscented Kalman filter in which the NN serves as the measurement module. Specifically, the filter tracks the target's relative orbital and attitude motion, and its accuracy is ensured by robust on-ground training of the NN using only synthetic data. Experiments on real hardware-in-the-loop trajectory images show that OST improves the NN's performance on the target image domain, provided that OST is performed on images of the target viewed from a diverse set of directions during RPO.
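The loop the abstract describes — a filter propagates the target's motion, the NN supplies a pose measurement, and the filtered estimate is fed back as a pseudo-label for online training — can be sketched in miniature. The sketch below is a hypothetical 1-D illustration under simplifying assumptions (a scalar linear Kalman filter stands in for the paper's adaptive unscented Kalman filter, and `nn_measure`/`nn_train` are placeholder callables, not the paper's network); it shows only the data flow, not the actual method.

```python
class ToyPoseFilter:
    """Scalar stand-in for the paper's adaptive UKF (illustrative only)."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.5):
        # state estimate, covariance, process noise, measurement noise
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def predict(self):
        # propagate covariance (static dynamics for this toy example)
        self.p += self.q
        return self.x

    def update(self, z):
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= 1.0 - k
        return self.x                   # filtered estimate = pseudo-label


def ost_step(filter_, nn_measure, nn_train, image):
    """One OST iteration: NN measures, filter smooths, NN trains on the result."""
    filter_.predict()
    z = nn_measure(image)               # NN used in the loop as measurement module
    pseudo_label = filter_.update(z)
    nn_train(image, pseudo_label)       # online supervised update on flight imagery
    return pseudo_label


# Illustrative usage: the "images" here are just noisy pose readings around 1.0,
# and nn_train merely logs its (image, pseudo-label) training pairs.
f = ToyPoseFilter()
train_log = []
labels = [
    ost_step(f, lambda img: img, lambda img, y: train_log.append((img, y)), z)
    for z in [1.2, 0.9, 1.1, 1.0, 0.95]
]
```

The point of the feedback structure is that the pseudo-labels inherit the filter's smoothing: each training target is a covariance-weighted blend of the dynamics prediction and the NN's raw measurement, so the NN is not simply trained on its own noisy outputs.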
