Integrating One-Shot View Planning with a Single Next-Best View via Long-Tail Multiview Sampling (2304.00910v4)

Published 3 Apr 2023 in cs.RO

Abstract: Existing view planning systems either adopt an iterative paradigm using next-best views (NBV) or a one-shot pipeline relying on the set-covering view-planning (SCVP) network. However, neither of these methods can concurrently guarantee both high-quality and high-efficiency reconstruction of 3D unknown objects. To tackle this challenge, we introduce a crucial hypothesis: with the availability of more information about the unknown object, the prediction quality of the SCVP network improves. There are two ways to provide extra information: (1) leveraging perception data obtained from NBVs, and (2) training on an expanded dataset of multiview inputs. In this work, we introduce a novel combined pipeline that incorporates a single NBV before activating the proposed multiview-activated (MA-)SCVP network. The MA-SCVP is trained on a multiview dataset generated by our long-tail sampling method, which addresses the issue of unbalanced multiview inputs and enhances the network performance. Extensive simulated experiments substantiate that our system demonstrates a significant surface coverage increase and a substantial 45% reduction in movement cost compared to state-of-the-art systems. Real-world experiments justify the capability of our system for generalization and deployment.

Summary

  • The paper introduces a novel integration of one-shot view planning with next-best-view strategies to enhance 3D reconstruction efficiency.
  • It employs long-tail multiview sampling to balance the multiview training inputs, enabling a 45% reduction in movement cost relative to state-of-the-art systems.
  • Experimental results validate the method's high surface coverage and robust generalization on unknown objects in both simulated and real-world scenarios.

Overview of "Integrating One-Shot View Planning with a Single Next-Best View via Long-Tail Multiview Sampling"

This paper proposes a novel approach for improving both the efficiency and the quality of 3D object reconstruction with an active vision system. It integrates one-shot view planning with a single iterative next-best-view (NBV) step, and trains the one-shot network on data produced by a long-tail multiview sampling method. The aim is to deliver high-quality 3D reconstructions while significantly reducing movement cost, addressing the limitations of existing view planning paradigms.

Methodological Framework

The paper combines two existing paradigms, iterative NBV planning and one-shot view planning, in a single pipeline. The authors hypothesize that additional information about the unknown object improves the prediction quality of the one-shot (SCVP) network, and they pursue this in two directions: (1) leveraging perception data obtained from an initial NBV, and (2) expanding the training dataset with multiview inputs generated by long-tail sampling.
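
To make the combined pipeline concrete, the following is a minimal Python sketch of its control flow. The `sensor`, `nbv_planner`, and `ma_scvp` interfaces are illustrative assumptions, not the authors' released API:

```python
def reconstruct(sensor, nbv_planner, ma_scvp, view_space):
    """Sketch of the combined pipeline (all interfaces are hypothetical)."""
    # 1. Observe the unknown object from an initial view.
    octomap = sensor.capture(view_space[0])

    # 2. Take a single NBV step to gather extra information before
    #    activating the one-shot network.
    nbv = nbv_planner.best_view(octomap, view_space)
    octomap = sensor.capture(nbv, merge_into=octomap)

    # 3. One-shot prediction: MA-SCVP outputs a 0/1 mask over the candidate
    #    view space marking a set of views that covers the remaining surface.
    mask = ma_scvp.predict(octomap)
    remaining = [v for v, keep in zip(view_space, mask) if keep]

    # 4. Since the full view set is now known, visit it along a short global
    #    tour (a travelling-salesman-style ordering) to minimize movement cost.
    for view in nbv_planner.shortest_tour(remaining):
        octomap = sensor.capture(view, merge_into=octomap)
    return octomap
```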

Key Innovations

  • Long-Tail Multiview Sampling: This strategy exploits the long-tail distribution of surface coverage gains across views: a few views cover the majority of an object's surface, while subsequent views contribute marginally. This insight guides the construction of a training dataset that captures the essential object information with fewer views (see the sketch after this list).
  • Multiview-Activated SCVP Network: The proposed model is trained on a curated multiview dataset to optimize for high-quality reconstructions with fewer views, demonstrating the efficacy of incorporating additional perception data from a single NBV.
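
As a rough illustration of long-tail sampling, the sketch below draws multiview training cases whose subset sizes follow a long-tailed distribution, so few-view cases dominate the dataset. The geometric-style distribution and its parameter are assumptions made here for illustration, not the paper's specification:

```python
import random

def sample_multiview_cases(view_space, num_cases, p=0.5):
    """Draw multiview training cases whose sizes follow a long-tailed
    (geometric-style) distribution: single-view cases are most common and
    larger view sets become increasingly rare."""
    cases = []
    for _ in range(num_cases):
        size = 1
        # Each extra view is added with probability (1 - p), producing a
        # geometric tail over subset sizes.
        while size < len(view_space) and random.random() > p:
            size += 1
        cases.append(random.sample(view_space, size))
    return cases

# Example: 32 candidate views, 10,000 training cases.
dataset = sample_multiview_cases(list(range(32)), num_cases=10_000)
```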

Experimental Results

The paper reports extensive validation in simulated reconstruction scenarios and real-world deployments, showing a significant increase in surface coverage and a 45% reduction in movement cost compared to state-of-the-art systems. The system also generalizes well to unknown objects, supporting its practical deployment in real-world settings.
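
As a rough guide to how such comparisons are typically scored, the sketch below computes surface coverage (the fraction of ground-truth surface points matched within a tolerance) and movement cost (path length over the visited views). The exact metrics and tolerance used in the paper may differ; the 5 mm threshold here is an illustrative choice:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_coverage(recon_pts, gt_pts, tol=0.005):
    """Fraction of ground-truth surface points that have a reconstructed
    point within `tol` meters."""
    dists, _ = cKDTree(np.asarray(recon_pts)).query(np.asarray(gt_pts))
    return float(np.mean(dists <= tol))

def movement_cost(view_positions):
    """Total Euclidean path length over the sequence of visited views."""
    steps = np.diff(np.asarray(view_positions, dtype=float), axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())
```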

Implications and Future Directions

The proposed approach has broad implications for robotics and computer vision, particularly in applications requiring efficient and accurate environmental modeling. By optimizing view planning with additional insights from perception data and multiview learning, this research establishes a foundation for future exploration in automated inspection, manufacturing, and search and rescue operations.

Looking forward, possible extensions of this work may involve refining the approach to accommodate dynamic scenes, exploring its integration with neural field representations like NeRF for even higher-quality reconstructions, and further optimizing computational strategies for deployment on constrained hardware platforms. Additionally, investigating the adaptability of this approach to larger-scale scenes and more complex environmental conditions would further enhance its utility.

Overall, the integration of one-shot view planning with iterative methods represents a significant step forward in the field, promising improved efficiency and robustness in autonomous robotic systems.
