GrASPE: Graph based Multimodal Fusion for Robot Navigation in Unstructured Outdoor Environments (2209.05722v3)

Published 13 Sep 2022 in cs.RO

Abstract: We present a novel trajectory traversability estimation and planning algorithm for robot navigation in complex outdoor environments. We incorporate multimodal sensory inputs from an RGB camera, a 3D LiDAR, and the robot's odometry sensor to train a prediction model that estimates candidate trajectories' success probabilities from partially reliable multimodal sensor observations. We encode high-dimensional multimodal sensory inputs into low-dimensional feature vectors using encoder networks and represent them as a connected graph. The graph is then used to train an attention-based Graph Neural Network (GNN) to predict trajectory success probabilities. We further analyze the number of features in the image (corners) and point cloud data (edges and planes) separately to quantify their reliability, and use these scores to augment the weights of the feature graph representation used in our GNN. During runtime, our model uses multi-sensor inputs to predict the success probabilities of the trajectories generated by a local planner to avoid potential collisions and failures. Our algorithm makes robust predictions when one or more sensor modalities are unreliable or unavailable in complex outdoor environments. We evaluate our algorithm's navigation performance using a Spot robot in real-world outdoor environments and observe a 10-30% increase in navigation success rate and a 13-15% decrease in false positive estimations compared to state-of-the-art navigation methods.
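To make the fusion step concrete, below is a minimal sketch (not the authors' code) of reliability-weighted attention fusion over a small graph of per-modality feature nodes, ending in a trajectory success probability. All module names, feature dimensions, and the toy single-head attention layer are illustrative assumptions; the paper's actual encoders, graph construction, and GNN architecture differ in detail.

```python
# Sketch: encode RGB / LiDAR / odometry features into a shared embedding space,
# fuse them with attention over a fully connected modality graph, and score a
# candidate trajectory. Reliability scores (e.g. from image corner counts or
# point-cloud edge/plane features) down-weight degraded modalities.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Projects one sensor modality's features to a shared embedding size."""
    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))

    def forward(self, x):
        return self.net(x)


class GraphAttentionFusion(nn.Module):
    """Single attention layer over modality-node embeddings, with attention
    weights scaled by per-modality reliability before pooling."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.query = nn.Linear(emb_dim, emb_dim)
        self.key = nn.Linear(emb_dim, emb_dim)
        self.value = nn.Linear(emb_dim, emb_dim)
        self.head = nn.Sequential(nn.Linear(emb_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, nodes, reliability):
        # nodes: (N, emb_dim) modality embeddings; reliability: (N,) in [0, 1]
        q, k, v = self.query(nodes), self.key(nodes), self.value(nodes)
        attn = torch.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)
        attn = attn * reliability.unsqueeze(0)           # suppress unreliable modalities
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        fused = (attn @ v).mean(dim=0)                   # pooled graph embedding
        return torch.sigmoid(self.head(fused))           # trajectory success probability


# Toy usage with assumed feature dimensions for the three modalities.
rgb_enc, lidar_enc, odom_enc = ModalityEncoder(512), ModalityEncoder(256), ModalityEncoder(6)
fusion = GraphAttentionFusion()
nodes = torch.stack([rgb_enc(torch.randn(512)),
                     lidar_enc(torch.randn(256)),
                     odom_enc(torch.randn(6))])
reliability = torch.tensor([0.9, 0.4, 1.0])              # e.g. LiDAR partially degraded
p_success = fusion(nodes, reliability)
print(p_success.item())
```

In this sketch the reliability vector plays the role of the feature-count analysis described in the abstract: modalities judged less trustworthy contribute less to the fused embedding, so a single unreliable or missing sensor does not dominate the trajectory score.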

Authors (6)
  1. Kasun Weerakoon (21 papers)
  2. Adarsh Jagan Sathyamoorthy (23 papers)
  3. Jing Liang (89 papers)
  4. Tianrui Guan (29 papers)
  5. Utsav Patel (9 papers)
  6. Dinesh Manocha (366 papers)
Citations (20)