
Task-relevant Representation Learning for Networked Robotic Perception (2011.03216v1)

Published 6 Nov 2020 in cs.RO, cs.CV, cs.IT, cs.NI, cs.SY, eess.SY, and math.IT

Abstract: Today, even the most compute-and-power constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today's representations for sensory data are mostly designed for human, not robotic, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification.
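The abstract describes co-designing a compressed sensory representation with a pre-trained perception model's task objective. The sketch below illustrates that idea in PyTorch under stated assumptions: a classification-style task, flattened observations, and small illustrative layer sizes. It is not the paper's actual architecture or training procedure, only a minimal rendering of "back-propagate the frozen task model's loss through an encoder/decoder so the transmitted code keeps only task-relevant information."

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a flattened sensory observation and a small transmitted code.
OBS_DIM, CODE_DIM, NUM_CLASSES = 1024, 16, 8

class Encoder(nn.Module):
    """Runs on the robot: compresses the observation into a low-bitrate code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, CODE_DIM))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Runs on the server: expands the code back into the task model's input space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(CODE_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, OBS_DIM))
    def forward(self, z):
        return self.net(z)

def train_step(encoder, decoder, task_model, optimizer, x, y):
    """One step of task-relevant representation learning: the frozen,
    pre-trained task model's loss is back-propagated through the decoder
    and encoder, so the code is shaped by the task objective."""
    task_model.eval()
    for p in task_model.parameters():
        p.requires_grad_(False)        # task model stays frozen
    z = encoder(x)                     # code transmitted over the network
    x_hat = decoder(z)                 # server-side reconstruction fed to the task model
    loss = nn.functional.cross_entropy(task_model(x_hat), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example wiring; task_model stands in for any pre-trained perception network.
encoder, decoder = Encoder(), Decoder()
task_model = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(),
                           nn.Linear(128, NUM_CLASSES))
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                             lr=1e-3)
loss = train_step(encoder, decoder, task_model, optimizer,
                  torch.randn(32, OBS_DIM), torch.randint(0, NUM_CLASSES, (32,)))
```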

Authors (6)
  1. Manabu Nakanoya (3 papers)
  2. Sandeep Chinchali (41 papers)
  3. Alexandros Anemogiannis (1 paper)
  4. Akul Datta (4 papers)
  5. Sachin Katti (20 papers)
  6. Marco Pavone (314 papers)
Citations (5)
