4D Panoptic LiDAR Segmentation (2102.12472v2)
Abstract: Temporal semantic scene understanding is critical for self-driving cars or robots operating in dynamic environments. In this paper, we propose 4D panoptic LiDAR segmentation to assign a semantic class and a temporally consistent instance ID to a sequence of 3D points. To this end, we present an approach and a point-centric evaluation metric. Our approach determines a semantic class for every point while modeling object instances as probability distributions in the 4D spatio-temporal domain. We process multiple point clouds in parallel and resolve point-to-instance associations, effectively eliminating the need for explicit temporal data association. Inspired by recent advances in benchmarking of multi-object tracking, we propose to adopt a new evaluation metric that separates the semantic and point-to-instance association aspects of the task. With this work, we aim to pave the way for future developments of temporal LiDAR panoptic perception.
- Mehmet Aygün
- Aljoša Ošep
- Mark Weber
- Maxim Maximov
- Cyrill Stachniss
- Jens Behley
- Laura Leal-Taixé
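The abstract describes an evaluation metric that decouples the semantic and point-to-instance association aspects of the task, but does not spell it out. The following is a minimal, hedged sketch of such a decomposition: a semantic term (mean IoU over classes) and an association term (overlap-weighted IoU between ground-truth and predicted 4D instance "tubes" of point IDs), combined by a geometric mean so that neither aspect can dominate. The function names and the exact weighting here are illustrative assumptions, not the paper's precise metric definition.

```python
import math
from collections import defaultdict

def semantic_score(gt_sem, pred_sem, classes):
    # Mean IoU over semantic classes present in gt or prediction.
    ious = []
    for c in classes:
        tp = sum(1 for g, p in zip(gt_sem, pred_sem) if g == c and p == c)
        fp = sum(1 for g, p in zip(gt_sem, pred_sem) if g != c and p == c)
        fn = sum(1 for g, p in zip(gt_sem, pred_sem) if g == c and p != c)
        if tp + fp + fn > 0:
            ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious) if ious else 0.0

def association_score(gt_inst, pred_inst):
    # gt_inst / pred_inst: per-point 4D instance IDs (0 = "no instance").
    # Each instance is treated as a spatio-temporal tube, i.e. a set of
    # point indices aggregated over the whole sequence.
    gt_tubes, pred_tubes = defaultdict(set), defaultdict(set)
    for idx, g in enumerate(gt_inst):
        if g:
            gt_tubes[g].add(idx)
    for idx, p in enumerate(pred_inst):
        if p:
            pred_tubes[p].add(idx)
    total_gt = sum(len(pts) for pts in gt_tubes.values())
    if total_gt == 0:
        return 0.0
    score = 0.0
    for g_pts in gt_tubes.values():
        for p_pts in pred_tubes.values():
            overlap = len(g_pts & p_pts)
            if overlap:
                # Weight each tube-pair IoU by its overlap size, so large
                # well-associated tubes contribute proportionally.
                score += overlap * overlap / len(g_pts | p_pts)
    return score / total_gt

def segmentation_tracking_quality(gt_sem, pred_sem, gt_inst, pred_inst, classes):
    # Geometric mean decouples classification from association quality:
    # a perfect score requires being good at both.
    return math.sqrt(semantic_score(gt_sem, pred_sem, classes)
                     * association_score(gt_inst, pred_inst))
```

Note that the association term is invariant to the actual instance ID values, so a predicted tube only needs to cover the same points consistently over time, mirroring the "temporally-consistent instance ID" requirement in the abstract.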