
K-Lane: Lidar Lane Dataset and Benchmark for Urban Roads and Highways (2110.11048v3)

Published 21 Oct 2021 in cs.CV, cs.AI, and cs.RO

Abstract: Lane detection is a critical function for autonomous driving. With the recent development of deep learning and the publication of camera lane datasets and benchmarks, camera lane detection networks (CLDNs) have been remarkably developed. Unfortunately, CLDNs rely on camera images, which are often distorted near the vanishing line and prone to poor lighting conditions. This is in contrast with Lidar lane detection networks (LLDNs), which can directly extract the lane lines on the bird's eye view (BEV) for motion planning and operate robustly under various lighting conditions. However, LLDNs have not been actively studied, mostly due to the absence of large public lidar lane datasets. In this paper, we introduce KAIST-Lane (K-Lane), the world's first and largest public urban road and highway lane dataset for Lidar. K-Lane has more than 15K frames and contains annotations of up to six lanes under various road and traffic conditions, e.g., occluded roads of multiple occlusion levels, roads at day and night times, merging (converging and diverging) and curved lanes. We also provide baseline networks, which we term Lidar lane detection networks utilizing global feature correlator (LLDN-GFC). LLDN-GFC exploits the spatial characteristics of lane lines on the point cloud, which are sparse, thin, and stretched along the entire ground plane of the point cloud. From experimental results, LLDN-GFC achieves state-of-the-art performance with an F1-score of 82.1% on K-Lane. Moreover, LLDN-GFC shows strong performance under various lighting conditions, unlike CLDNs, and is robust even in the case of severe occlusions, unlike LLDNs using conventional CNNs. The K-Lane dataset, LLDN-GFC training code, pre-trained models, and complete development kits including evaluation, visualization, and annotation tools are available at https://github.com/kaist-avelab/k-lane.


Summary

  • The paper introduces the K-Lane dataset, the first large-scale Lidar-based lane detection resource with over 15,000 annotated frames covering diverse urban and highway conditions.
  • The paper proposes the LLDN-GFC network, a Lidar lane detection model that leverages global feature correlation to achieve a state-of-the-art 82.1% F1-score.
  • The paper offers a comprehensive benchmark suite with training code, pre-trained models, and evaluation tools to facilitate further research in autonomous driving.

Overview of K-Lane: Lidar Lane Dataset and Benchmark for Urban Roads and Highways

The paper "K-Lane: Lidar Lane Dataset and Benchmark for Urban Roads and Highways," presents a novel contribution to the field of lane detection in autonomous driving systems. It primarily addresses the limitations in conventional camera-based lane detection networks (CLDNs) and introduces the KAIST-Lane (K-Lane) dataset as a comprehensive Lidar-based solution. The research underscores Lidar's advantages over camera systems in providing robust lane detection, particularly under challenging lighting conditions and when projecting to a bird’s eye view (BEV) for motion planning.

Key Contributions

The paper's contributions are as follows:

  • K-Lane Dataset: This is presented as the world's first and largest publicly available Lidar-based lane detection dataset, encompassing over 15,000 frames. The dataset includes various challenging road conditions, such as multiple occlusion levels, daytime and nighttime environments, and complex lane geometries, including merging and curves. Each frame is annotated with up to six lanes, offering a diverse set for training and evaluation.
  • LLDN-GFC Baseline: The authors propose a Lidar lane detection network utilizing a global feature correlator (LLDN-GFC) as a baseline. The model exploits the spatial characteristics of lane lines in point clouds, which are sparse, thin, and stretched along the entire ground plane, a geometry that a CNN's limited receptive field handles poorly. LLDN-GFC achieves state-of-the-art performance, with an F1-score of 82.1% on the K-Lane dataset (a minimal sketch of the BEV preprocessing such a network builds on follows this list).
  • Benchmark and Evaluation Tools: Alongside the dataset, the paper introduces a suite of development tools, including code for training, pre-trained models, evaluation kits, and visualization tools. These resources are aimed at fostering further research and facilitating the adoption of Lidar for lane detection tasks.
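
To ground the discussion, the step every LLDN shares is projecting the raw point cloud onto a BEV grid before lane features are extracted. Below is a minimal NumPy sketch of such a projection; the grid extent, resolution, and per-cell features are illustrative assumptions, not K-Lane's actual configuration (which ships with the development kit).

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 46.08), y_range=(-11.52, 11.52),
                  grid=(144, 144)):
    """Project an (N, 4) Lidar point cloud [x, y, z, intensity] onto a
    BEV grid, max-pooling height and intensity per cell.

    Ranges and grid size are illustrative assumptions, not K-Lane's
    actual configuration.
    """
    h, w = grid
    bev = np.zeros((2, h, w), dtype=np.float32)  # channels: height, intensity

    # Keep only points inside the region of interest.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Map metric coordinates to integer cell indices.
    xi = ((pts[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * h).astype(int)
    yi = ((pts[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * w).astype(int)

    # Per-cell max of height and intensity (empty cells stay 0; points
    # below z = 0 are effectively ignored in this simplified sketch).
    np.maximum.at(bev[0], (xi, yi), pts[:, 2])
    np.maximum.at(bev[1], (xi, yi), pts[:, 3])
    return bev
```

In a full pipeline, a grid like this would feed the BEV feature extractor ahead of the global feature correlator.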

Numerical Results

LLDN-GFC is markedly more robust to varying lighting conditions and severe occlusions than conventional CNN-based approaches, a noted weakness of the latter. This robustness is quantitatively reflected in its F1-score of 82.1%, indicating high precision and recall across the diverse conditions captured in the K-Lane dataset.
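
For readers interpreting the headline number, the F1-score is the harmonic mean of precision and recall over matched lane predictions. The sketch below assumes the matching into true positives, false positives, and false negatives has already been done; the paper's exact matching criterion is implemented in its evaluation kit.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall over matched lane points."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# An F1 of ~0.821 arises, for example, from precision ~0.84 and recall ~0.80.
```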

Implications and Future Directions

The introduction of the K-Lane dataset and LLDN-GFC opens new avenues in autonomous driving research. Lidar's innate resilience to lighting variations, together with its ability to measure the road directly in the BEV frame used for motion planning, positions it as a strong candidate for lane detection in real-world applications. Moreover, the detailed annotations and extensive conditions captured in K-Lane provide a sandbox for developing and testing novel algorithms for Lidar-based perception.

The LLDN-GFC architecture, leveraging global feature correlation with Transformer and Mixer blocks (a generic Mixer-style block is sketched below), points toward feature extractors that generalize better across environmental conditions than CNNs with limited receptive fields. Future work could focus on optimizing the model's computational efficiency and extending its applicability to other autonomous driving tasks. Additionally, given how readily Lidar integrates with camera systems, research into sensor fusion could yield more comprehensive and robust perception solutions.
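
To make the "global feature correlator" idea concrete: unlike a CNN, whose receptive field grows slowly with depth, a Mixer-style block mixes information across all spatial positions of the BEV feature map in a single step, which suits lane lines that span the whole grid. The PyTorch sketch below is a generic MLP-Mixer-style token-mixing block; it illustrates the flavor of block involved and is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class TokenMixerBlock(nn.Module):
    """Mixer-style block: an MLP across all BEV tokens (global spatial
    mixing), then an MLP across channels, each with a residual connection."""

    def __init__(self, num_tokens: int, dim: int, hidden: int = 256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, hidden), nn.GELU(), nn.Linear(hidden, num_tokens)
        )
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim), tokens = flattened BEV patches.
        y = self.norm1(x).transpose(1, 2)          # (batch, dim, tokens)
        x = x + self.token_mlp(y).transpose(1, 2)  # mix across all positions
        x = x + self.channel_mlp(self.norm2(x))    # mix across channels
        return x
```

Because the token MLP spans every BEV position at once, a single block can correlate lane evidence from opposite ends of the grid, which is the kind of global receptive field the paper argues conventional CNN-based LLDNs lack.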

In conclusion, "K-Lane: Lidar Lane Dataset and Benchmark for Urban Roads and Highways" establishes an essential resource and a foundation for advancing robust lane detection mechanisms, underscoring the necessity of public datasets in propelling the field of autonomous vehicle research.
