Vision-Based Robust Lane Detection and Tracking under Different Challenging Environmental Conditions (2210.10233v3)

Published 19 Oct 2022 in cs.CV

Abstract: Lane marking detection is fundamental for advanced driving assistance systems. However, lane detection is highly challenging when the visibility of road lane markings is low due to challenging real-life environments and adverse weather. Most lane detection methods suffer from four types of challenges: (i) light effects, i.e., shadow, glare, reflection, etc.; (ii) obscured visibility of eroded, blurred, colored, and cracked lane markings caused by natural disasters and adverse weather; (iii) occlusion of lane markings by surrounding objects (wiper, vehicles, etc.); and (iv) presence of confusing lane-like lines inside the lane view, e.g., guardrails, pavement markings, road dividers, etc. Here, we propose a robust lane detection and tracking method with three key technologies. First, we introduce a comprehensive intensity threshold range (CITR) to improve the performance of the Canny operator in detecting low-intensity lane edges. Second, we propose a two-step lane verification technique, the angle-based geometric constraint (AGC) and the length-based geometric constraint (LGC), applied after the Hough Transform, to verify the characteristics of lane markings and to prevent incorrect lane detection. Finally, we propose a novel lane tracking technique that defines a range of horizontal lane position (RHLP) along the x-axis, which is updated with respect to the lane position of the previous frame. It can keep track of the lane position when either the left or right lane marking, or both, are partially or fully invisible. To evaluate the performance of the proposed method, we used the DSDLDE [1] and SLD [2] datasets, with 1080x1920 and 480x720 resolutions at 24 and 25 frames/sec, respectively. Experimental results show that the average detection rate is 97.55% and the average processing time is 22.33 msec/frame, which outperforms the state-of-the-art method.
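
The detection pipeline outlined in the abstract (intensity-adaptive Canny edge detection, a Hough Transform, and angle- and length-based verification of candidate line segments) can be sketched in Python with OpenCV roughly as follows. This is a minimal illustrative sketch, not the authors' code: the median-based threshold heuristic and the angle and length bounds are assumptions standing in for the paper's CITR, AGC, and LGC definitions.

import cv2
import numpy as np

def detect_lane_candidates(gray_roi,
                           min_angle_deg=25.0, max_angle_deg=65.0,
                           min_length_px=40.0):
    # Derive low/high Canny thresholds from the median intensity of the ROI.
    # This is a common heuristic used here only for illustration; the paper's
    # CITR is defined differently.
    med = np.median(gray_roi)
    low = int(max(0, 0.66 * med))
    high = int(min(255, 1.33 * med))
    edges = cv2.Canny(gray_roi, low, high)

    # Probabilistic Hough Transform returns line segments (x1, y1, x2, y2).
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=30, minLineLength=20, maxLineGap=10)
    if segments is None:
        return []

    lanes = []
    for x1, y1, x2, y2 in segments[:, 0]:
        length = np.hypot(x2 - x1, y2 - y1)
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle > 90.0:
            angle = 180.0 - angle  # fold to [0, 90] so left- and right-leaning segments are treated alike
        # Angle-based check: keep segments whose slope is plausible for a lane
        # marking; length-based check: discard short clutter and noise.
        if min_angle_deg <= angle <= max_angle_deg and length >= min_length_px:
            lanes.append((x1, y1, x2, y2))
    return lanes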

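The RHLP tracking idea, keeping a horizontal search range that follows the lane position of the previous frame and carrying it over when a marking is temporarily invisible, can be illustrated with a small sketch. The window width, smoothing weight, and update rule below are assumptions for demonstration, not the paper's exact formulation.

class HorizontalLaneTracker:
    def __init__(self, init_x, half_width=60, smoothing=0.6):
        self.x = float(init_x)          # tracked lane x-position (pixels)
        self.half_width = half_width    # half of the allowed horizontal range
        self.smoothing = smoothing      # weight given to the previous estimate

    def in_range(self, x):
        # Accept only detections that fall inside the current horizontal range.
        return abs(x - self.x) <= self.half_width

    def update(self, detected_x):
        # detected_x is None when the marking is not found in this frame;
        # in that case the previous position is kept (tracking through occlusion).
        if detected_x is not None and self.in_range(detected_x):
            self.x = self.smoothing * self.x + (1.0 - self.smoothing) * detected_x
        return self.x
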
Citations (20)
