
Vision-Based Autonomous Vehicle Control using the Two-Point Visual Driver Control Model (1910.04862v1)

Published 29 Sep 2019 in cs.CV and cs.RO

Abstract: This work proposes a new self-driving framework that uses a human driver control model, whose feature-input values are extracted from images using deep convolutional neural networks (CNNs). The development of image processing techniques using CNNs along with accelerated computing hardware has recently enabled real-time detection of these feature-input values. The use of human driver models can lead to more "natural" driving behavior of self-driving vehicles. Specifically, we use the well-known two-point visual driver control model as the controller, and we use a top-down lane cost map CNN and the YOLOv2 CNN to extract feature-input values. This framework relies exclusively on inputs from low-cost sensors like a monocular camera and wheel speed sensors. We experimentally validate the proposed framework on an outdoor track using a 1/5th-scale autonomous vehicle platform.
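The controller named in the abstract, the two-point visual driver control model (Salvucci and Gray, 2004), commands a steering rate from two visual angles: a "near" point used for lane keeping and a "far" point used for anticipating road curvature. The sketch below is a minimal illustration of that control law, not the paper's implementation; the gains and the toy kinematics are assumed values chosen only to show the structure of the model.

```python
# Minimal sketch of the two-point visual driver control model
# (Salvucci & Gray, 2004).  The common formulation drives the
# steering-wheel rate from the far-point and near-point visual angles:
#     delta_dot = k_f * theta_far_dot + k_n * theta_near_dot + k_i * theta_near
# Gains below are illustrative assumptions, not the paper's tuned values.

def two_point_steering_rate(theta_near, theta_far,
                            theta_near_dot, theta_far_dot,
                            k_f=20.0, k_n=10.0, k_i=5.0):
    """Return the steering-wheel angular rate (rad/s)."""
    return k_f * theta_far_dot + k_n * theta_near_dot + k_i * theta_near


def simulate(steps=5, dt=0.05):
    """Toy closed loop: Euler-integrate the steering command while the
    visual angles relax as the vehicle turns (kinematics are assumed)."""
    delta = 0.0                               # steering-wheel angle (rad)
    theta_near, theta_far = 0.1, 0.05         # rad; vehicle offset to one side
    prev_near, prev_far = theta_near, theta_far
    for _ in range(steps):
        near_dot = (theta_near - prev_near) / dt
        far_dot = (theta_far - prev_far) / dt
        delta += two_point_steering_rate(theta_near, theta_far,
                                         near_dot, far_dot) * dt
        prev_near, prev_far = theta_near, theta_far
        # toy vehicle response: steering toward the lane reduces both angles
        theta_near -= 0.5 * delta * dt
        theta_far -= 0.2 * delta * dt
    return delta
```

In the paper's framework, the two angles would come from the CNN outputs (the top-down lane cost map supplies the lane geometry; YOLOv2 detections can supply the far-point target), whereas here they are hand-set constants.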

Authors (3)
  1. Justin Zheng (1 paper)
  2. Kazuhide Okamoto (8 papers)
  3. Panagiotis Tsiotras (110 papers)
Citations (4)
