Speed-up Heuristic for an On-Demand Ride-Pooling Algorithm (2007.14877v1)

Published 29 Jul 2020 in eess.SY, cs.SY, and math.OC

Abstract: With ongoing digitalization and advances in autonomous driving, on-demand ride pooling is a mobility service with the potential to disrupt the urban mobility market. Nevertheless, to deploy this kind of service successfully, efficient fleet-management algorithms have to be implemented in order to exploit its benefits. In particular, finding beneficial assignments in real time remains an unsolved problem for large problem sizes. In this study, we show the importance of using advanced algorithms by comparing a fast but simple insertion heuristic with a state-of-the-art multi-step matching algorithm. We test the algorithms in various scenarios based on private vehicle trip OD data for Munich, Germany. Results indicate that, in the tested scenarios, the multi-step algorithm serves up to 8% more requests while also saving an additional 10% of driven distance. However, the computational time for finding optimal assignments with the advanced algorithm quickly exceeds real time as the problem size increases. Therefore, several measures to reduce the computational time by eliminating redundant checks in the advanced multi-step algorithm are introduced. Finally, a refined vehicle selection heuristic based on three rules is presented to further reduce the computational effort. In the tested scenarios this heuristic speeds up the most cost-intensive algorithm step by a factor of over 8, while keeping the number of served requests almost constant and maintaining around 70% of the driven distance saved in the system. Considering all algorithm steps, an overall speed-up of 2.5 is achieved.
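
To make the baseline concrete, the following is a minimal sketch of a generic insertion heuristic for ride pooling, not the paper's implementation. The class names (Vehicle, Request), the function best_insertion, and the straight-line distance metric are assumptions made here for illustration; the actual algorithm additionally accounts for network travel times, vehicle capacities, and time-window constraints.

```python
# Minimal sketch of a generic insertion heuristic for ride pooling.
# Illustration only: names and the straight-line distance metric are assumptions;
# the paper's method also handles capacities, time windows, and network routing.

from dataclasses import dataclass, field
from math import hypot


@dataclass
class Request:
    pickup: tuple    # (x, y) pickup location
    dropoff: tuple   # (x, y) drop-off location


@dataclass
class Vehicle:
    position: tuple                              # current (x, y) location
    stops: list = field(default_factory=list)    # planned stop sequence


def route_length(start, stops):
    """Total straight-line length of a route starting at `start`."""
    length, prev = 0.0, start
    for stop in stops:
        length += hypot(stop[0] - prev[0], stop[1] - prev[1])
        prev = stop
    return length


def best_insertion(vehicles, request):
    """Try every pickup/drop-off insertion position in every vehicle's plan
    and return (vehicle, new_stop_list) with the smallest added distance."""
    best = None
    for veh in vehicles:
        base = route_length(veh.position, veh.stops)
        n = len(veh.stops)
        for i in range(n + 1):              # pickup position
            for j in range(i + 1, n + 2):   # drop-off must follow pickup
                stops = list(veh.stops)
                stops.insert(i, request.pickup)
                stops.insert(j, request.dropoff)
                delta = route_length(veh.position, stops) - base
                if best is None or delta < best[0]:
                    best = (delta, veh, stops)
    return (best[1], best[2]) if best else None


if __name__ == "__main__":
    # Example: assign one new request to the cheaper of two idle vehicles.
    fleet = [Vehicle(position=(0, 0)), Vehicle(position=(5, 5))]
    req = Request(pickup=(4, 4), dropoff=(6, 6))
    veh, plan = best_insertion(fleet, req)
    print("Chosen vehicle at", veh.position, "new plan:", plan)
```

The enumeration over all vehicles and insertion positions is exactly what the paper's refined vehicle selection heuristic aims to prune: by pre-filtering candidate vehicles with its three rules, far fewer insertion checks are evaluated per request.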

Citations (22)
