i-Rebalance: Personalized Vehicle Repositioning for Supply Demand Balance (2401.04429v2)

Published 9 Jan 2024 in cs.AI and cs.MA

Abstract: Ride-hailing platforms have been facing the challenge of balancing demand and supply. Existing vehicle reposition techniques often treat drivers as homogeneous agents and relocate them deterministically, assuming full compliance with the reposition recommendation. In this paper, we consider a more realistic and driver-centric scenario where drivers have unique cruising preferences and can decide on their own whether to accept the recommendation. We propose i-Rebalance, a personalized vehicle reposition technique with deep reinforcement learning (DRL). i-Rebalance estimates drivers' decisions on accepting reposition recommendations through an on-field user study involving 99 real drivers. To optimize supply-demand balance and enhance preference satisfaction simultaneously, i-Rebalance adopts a sequential reposition strategy with dual DRL agents: a Grid Agent that determines the reposition order of idle vehicles, and a Vehicle Agent that provides personalized recommendations to each vehicle in that order. This sequential learning strategy enables more effective policy training within a smaller action space than traditional joint-action methods. Evaluation on real-world trajectory data shows that i-Rebalance improves the driver acceptance rate by 38.07% and total driver income by 9.97%.
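
To make the sequential dual-agent design concrete, the sketch below illustrates one repositioning round as described in the abstract: a Grid Agent orders the idle vehicles, a Vehicle Agent then recommends a destination to each vehicle in turn, and each driver stochastically accepts or declines according to their cruising preference. This is a minimal illustration under assumed interfaces; the class names, placeholder policies, and the acceptance probabilities are not taken from the paper.

```python
# Minimal sketch of the sequential dual-agent repositioning loop (illustrative only).
# The learned policies and the acceptance model are replaced by simple placeholders.
import random
from dataclasses import dataclass, field


@dataclass
class Vehicle:
    vid: int
    grid: int                                            # current grid cell
    preferred_grids: set = field(default_factory=set)    # driver's cruising preference


class GridAgent:
    """Decides the order in which idle vehicles are repositioned."""
    def reposition_order(self, idle_vehicles):
        # A learned agent would rank vehicles from the grid-level supply-demand
        # state; here we simply shuffle as a stand-in policy.
        order = list(idle_vehicles)
        random.shuffle(order)
        return order


class VehicleAgent:
    """Recommends a destination grid for one vehicle at a time."""
    def recommend(self, vehicle, demand_by_grid):
        # A learned agent would trade off demand gaps against the driver's
        # preference; here we just pick the most under-supplied grid.
        return max(demand_by_grid, key=demand_by_grid.get)


def acceptance_probability(vehicle, target_grid):
    # Stand-in for the acceptance model estimated from the on-field user study:
    # assumed higher when the recommendation matches the driver's preference.
    return 0.8 if target_grid in vehicle.preferred_grids else 0.3


def rebalance_step(idle_vehicles, demand_by_grid,
                   grid_agent=GridAgent(), vehicle_agent=VehicleAgent()):
    """One sequential round: order vehicles, then recommend to them one by one."""
    repositioned = []
    for vehicle in grid_agent.reposition_order(idle_vehicles):
        target = vehicle_agent.recommend(vehicle, demand_by_grid)
        if random.random() < acceptance_probability(vehicle, target):
            vehicle.grid = target           # driver accepts and cruises there
            demand_by_grid[target] -= 1     # that grid's supply gap shrinks
            repositioned.append((vehicle.vid, target))
        # If the driver declines, the vehicle keeps cruising by its own preference.
    return repositioned


if __name__ == "__main__":
    fleet = [Vehicle(vid=i, grid=0, preferred_grids={i % 3}) for i in range(5)]
    demand = {0: 1, 1: 4, 2: 2}             # unmet requests per grid cell
    print(rebalance_step(fleet, demand))
```

Because each vehicle is handled in sequence, every decision conditions on the outcomes of earlier recommendations in the same round, which is what keeps the per-step action space small compared with choosing a joint action for all idle vehicles at once.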
