
Learning-based Incentive Mechanism for Task Freshness-aware Vehicular Twin Migration (2309.04929v1)

Published 10 Sep 2023 in cs.GT

Abstract: Vehicular metaverses are an emerging paradigm that integrates extended reality technologies and real-time sensing data to bridge physical and digital spaces for intelligent transportation, providing immersive experiences for Vehicular Metaverse Users (VMUs). VMUs access the vehicular metaverse by continuously updating Vehicular Twins (VTs) deployed on nearby RoadSide Units (RSUs). Due to the limited coverage of each RSU, VTs need to be continuously migrated online between RSUs to ensure seamless immersion and interaction for VMUs despite vehicle mobility. However, the VT migration process requires sufficient bandwidth resources from RSUs to enable online and fast migration, leading to a resource trading problem between RSUs and VMUs. To this end, we propose a learning-based incentive mechanism for migration task freshness-aware VT migration in vehicular metaverses. To quantify the freshness of the VT migration task, we first propose a new metric named Age of Twin Migration (AoTM), which measures the time elapsed in completing the VT migration task. Then, we propose an AoTM-based Stackelberg model, where RSUs act as the leader and VMUs act as followers. Due to incomplete information between RSUs and VMUs caused by privacy and security concerns, we utilize deep reinforcement learning to learn the equilibrium of the Stackelberg game. Numerical results demonstrate the effectiveness of our proposed learning-based incentive mechanism for vehicular metaverses.
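The leader-follower interaction in the abstract can be illustrated with a toy sketch. The code below is a hypothetical, simplified model (not the paper's actual formulation): AoTM is taken to be migration data size divided by purchased bandwidth, each VMU (follower) maximizes a concave log-utility over AoTM minus its bandwidth payment, and the RSU (leader) searches over a price grid for the revenue-maximizing price — a stand-in for the deep reinforcement learning the paper uses under incomplete information. All parameter names and values (`alpha`, `data_size`, the price grid) are illustrative assumptions.

```python
import numpy as np

def follower_best_response(price, alpha, data_size):
    """VMU best response (hypothetical utility model).

    The VMU buys bandwidth b to maximize
        u(b) = alpha * ln(1 + b / data_size) - price * b,
    where AoTM = data_size / b (smaller AoTM = fresher migration).
    Setting u'(b) = alpha / (data_size + b) - price = 0 gives the
    closed-form optimum b* = alpha / price - data_size, clipped at 0.
    """
    return max(alpha / price - data_size, 0.0)

def leader_revenue(price, alphas, data_sizes):
    """RSU revenue: price times total bandwidth demanded by all VMUs."""
    demand = sum(follower_best_response(price, a, s)
                 for a, s in zip(alphas, data_sizes))
    return price * demand

# Illustrative parameters for three VMUs (assumed, not from the paper).
alphas = [4.0, 6.0, 5.0]        # freshness valuation per VMU
data_sizes = [1.0, 1.5, 1.2]    # VT migration task size per VMU

# Leader's side: grid search over prices as a simple proxy for
# learning the Stackelberg equilibrium price.
prices = np.linspace(0.1, 4.0, 400)
revenues = [leader_revenue(p, alphas, data_sizes) for p in prices]
best_price = float(prices[int(np.argmax(revenues))])
```

At `best_price`, each follower's demand is its closed-form best response, so the pair (price, demands) approximates a Stackelberg equilibrium of this toy game; the paper instead learns the equilibrium with deep reinforcement learning because neither side observes the other's private parameters.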

Authors (7)
  1. Junhong Zhang (5 papers)
  2. Jiangtian Nie (22 papers)
  3. Jinbo Wen (27 papers)
  4. Jiawen Kang (204 papers)
  5. Minrui Xu (57 papers)
  6. Xiaofeng Luo (64 papers)
  7. Dusit Niyato (672 papers)
Citations (7)
