Incentivizing Proof-of-Stake Blockchain for Secured Data Collection in UAV-Assisted IoT: A Multi-Agent Reinforcement Learning Approach (2207.02705v1)

Published 6 Jul 2022 in cs.NI, cs.IT, and math.IT

Abstract: The Internet of Things (IoT) can be conveniently deployed to empower various applications, with IoT nodes forming clusters to accomplish certain missions collectively. In this paper, we propose to employ unmanned aerial vehicles (UAVs) to assist the clustered IoT data collection with blockchain-based security provisioning. In particular, the UAVs generate candidate blocks based on the collected data, which are then audited through a lightweight proof-of-stake consensus mechanism within the UAV-based blockchain network. To incentivize efficient blockchain operation while reducing the operational cost, a stake pool is constructed at the active UAV, which encourages stake investment from other UAVs through profit sharing. The problem is formulated to maximize the overall profit of the blockchain system per unit time by jointly optimizing the IoT transmission, the incentives for investment and profit sharing, and the UAV deployment strategies. The problem is then decoupled into two layers and solved in a distributed manner. The inner layer incorporates IoT transmission and incentive design, which are tackled with large-system approximation and one-leader-multi-follower Stackelberg game analysis, respectively. The outer layer for UAV deployment is handled with a multi-agent deep deterministic policy gradient approach. Results show the convergence of the proposed learning process and the UAV deployment, and demonstrate the performance superiority of our proposal over the baselines.
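
The inner-layer incentive design described above follows a one-leader-multi-follower Stackelberg structure: the active UAV (leader) announces a profit-sharing ratio for its stake pool, and the other UAVs (followers) respond with their stake investments. The sketch below only illustrates that structure numerically; the pool-selection probability, the linear staking costs, and all constants are simplified assumptions for illustration, not the paper's actual utility model.

```python
"""Minimal Stackelberg stake-pool sketch (illustrative assumptions only)."""
import numpy as np

BLOCK_REWARD = 100.0                     # assumed reward per appended block
COMPETING_STAKE = 40.0                   # assumed stake held outside the pool
LEADER_STAKE = 10.0                      # assumed stake of the active (leader) UAV
FOLLOWER_COSTS = [0.6, 0.8, 1.0, 1.2]    # assumed per-unit staking cost of each follower UAV
MAX_STAKE = 50.0                         # assumed cap on a follower's stake

def pool_win_prob(total_stake):
    # Proof-of-stake assumption: the pool is selected with probability
    # proportional to its total stake.
    return total_stake / (total_stake + COMPETING_STAKE)

def follower_best_responses(share_ratio, iters=100):
    """Iterated best response: each follower picks the stake that maximizes
    its own expected profit, given the leader's sharing ratio and the
    current stakes of the other followers."""
    stakes = np.ones(len(FOLLOWER_COSTS))
    grid = np.linspace(0.0, MAX_STAKE, 501)
    for _ in range(iters):
        for i, cost in enumerate(FOLLOWER_COSTS):
            others = LEADER_STAKE + stakes.sum() - stakes[i]
            total = others + grid
            # Follower i's expected share of the shared pool reward minus its staking cost.
            payoff = share_ratio * BLOCK_REWARD * pool_win_prob(total) * grid / total - cost * grid
            stakes[i] = grid[np.argmax(payoff)]
    return stakes

def leader_profit(share_ratio):
    stakes = follower_best_responses(share_ratio)
    total = LEADER_STAKE + stakes.sum()
    # The leader keeps the unshared fraction of the expected pool reward.
    return (1.0 - share_ratio) * BLOCK_REWARD * pool_win_prob(total)

# Stackelberg (leader) move: announce the sharing ratio that maximizes the
# leader's profit, anticipating the followers' equilibrium stake responses.
ratios = np.linspace(0.05, 0.95, 19)
best = max(ratios, key=leader_profit)
print(f"best sharing ratio ~ {best:.2f}, leader profit ~ {leader_profit(best):.1f}")
```

The outer layer of the paper then treats the UAV deployment on top of this inner-layer equilibrium with a multi-agent deep deterministic policy gradient method, which is not reproduced here.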

Citations (17)
