
Knowledge Distillation for Mobile Edge Computation Offloading (2004.04366v1)

Published 9 Apr 2020 in cs.NI, cs.AI, and cs.DC

Abstract: Edge computation offloading allows mobile end devices to offload the execution of compute-intensive tasks to edge servers. End devices can decide, in an online manner, whether to offload tasks to edge servers or cloud servers, or to execute them locally, according to current network conditions and device profiles. In this article, we propose an edge computation offloading framework based on Deep Imitation Learning (DIL) and Knowledge Distillation (KD) that assists end devices in quickly making fine-grained decisions to optimize the delay of computation tasks online. We formalize the computation offloading problem as a multi-label classification problem. Training samples for our DIL model are generated offline. After the model is trained, we apply knowledge distillation to obtain a lightweight DIL model, which further reduces the model's inference delay. Numerical experiments show that the offloading decisions made by our model outperform those made by other related policies in terms of latency. Our model also has the shortest inference delay among all policies.
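The abstract describes a two-stage pipeline: train a multi-label DIL classifier on expert offloading decisions, then distill it into a lightweight student model for low-latency online inference. The paper's own code is not reproduced here; the snippet below is a minimal PyTorch sketch of that distillation step under assumed details. The feature and subtask dimensions, network sizes, temperature, and loss weighting are illustrative choices, not the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions: device/network features in, one logit per offloadable subtask.
NUM_FEATURES, NUM_SUBTASKS = 16, 8

def make_mlp(hidden):
    # Multi-label head: an independent logit per subtask (execute locally vs. offload).
    return nn.Sequential(
        nn.Linear(NUM_FEATURES, hidden), nn.ReLU(),
        nn.Linear(hidden, NUM_SUBTASKS),
    )

teacher = make_mlp(hidden=256)   # full DIL model, trained offline on expert decisions
student = make_mlp(hidden=32)    # lightweight model intended for fast online inference

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    """Blend hard-label BCE with BCE against the teacher's temperature-scaled
    probabilities (a multi-label variant of knowledge distillation)."""
    soft_targets = torch.sigmoid(teacher_logits / temperature)
    soft_loss = F.binary_cross_entropy_with_logits(student_logits / temperature, soft_targets)
    hard_loss = F.binary_cross_entropy_with_logits(student_logits, hard_labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# One illustrative training step on synthetic data.
x = torch.randn(64, NUM_FEATURES)                     # device profile + network state
y = torch.randint(0, 2, (64, NUM_SUBTASKS)).float()   # expert offloading decisions
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At deployment time only the small student runs on (or near) the end device, which is what yields the shorter inference delay the abstract reports.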

Authors (4)
  1. Haowei Chen (14 papers)
  2. Liekang Zeng (21 papers)
  3. Shuai Yu (22 papers)
  4. Xu Chen (413 papers)
Citations (5)
