To Offload or Not To Offload: Model-driven Comparison of Edge-native and On-device Processing (2504.15162v2)

Published 21 Apr 2025 in cs.DC

Abstract: Computational offloading is a promising approach for overcoming resource constraints on client devices by moving some or all of an application's computations to remote servers. With the advent of specialized hardware accelerators, client devices are now able to perform fast local processing of specific tasks, such as machine learning inference, reducing the need for offloading computations. However, edge servers with accelerators also offer faster processing for offloaded tasks than was previously possible. In this paper, we present an analytic and experimental comparison of on-device processing and edge offloading for a range of accelerator, network, and application workload scenarios, with the goal of understanding when to use local on-device processing and when to offload computations. We present models that leverage analytical queuing results to capture the effects of dynamic factors such as the performance gap between the device and edge server, network variability, server load, and multi-tenancy on the edge server. We experimentally demonstrate the accuracy of our models for a range of hardware and application scenarios and show that our models achieve a mean absolute percentage error of 2.2% compared to observed latencies. We use our models to develop an adaptive resource manager for intelligent offloading and show its efficacy in the presence of variable network conditions and dynamic multi-tenant edge settings.
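To make the offload-versus-local trade-off concrete, the sketch below encodes the kind of comparison the abstract describes. It is a minimal illustration, not the paper's actual model: it assumes an M/M/1 approximation for the (possibly multi-tenant) edge server, treats on-device inference as a fixed service time, and models network cost as RTT plus serialized transfer time. All function names and parameter values are hypothetical.

```python
# Illustrative sketch (not the paper's model): decide between on-device
# processing and edge offloading by comparing expected latencies.
# Assumed model: edge server as an M/M/1 queue shared with other tenants,
# local inference as a fixed service time, network cost as RTT plus
# serialized transfer time.

def edge_latency(rtt_s: float, payload_bytes: int, bandwidth_bps: float,
                 edge_service_rate: float, edge_arrival_rate: float) -> float:
    """Expected end-to-end latency (seconds) for offloading one request.

    M/M/1 mean response time is 1 / (mu - lambda) for lambda < mu; the
    aggregate arrival rate stands in for multi-tenant load on the edge.
    """
    if edge_arrival_rate >= edge_service_rate:
        return float("inf")  # edge queue is unstable: never offload
    transfer = payload_bytes * 8 / bandwidth_bps
    queueing_and_service = 1.0 / (edge_service_rate - edge_arrival_rate)
    return rtt_s + transfer + queueing_and_service


def should_offload(local_service_time: float, **edge_kwargs) -> bool:
    """Offload iff the modeled edge latency beats on-device latency."""
    return edge_latency(**edge_kwargs) < local_service_time


# Example: a 200 KB inference request over a 50 Mbit/s link with 10 ms RTT,
# against an edge accelerator serving 40 req/s under 25 req/s of load.
if __name__ == "__main__":
    offload = should_offload(
        local_service_time=0.150,   # 150 ms on the device's own accelerator
        rtt_s=0.010,
        payload_bytes=200_000,
        bandwidth_bps=50e6,
        edge_service_rate=40.0,
        edge_arrival_rate=25.0,
    )
    print("offload" if offload else "stay local")
```

In this hypothetical setting the edge wins (roughly 10 ms RTT + 32 ms transfer + 67 ms queueing and service, about 109 ms, versus 150 ms locally), but raising the edge arrival rate toward the service rate or shrinking the bandwidth quickly flips the decision, which is the dynamic, load- and network-dependent behavior the paper's adaptive resource manager is built around.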
