
Double-Edge-Assisted Computation Offloading

Updated 10 December 2025
  • Double-edge-assisted computation offloading is a paradigm that partitions computational tasks between a nearby edge server (e.g., a UAV) and a remote, resource-rich server (e.g., a satellite) to optimize energy consumption and delay.
  • The architecture integrates hierarchical, multi-tiered setups using layered optimization, such as alternating optimization and reinforcement learning, to adapt to dynamic channel and resource conditions.
  • Performance evaluations indicate a potential 25–35% energy saving and improved responsiveness by dynamically adjusting offloading ratios under varying workload and channel conditions.

Double-edge-assisted computation offloading refers to a paradigm in distributed mobile edge computing architectures wherein computational workloads generated by resource-constrained terminal devices are concurrently and adaptively partitioned between two classes of edge servers. Typical realizations involve one edge server positioned closer to the mobile user (e.g., a UAV-based edge node or an in-network computing node), and a more remote but more resource-rich edge platform (e.g., a satellite-borne server or a metropolitan-level MEC facility). The goal is to jointly optimize workload division, offloading modes, and the allocation of communication and computational resources to minimize network-wide energy consumption or end-to-end service latency under dynamic channel and resource constraints, as exemplified by recent frameworks for space–air–marine integrated networks (SAMINs) and COIN-assisted MEC systems (Wang et al., 3 Dec 2025, Aliyu et al., 8 Apr 2024).

1. Double-Edge Offloading Architectures

Double-edge-assisted offloading architectures are characterized by a hierarchical, multi-tiered structure. In the SAMIN context, the system comprises maritime autonomous surface ships (MASSs), multiple UAVs each acting as an edge computing gateway (first edge), and a low earth orbit (LEO) satellite equipped with a high-capacity edge server (second edge). MASSs can split computational workloads and offload portions in parallel to both their serving UAV and the satellite through orthogonal frequency division multiple access (OFDMA) links (C-band uplinks to UAVs, Ka-band to the satellite). The process is time-slotted, with MASSs issuing offloading requests, UAVs reporting status, and offloading/resource allocation decisions centrally coordinated by the satellite (Wang et al., 3 Dec 2025).

In COIN-assisted MEC, a two-tier structure consists of user equipment (UEs), in-network COIN nodes (first edge), and a remote MEC server (second edge). A digital twin (DT) layer synchronizes real-time system state, enabling dynamic decision-making by replicating both UE and MEC resource status, including processing rates and deviations (Aliyu et al., 8 Apr 2024).

2. System Model and Key Parameters

The double-edge model defines several critical resource-allocation and workload-partitioning variables:

  • For each user/ship $M_{mn}$: $S_{mn}$ is the total input data in bits, $s_{mn}$ is the offloaded portion, and $a_{mn}\in[0,1]$ is the fraction sent to the first edge (UAV/COIN node), with $(1-a_{mn})s_{mn}$ sent to the second edge (satellite/MEC).
  • Computing resources: $\rho_{mn}^l$, $\rho_{mn}^U$, $\rho_{mn}^L$ denote CPU cycles/s allocated locally, at the UAV/COIN node, and at the satellite/MEC, respectively.
  • Channel parameters include link bandwidths ($W^U$, $W^L$), channel gains ($g^U_{mn}$ for UAV, $h^L_{mn}$ for satellite), and noise power ($\sigma^2$, $N_0$).
  • Energy and delay metrics are composed of communication and computation components: transmission rates follow Shannon’s formula, and delays/energies are modeled as functions of task partitioning and resource allocation (Wang et al., 3 Dec 2025, Aliyu et al., 8 Apr 2024).

A typical design also incorporates time-varying channel realizations, device mobility, and explicit hard constraints on processing delay ($T_{mn}^{\max}$), coverage duration, and per-server energy budgets.
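As a concrete illustration of these quantities, the sketch below evaluates the delay and energy of one offloaded branch under a simplified model. The rate follows Shannon's formula as stated above, but the CPU-energy coefficient `kappa` and all numeric parameters are illustrative assumptions, not values from the cited papers.

```python
import math

def shannon_rate(bandwidth_hz, tx_power_w, channel_gain, noise_power_w):
    """Achievable uplink rate (bits/s) via Shannon's formula."""
    return bandwidth_hz * math.log2(1.0 + tx_power_w * channel_gain / noise_power_w)

def branch_delay_energy(bits, rate_bps, cycles_per_bit, cpu_hz,
                        tx_power_w, kappa=1e-28):
    """Delay and energy of one offloading branch: upload, then remote compute.
    kappa is an assumed effective-capacitance coefficient for CPU energy."""
    t_tx = bits / rate_bps                                # transmission time
    t_cmp = bits * cycles_per_bit / cpu_hz                # remote processing time
    e_tx = tx_power_w * t_tx                              # radio energy
    e_cmp = kappa * bits * cycles_per_bit * cpu_hz ** 2   # CPU energy model
    return t_tx + t_cmp, e_tx + e_cmp

# Split S bits: fraction a to the UAV (first edge), the rest to the satellite.
S, a = 2e6, 0.6
r_uav = shannon_rate(10e6, 0.5, 1e-6, 1e-10)   # C-band-like link (toy numbers)
r_sat = shannon_rate(20e6, 2.0, 1e-8, 1e-10)   # Ka-band-like link (toy numbers)
t_u, e_u = branch_delay_energy(a * S, r_uav, 500, 2e9, 0.5)
t_l, e_l = branch_delay_energy((1 - a) * S, r_sat, 500, 10e9, 2.0)
total_delay = max(t_u, t_l)      # the two branches proceed in parallel
total_energy = e_u + e_l
```

Because the branches run in parallel, end-to-end delay is the slower branch, while energies add, which is exactly the trade-off the partitioning variables control.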

3. Optimization Problem Formulation

The principal objective is to minimize total system energy consumption by optimally splitting each user's task and configuring resource assignments, subject to delay and capacity constraints. The primary optimization variables are $\{a_{mn}, s_{mn}, \rho^U_{mn}, \rho^L_{mn}\}$ for all users/UEs. The overall cost function takes the form

$$\min_{\{a_{mn},\, s_{mn},\, \rho^U_{mn},\, \rho^L_{mn}\}} \quad E^{\text{tot}} = \sum_{m,n} E^{\text{tot}}_{mn}$$

where detailed energy accounting sums per-user computation and communication energy across all tiers. Constraints include:

  • $0 \leq a_{mn} \leq 1$, $0 \leq s_{mn} \leq S_{mn}$ (partitioning validity).
  • End-to-end delay $T^{\mathrm{tot}}_{mn} \leq T^{\max}_{mn}$, with the processing time of each offloaded branch fitting within the available coverage/visibility intervals.
  • Cumulative UAV/COIN and satellite/MEC CPU allocations no greater than their respective capacities.
  • Power and energy budgets at each node.

The resulting problem is non-convex due to couplings in the exponential rate functions and nonlinear allocation terms, as well as the mixed integer-continuous nature of the offloading decisions (Wang et al., 3 Dec 2025).
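To make the formulation concrete, a toy single-user instance can be solved by brute force: fix the resource allocations, sweep the split ratio $a$, and keep the feasible point of least energy. The energy/delay models and all parameter names here are simplified assumptions, not the models of the cited papers.

```python
def energy(a, s, rate_u, rate_l, p_u, p_l):
    """Transmission energy of the two parallel branches (compute energy omitted)."""
    return p_u * a * s / rate_u + p_l * (1 - a) * s / rate_l

def delay(a, s, rate_u, rate_l, cpb, rho_u, rho_l):
    """End-to-end delay: the slower of the two parallel offloading branches."""
    t_u = a * s / rate_u + a * s * cpb / rho_u
    t_l = (1 - a) * s / rate_l + (1 - a) * s * cpb / rho_l
    return max(t_u, t_l)

def best_split(S, T_max, rate_u, rate_l, p_u, p_l, cpb, rho_u, rho_l, grid=101):
    """Sweep a in [0, 1]; return (min energy, a*) or None if infeasible."""
    best = None
    for i in range(grid):
        a = i / (grid - 1)
        if delay(a, S, rate_u, rate_l, cpb, rho_u, rho_l) <= T_max:
            e = energy(a, S, rate_u, rate_l, p_u, p_l)
            if best is None or e < best[0]:
                best = (e, a)
    return best
```

With a cheap UAV link the sweep pushes $a$ toward 1 until the delay bound binds, which mirrors how the constraints above shape the optimum.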

4. Solution Methodologies

A layered alternating optimization (AO) strategy is employed to achieve tractable global optimization:

  • Layer 1: For fixed computation resource allocations, the problem reduces to optimizing offloading ratios $a_{mn}$ and workload volumes $s_{mn}$. Exploiting their convexity (per fixed resource assignment), each scalar variable is solved by a multi-round iterative search (MRIS), bisection, or closed-form rate conditions.
  • Layer 2: For fixed $a_{mn}$, $s_{mn}$, each edge server's CPU-resource allocation subproblem admits a closed-form solution from KKT conditions, such as

$$\rho_{mn}^{U*} = \frac{a_{mn} s_{mn} c_m}{T_{mn}^{\max} - t^{U}_{mn}}, \qquad \rho_{mn}^{L*} = \frac{(1-a_{mn}) s_{mn} c^L}{T_{mn}^{\max} - t^{L}_{mn}}$$

The AO procedure iterates between Layer 1 and Layer 2 until convergence is achieved in both workload partition and resource allocation.
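The two-layer loop can be sketched as follows; a 1-D grid search stands in for MRIS/bisection in Layer 1, and the energy model and all coefficients (e.g., `kappa`) are simplified assumptions rather than the papers' exact formulations.

```python
def layer2_cpu_alloc(frac, s, cycles_per_bit, T_max, t_comm):
    """Layer 2 closed form (KKT-style): just enough CPU to meet the deadline."""
    return frac * s * cycles_per_bit / (T_max - t_comm)

def alternating_opt(S, T_max, c_u, c_l, rate_u, rate_l,
                    p_u=0.5, p_l=2.0, kappa=1e-30, iters=50, grid=201):
    """Alternate Layer 1 (split search) and Layer 2 (closed-form CPU rates)."""
    a = 0.5
    for _ in range(iters):
        # Layer 2: minimal CPU rates meeting the deadline for the current split
        rho_u = layer2_cpu_alloc(a, S, c_u, T_max, a * S / rate_u)
        rho_l = layer2_cpu_alloc(1 - a, S, c_l, T_max, (1 - a) * S / rate_l)

        # Layer 1: 1-D search over the split ratio for the fixed CPU rates
        def total_energy(x):
            e_tx = p_u * x * S / rate_u + p_l * (1 - x) * S / rate_l
            e_cmp = kappa * (rho_u ** 2 * x * S * c_u
                             + rho_l ** 2 * (1 - x) * S * c_l)
            return e_tx + e_cmp

        a_new = min((i / (grid - 1) for i in range(grid)), key=total_energy)
        if abs(a_new - a) < 1e-6:
            break                      # split and allocation have stabilized
        a = a_new
    return a, rho_u, rho_l
```

Each pass re-tightens the CPU allocation to the current split and then re-optimizes the split against that allocation, the same fixed-point structure as the AO procedure described above.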

In distributed COIN/MEC settings, game-theoretic offloading is modeled as an exact potential game—ensuring Nash equilibrium exists and can be achieved with finite improvement paths. The optimal offloading/resource ratio assignment (ORRA) is further refined using double deep Q-network (DDQN) reinforcement learning, leveraging a Markov decision process framework driven by digital twin state estimates. The reward function considers latency gains and resource cost per cycle, and the DDQN architecture provides proactive prediction of optimal offloading strategies (Wang et al., 3 Dec 2025, Aliyu et al., 8 Apr 2024).
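The exact-potential-game claim can be illustrated with a toy congestion game: each UE picks local execution, the COIN node, or the MEC server, and shared servers cost more as they get crowded. Sequential best responses form a finite improvement path and terminate at a Nash equilibrium. The cost weights are illustrative assumptions, and the DDQN refinement layer is omitted here.

```python
LOCAL, COIN, MEC = 0, 1, 2
WEIGHTS = {LOCAL: 5.0, COIN: 1.0, MEC: 2.0}   # assumed per-unit costs

def player_cost(choice, counts):
    """Local execution has fixed cost; shared servers have linear congestion cost."""
    if choice == LOCAL:
        return WEIGHTS[LOCAL]
    return WEIGHTS[choice] * counts[choice]

def best_response_dynamics(n_players, max_rounds=100):
    """Sequential best responses in a congestion game (an exact potential game):
    every improving switch strictly decreases the potential, so the path is finite."""
    choices = [LOCAL] * n_players
    for _ in range(max_rounds):
        improved = False
        for i in range(n_players):
            counts = {s: choices.count(s) for s in (LOCAL, COIN, MEC)}
            counts[choices[i]] -= 1                        # exclude player i

            def cost_if(s):
                return player_cost(s, {**counts, s: counts[s] + 1})

            best = min((LOCAL, COIN, MEC), key=cost_if)
            if cost_if(best) < cost_if(choices[i]):
                choices[i] = best
                improved = True
        if not improved:
            return choices                                 # Nash equilibrium
    return choices
```

At the resulting equilibrium no player can lower its cost by unilaterally switching servers, which is the guarantee the potential-game formulation provides before the DDQN-based refinement.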

5. Performance Evaluation

Evaluations demonstrate that the AO-based double-edge scheme effectively minimizes total energy consumption under various network configurations.

| Scheme | Energy savings (vs. EOS) | Convergence speed | Notes |
| --- | --- | --- | --- |
| Proposed AO | 25–35% | ~5–10 iterations | Across varying $S_{mn}$, $N$, $\rho^l$ |
| POMT (binary) | 15–25% | Fast | All-or-none offloading |
| EOS | baseline | Moderate | Equal task split |
| EACR | 15–25% | | Equal CPU allocation |

Comparative simulations for MASS-UAV-satellite networks illustrate that a longer allowable transmission time enables lower energy at the expense of higher latency. For large tasks, the offloading ratio $a_{mn}$ shifts toward the satellite, whereas small and latency-critical tasks benefit from predominantly UAV offloading. There exists an optimal pair $(a_{mn}^*, s_{mn}^*)$ that minimizes energy for given channels and delay bounds (Wang et al., 3 Dec 2025).

In COIN/MEC architectures, the DDQN-EPG algorithm outperforms random and standard MEC schemes by 20–64% with respect to average system utility, depending on the number of UEs, CNs, and task characteristics (Aliyu et al., 8 Apr 2024).

6. Main Insights and Design Implications

Analyses indicate that double-edge offloading robustly adapts to channel fluctuations, device mobility, workload heterogeneity, and stringent latency requirements:

  • UAVs or COIN nodes are favored for low-latency, lightweight tasks or under stringent delay bounds, increasing system responsiveness.
  • Satellites or higher-tier MEC nodes provide surplus computation for large workloads or when the proximate edge tier is saturated.
  • Task partitioning and offloading ratios should dynamically adapt based on instantaneous channel gains ($g^U_{mn}$, $h^L_{mn}$), server visibility windows, and resource loads.
  • Joint optimization of workload split and resource allocation balances communication and computation energy expenditure, providing energy-delay trade-offs.
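The bullets above can be condensed into a toy routing heuristic; the thresholds, the binary UAV/satellite choice, and the function name are illustrative assumptions rather than anything prescribed by the cited works.

```python
def choose_primary_edge(task_bits, deadline_s, g_uav, h_sat,
                        small_task_bits=1e5, tight_deadline_s=0.1):
    """Pick the primary offloading target from task size, deadline, and channels."""
    if task_bits <= small_task_bits or deadline_s <= tight_deadline_s:
        return "uav"          # lightweight or latency-critical: stay near
    if task_bits >= 10 * small_task_bits:
        return "satellite"    # large workload: use the resource-rich tier
    # Otherwise follow the instantaneously stronger channel.
    return "uav" if g_uav >= h_sat else "satellite"
```

A full system would of course soften these hard thresholds into the joint split-and-allocate optimization described in Sections 3 and 4.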

Design guidelines suggest scaling UAV/COIN CPU capacity to typical downstream load, reserving satellite/MEC capacity as offloading overflow. Rapid AO or DDQN-driven updates are essential to track mobility and time-varying wireless conditions, particularly in 6G-era integrated networks (Wang et al., 3 Dec 2025, Aliyu et al., 8 Apr 2024).

7. Practical Applications and Outlook

Double-edge-assisted computation offloading has prominent applicability in maritime, aerial, and industrial IoT scenarios, especially where ultra-low latency and energy efficiency are required under sporadic connectivity and heterogeneous processing needs. The methodology provides a template for hybrid edge-cloud systems that exploit spatial and resource diversity in next-generation networks. A plausible implication is that as 6G deployments progress, such adaptive double-edge orchestration will be critical for scalable, resilient, and green digital infrastructure.
