SAMINs: Integrating Space, Air, & Marine Networks
- SAMINs are integrated multi-tier networks that combine space, aerial, and marine nodes to enable efficient, double-edge computational offloading.
- They partition workloads between proximal resources like UAVs or COIN nodes and remote servers such as LEO satellites or MEC servers, optimizing latency and energy use.
- Advanced methods, including alternating optimization and DDQN-based reinforcement learning, achieve up to 35% energy savings and significant delay reductions.
Double-edge-assisted computation offloading refers to a framework in which computational workloads from edge devices such as maritime autonomous surface ships (MASSs) or user equipment (UEs) are partitioned and offloaded, concurrently or partially, to two distinct edge resources: typically a proximate aerial or terrestrial node (e.g., UAV or COIN node) and a remote or high-capacity edge server (e.g., LEO satellite or MEC server). This paradigm targets joint minimization of energy consumption and latency through optimal offloading mode selection, volume partitioning, and resource allocation under realistic network and device constraints, leveraging multi-access communications and advanced distributed optimization techniques (Wang et al., 3 Dec 2025, Aliyu et al., 8 Apr 2024).
1. Network Architectures and Modeling Paradigms
Double-edge-assisted offloading has been proposed in heterogeneous architectures with strong multi-tier characteristics. In Space-Air-Marine Integrated Networks (SAMINs), groups of MASSs are served by UAVs, with every UAV and a LEO satellite equipped with edge servers. Each MASS can split its computation task, offloading portions simultaneously to both the serving UAV and the LEO satellite via OFDMA, under slotted-time operation. Control signaling involves MASSs sending offloading requests, UAVs updating their resource status to the satellite, and the satellite broadcasting centralized offloading/resource-allocation assignments (Wang et al., 3 Dec 2025). Similarly, in C-MEC systems supporting Industrial IoT, UEs connect to both in-network computing nodes (COIN nodes, CNs) and a central MEC server over URLLC links. Digital Twin (DT) layers maintain real-time replicas of device and resource states, enabling accurate latency evaluation and future resource prediction (Aliyu et al., 8 Apr 2024).
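The tiered structure can be pictured as a simple containment hierarchy. The Python sketch below is illustration only; the class and field names are assumptions for exposition, not the papers' notation.

```python
# Illustrative-only sketch of the multi-tier SAMIN topology described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EdgeServer:
    cpu_hz: float          # available CPU frequency of the co-located edge server

@dataclass
class MASS:                # maritime autonomous surface ship (edge device)
    task_bits: float       # input size of the current task
    cycles_per_bit: float  # CPU cycles required per input bit
    local_cpu_hz: float    # on-board CPU frequency

@dataclass
class UAV:
    server: EdgeServer
    served_mass: List[MASS] = field(default_factory=list)

@dataclass
class LEOSatellite:
    server: EdgeServer
    uavs: List[UAV] = field(default_factory=list)  # UAVs reporting resource status

# Each MASS can split a task between its serving UAV and the LEO satellite,
# with the satellite broadcasting the centralized offloading decisions.
```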
2. Offloading Scheme and Resource Variables
In double-edge scenarios, the offloading split is governed by key variables:
- For each edge device, the total input bits are partitioned so that part of the task is executed locally and the remainder is offloaded; within the offloaded volume, one fraction is sent to the first edge (UAV, COIN node) and the complementary fraction to the second (LEO satellite, MEC server).
- CPU cycles per input bit and per-cycle energy coefficients are device-specific parameters.
- Constraints encompass the CPU capacities of UAVs, satellites, and MEC servers, power and energy budgets, the maximum tolerable delay, and physical coverage limits.
- In C-MEC, the offloading decisions, the offloading ratios to COIN nodes and to the MEC server, and the CPU allocation shares are jointly optimized, subject to resource and assignment constraints (Aliyu et al., 8 Apr 2024); a schematic code sketch of such a partition appears after the table below.
The table below summarizes key offloading variables in double-edge-assisted paradigms:
| System | Edge 1 (Local/Proximal) | Edge 2 (Remote/Central) | Offloading Partition |
|---|---|---|---|
| SAMIN | UAV | LEO Satellite | Offloaded bits split between UAV and satellite (simultaneous OFDMA uplinks) |
| C-MEC / COIN | COIN node (CN) | MEC server (ES) | Partial-offloading ratios to CN and to MEC server |
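As a concrete illustration of these partition variables, the following minimal Python sketch represents a double-edge offloading decision and the bit volumes it implies; the names and the validity check are assumptions, not the papers' notation.

```python
# Minimal sketch (assumed variable names) of a double-edge offloading decision:
# a task of `bits` input bits is split between local execution, a proximal edge
# (UAV / COIN node) and a remote edge (LEO satellite / MEC server).
from dataclasses import dataclass

@dataclass
class OffloadDecision:
    bits: float          # total input bits of the task
    rho_local: float     # fraction executed locally on the device
    rho_edge1: float     # fraction sent to the proximal edge (UAV / CN)
    rho_edge2: float     # fraction sent to the remote edge (LEO / MEC)

    def __post_init__(self):
        s = self.rho_local + self.rho_edge1 + self.rho_edge2
        if abs(s - 1.0) > 1e-9 or min(self.rho_local, self.rho_edge1, self.rho_edge2) < 0:
            raise ValueError("fractions must be non-negative and sum to 1")

    def split_bits(self):
        """Return (local, edge1, edge2) bit volumes implied by the decision."""
        return (self.rho_local * self.bits,
                self.rho_edge1 * self.bits,
                self.rho_edge2 * self.bits)

# Example: offload 70% of a 2 Mbit task, 40% of it to the UAV and 30% to the satellite.
decision = OffloadDecision(bits=2e6, rho_local=0.3, rho_edge1=0.4, rho_edge2=0.3)
print(decision.split_bits())
```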
3. Problem Formulation: Joint Optimization Under Constraints
The goal is to minimize total system energy under latency, resource, and operational constraints. For SAMINs, the objective takes the form of minimizing the sum of per-device energies, $\min \sum_m E_m$, over the offloading and resource-allocation variables, where $E_m$ aggregates device $m$'s local and offloading communication/computation energy, constrained by delay, CPU, distance, power, and energy budgets. The main equations involve (a numerical sketch follows this list):
- Link rates obtained from Shannon's formula, $R = B \log_2(1 + \mathrm{SNR})$, for the MASS-UAV and MASS-satellite links.
- Transmission time and communication energy for each offloading path.
- Computation delay and energy for local, UAV, and satellite execution.
- End-to-end delay and total per-device energy $E_m$.
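The following minimal numerical sketch shows how such per-path terms combine, under common simplifying assumptions (Shannon-capacity links, a standard dynamic-power computation energy model, parallel local/offloaded branches). All symbols and values are illustrative stand-ins, not the cited models.

```python
# Illustrative per-path delay/energy terms for one device with one offloading path.
import math

def shannon_rate(bandwidth_hz, tx_power_w, channel_gain, noise_w):
    """Achievable rate R = B * log2(1 + P*g / N)."""
    return bandwidth_hz * math.log2(1 + tx_power_w * channel_gain / noise_w)

def tx_delay_energy(bits, rate_bps, tx_power_w):
    """Transmission time and communication energy for one offloading path."""
    t = bits / rate_bps
    return t, tx_power_w * t

def comp_delay_energy(bits, cycles_per_bit, cpu_hz, kappa):
    """Computation delay and energy (kappa: effective switched-capacitance coefficient)."""
    cycles = bits * cycles_per_bit
    return cycles / cpu_hz, kappa * cycles * cpu_hz ** 2

# Example: 0.6 Mbit offloaded to the UAV over a 12 MHz C-band link, 0.4 Mbit kept local.
r_uav = shannon_rate(12e6, 0.5, 1e-7, 1e-13)
t_tx, e_tx = tx_delay_energy(0.6e6, r_uav, 0.5)
t_uav, _ = comp_delay_energy(0.6e6, 500, 3e9, 1e-28)    # UAV-side execution
t_loc, e_loc = comp_delay_energy(0.4e6, 500, 1e9, 1e-27)  # local execution

# Parallel execution: end-to-end delay is the slower of the two branches;
# the device's energy is its local computation plus its transmission energy.
delay = max(t_loc, t_tx + t_uav)
energy = e_loc + e_tx
print(f"delay ~ {delay*1e3:.1f} ms, device energy ~ {energy*1e3:.1f} mJ")
```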
For C-MEC, each UE's utility captures the benefit of offloading in terms of end-to-end latency and execution cost, and is maximized subject to assignment, delay, and CPU allocation constraints. Both quantities are tightly coupled to the partial offloading ratios and resource shares (Aliyu et al., 8 Apr 2024).
4. Solution Methods: Alternating and Distributed Optimization
In SAMIN contexts, the optimization problem's non-convexity (arising from the coupling between offloading and resource-allocation variables and from the mixed integer-continuous offloading splits) is addressed via an Alternating Optimization (AO) procedure, decomposed into two layers:
- Layer 1: offloading mode and volume selection, solved as convex subproblems by a multi-round iterative search (MRIS).
- Layer 2: computation resource allocation, solved by convex optimization and KKT conditions, yielding closed-form CPU assignments.
Iterative AO cycles converge in 5–10 iterations to the joint optimal offloading ratio and resource allocation (Wang et al., 3 Dec 2025).
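To make the two-layer structure concrete, the toy Python sketch below alternates between a per-device grid search over offloading fractions (a stand-in for MRIS) and a workload-proportional CPU split (a stand-in for the KKT closed form), assuming each task is fully offloaded and only split between the two edges. The model, parameter values, and the soft delay penalty are assumptions for illustration, not the formulation of (Wang et al., 3 Dec 2025).

```python
# Toy two-layer alternating optimization for a double-edge offloading split.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.uniform(0.5e6, 2e6, size=5)       # task sizes of 5 MASSs (bits)
CYC_PER_BIT = 500.0                          # CPU cycles per input bit
F_UAV, F_SAT = 20e9, 100e9                   # edge CPU budgets (Hz)
R_UAV, R_SAT = 2e8, 5e7                      # uplink rates (bps)
P_TX, T_MAX = 0.5, 0.1                       # transmit power (W), delay bound (s)

def layer2_cpu(alpha):
    """Layer 2 stand-in: share each server's CPU in proportion to its workload."""
    w_uav = alpha * bits * CYC_PER_BIT
    w_sat = (1 - alpha) * bits * CYC_PER_BIT
    return (F_UAV * w_uav / max(w_uav.sum(), 1.0),
            F_SAT * w_sat / max(w_sat.sum(), 1.0))

def energy_delay(alpha, f_uav, f_sat):
    """Total device transmission energy and worst-case end-to-end delay."""
    t_uav = alpha * bits / R_UAV + alpha * bits * CYC_PER_BIT / np.maximum(f_uav, 1.0)
    t_sat = (1 - alpha) * bits / R_SAT + (1 - alpha) * bits * CYC_PER_BIT / np.maximum(f_sat, 1.0)
    energy = P_TX * (alpha * bits / R_UAV + (1 - alpha) * bits / R_SAT)
    return energy.sum(), np.maximum(t_uav, t_sat).max()

alpha = np.full(5, 0.5)                       # fraction of each task sent to the UAV
grid = np.linspace(0.0, 1.0, 51)
for _ in range(10):                           # AO typically settles within a few rounds
    f_uav, f_sat = layer2_cpu(alpha)          # Layer 2: CPU allocation, fractions fixed
    for i in range(len(bits)):                # Layer 1: per-device search, CPU fixed
        def obj(a, i=i):
            trial = alpha.copy()
            trial[i] = a
            e, d = energy_delay(trial, f_uav, f_sat)
            return e + 1e3 * max(0.0, d - T_MAX)   # soft delay constraint
        alpha[i] = min(grid, key=obj)
print("fractions offloaded to the UAV:", np.round(alpha, 2))
```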
In C-MEC/COIN, a low-complexity distributed offloading scheme leverages game theory, modeling the assignment as an exact potential game (EPG), guaranteeing existence and attainability of Nash equilibria via better-response updates. For dynamic, proactive decision making (including resource allocation and offloading ratio prediction), a Double Deep Q-Network (DDQN) is deployed, operating over real and simulated digital twin states. The DDQN refines online and target Q-networks, using experience replay and discounted rewards to provide robust, latency-optimal offloading policies, outperforming random strategies and pure MEC baselines (Aliyu et al., 8 Apr 2024).
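As a minimal illustration of the DDQN component, the sketch below computes the Double-DQN learning target, in which the online network selects the next action and the target network evaluates it (decoupling selection from evaluation to reduce overestimation). The array-based Q-values, reward scale, and batch layout are assumptions for exposition, not the cited implementation.

```python
# Double-DQN target for a batch sampled from an (assumed) experience-replay buffer.
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """y = r + gamma * Q_target(s', argmax_a Q_online(s', a)) for non-terminal s'."""
    a_star = np.argmax(q_online_next, axis=1)                 # action selection (online net)
    q_eval = q_target_next[np.arange(len(a_star)), a_star]    # action evaluation (target net)
    return rewards + gamma * q_eval * (1.0 - dones)

# Example batch over DT-simulated states with three offloading choices per step.
rng = np.random.default_rng(1)
batch, n_actions = 4, 3                                       # e.g. {local, CN, MEC}
q_on = rng.normal(size=(batch, n_actions))                    # online-network Q-values at s'
q_tg = rng.normal(size=(batch, n_actions))                    # target-network Q-values at s'
r = rng.uniform(0, 1, size=batch)                             # utility-based reward
done = np.zeros(batch)                                        # no terminal states in this batch
print(ddqn_targets(q_on, q_tg, r, done))
```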
5. Performance Analysis and Benchmarks
Simulated SAMIN deployments (1 LEO, 4 UAVs, 5 MASSs/UAV) with realistic settings (12 MHz C-band and 15 MHz Ka-band bandwidths, device-specific CPU cycles per bit) reveal:
- AO scheme converges efficiently (~5–10 iterations).
- Longer transmission times reduce energy but increase latency; larger task volumes shift a greater share of the offloaded workload to the satellite.
- The optimal solution minimizes energy for given channel/delay parameters, demonstrating 25–35% energy savings over equal-share (EOS) and 15–25% over local/entire-edge routing (POMT/EACR) across varying input sizes, device numbers, and CPU resources (Wang et al., 3 Dec 2025).
C-MEC benchmarks (DDQN-EPG, EPG-Rand, MEC) show DDQN-EPG achieving up to 20% higher utility, with utility improvements of 36–87% across tasks and an improvement of 47–64% in aggregate utility versus baselines as the number of UEs and CNs increases (Aliyu et al., 8 Apr 2024).
6. System-Level Insights and Design Guidelines
Double-edge offloading balances edge proximity (for latency) against central resource abundance (for heavy or overflow workloads). UAVs or COIN nodes handle low-latency, lightweight tasks, while satellites or MEC servers absorb large tasks and overflow traffic when local edge capacity is exhausted. The joint optimization framework dynamically tunes offloading fractions and resource allocations based on channel states, coverage times, and mobility, yielding robust operation under highly variable conditions.
Recommended design paths include the following (a heuristic sketch follows this list):
- Sizing edge-server CPU in proportion to anticipated local load.
- Reserving central/server resources for peak or overflow situations.
- Dynamically adapting offloading fractions and resource allocations guided by channel and coverage state.
- Employing fast, iterative update schemes (AO, DDQN) for mobility and demand tracking.
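As an illustration of the adaptive-fraction guideline, the toy heuristic below (not taken from the cited works; all thresholds are assumed) raises the share of offloaded bits sent to the proximal edge when its link is fast and lightly loaded, and falls back to the central server when coverage is about to expire.

```python
# Assumed illustrative heuristic for adapting the proximal-edge offloading fraction.
def adapt_fraction(prev_frac, link_rate_bps, edge_load, coverage_s,
                   rate_ref_bps=1e8, load_ref=0.7, min_cov_s=1.0, step=0.1):
    """Return an updated fraction of offloaded bits sent to the proximal edge."""
    if coverage_s < min_cov_s:                 # about to leave UAV/CN coverage
        return 0.0                             # route everything to the central server
    if link_rate_bps > rate_ref_bps and edge_load < load_ref:
        return min(1.0, prev_frac + step)      # proximal edge has headroom
    return max(0.0, prev_frac - step)          # shed load toward the central server

print(adapt_fraction(0.5, link_rate_bps=2e8, edge_load=0.4, coverage_s=5.0))  # -> 0.6
```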
A plausible implication is substantial energy savings and latency improvements for mission-critical, resource-constrained, multi-tier networks targeted for 6G deployments or industrial IoT settings.
7. Relation to Broader Research Directions
The double-edge paradigm generalizes partial and hybrid offloading concepts seen in multi-access edge computing and in-network computation, advancing from single-resource schemes to collaborative resource partitioning for energy and delay minimization. Integration with digital twin frameworks and reinforcement learning-driven optimization (DDQN) signals convergence with cyber-physical system orchestration and intelligent resource management under strict QoS and user utility constraints. Continued research may focus on expanding solution frameworks toward multi-agent, multi-objective settings and integrating security, robustness, and adaptive learning to further enhance system utility and resilience (Wang et al., 3 Dec 2025, Aliyu et al., 8 Apr 2024).