SAMINs: Integrating Space, Air, & Marine Networks

Updated 10 December 2025
  • SAMINs are integrated multi-tier networks that combine space, aerial, and marine nodes to enable efficient double-edge-assisted computation offloading.
  • They partition workloads between proximal resources like UAVs or COIN nodes and remote servers such as LEO satellites or MEC servers, optimizing latency and energy use.
  • Advanced methods, including alternating optimization and DDQN-based reinforcement learning, achieve up to 35% energy savings and significant delay reductions.

Double-edge-assisted computation offloading refers to a framework in which computational workloads from edge devices (such as maritime autonomous surface ships (MASSs) or user equipments (UEs)) are partitioned and offloaded, concurrently or partially, to two distinct edge resources—typically a proximate aerial or terrestrial node (e.g., UAV, COIN node) and a remote or high-capacity edge server (e.g., LEO satellite, MEC server). This paradigm aims for joint energy efficiency and latency minimization through optimal offloading mode selection, volume partitioning, and resource allocation under realistic network and device constraints, leveraging multi-access communications and advanced distributed optimization techniques (Wang et al., 3 Dec 2025, Aliyu et al., 8 Apr 2024).

1. Network Architectures and Modeling Paradigms

Double-edge-assisted offloading has been proposed in heterogeneous architectures exhibiting strong multi-tier characteristics. In Space-Air-Marine Integrated Networks (SAMINs), MASSs are grouped under M serving UAVs, with every UAV and a LEO satellite equipped with edge servers. MASSs can split their computation tasks, offloading portions simultaneously to both the serving UAV and the LEO satellite via OFDMA, under slotted-time operation. Control signaling involves MASSs sending offloading requests, UAVs updating their resource status to the satellite, and the satellite broadcasting centralized offloading/resource-allocation assignments (Wang et al., 3 Dec 2025). Similarly, in C-MEC systems supporting Industrial IoT, UEs connect to both in-network computing nodes (COIN nodes, CNs) and a central MEC server, leveraging URLLC links. Digital Twin (DT) layers are established to maintain real-time replicas of device and resource states, enabling accurate latency evaluation and future resource predictions (Aliyu et al., 8 Apr 2024).

2. Offloading Scheme and Resource Variables

In double-edge scenarios, the offloading split is governed by key variables:

  • For each edge device, the total input of $S_{mn}$ bits may be partitioned such that $s_{mn}$ bits are offloaded; within those, a fraction $a_{mn}$ is sent to the first edge (UAV, COIN node) and $(1-a_{mn})$ to the second (LEO, MEC server).
  • CPU cycles per input bit and per-cycle energy coefficients are device-specific, denoted $c_{mn}, c_m, c^L$ and $P^l_{mn}, P^U_{mn}, P^L_{mn}$.
  • Constraints encompass CPU capacities ($\rho^{\max}$ for UAVs/satellites/MEC), power and energy budgets, maximum tolerable delay ($T^{\text{max}}$), and physical coverage limits.
  • In C-MEC, offloading decision variables $s_{mj}$, offloading ratios $\lambda_{mk}$ (to COIN nodes) and $\alpha_m$ (to the MEC server), and CPU allocation shares $\beta_m$ are jointly optimized, subject to resource and assignment constraints (Aliyu et al., 8 Apr 2024).

The table below summarizes key offloading variables in double-edge-assisted paradigms:

System          Edge 1 (Local/Proximal)    Edge 2 (Remote/Central)    Offloading Partition
SAMIN           UAV                        LEO Satellite              $a_{mn}$, $s_{mn}$
C-MEC / COIN    COIN node (CN)             MEC server (ES)            $\lambda_{mk}$, $\alpha_m$
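
To make the partition variables concrete, the following is a minimal sketch of how a single device's task split could be represented in code; the class name, fields, and example numbers are illustrative assumptions, not taken from either paper.

```python
from dataclasses import dataclass

@dataclass
class OffloadSplit:
    """Illustrative double-edge task split for one device (m, n).

    Symbols follow the SAMIN notation above; all values are assumptions.
    """
    S_mn: float   # total input bits of the task
    s_mn: float   # bits selected for offloading (0 <= s_mn <= S_mn)
    a_mn: float   # fraction of offloaded bits sent to edge 1 (UAV / COIN node)

    def bits_local(self) -> float:
        return self.S_mn - self.s_mn

    def bits_edge1(self) -> float:
        return self.a_mn * self.s_mn

    def bits_edge2(self) -> float:
        return (1.0 - self.a_mn) * self.s_mn


# Example: a 2 Mbit task, 1.5 Mbit offloaded, 60% of that to the proximal edge.
split = OffloadSplit(S_mn=2e6, s_mn=1.5e6, a_mn=0.6)
print(split.bits_local(), split.bits_edge1(), split.bits_edge2())
```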

3. Problem Formulation: Joint Optimization Under Constraints

The goal is to minimize total system energy under latency, resource, and operational constraints. For SAMINs, the objective is:

$$\min_{\{a_{mn},\, s_{mn},\, \rho^U_{mn},\, \rho^L_{mn}\}} \; E^{\rm tot} = \sum_{m,n} E_{mn}^{\rm tot}$$

where $E_{mn}^{\rm tot}$ aggregates local and offloading communication/computation energy, constrained by delay, CPU, distance, power, and energy budgets. The main equations involve (see the sketch after this list):

  • Link rate via Shannon’s formula, e.g., $R_{mn}^U$ and $R_{mn}^L$.
  • Transmission time and communication energy for each path.
  • Computation delay and energy for local, UAV, and satellite computations.
  • End-to-end delay and total per-device energy, $T_{mn}^{\rm tot}$ and $E_{mn}^{\rm tot}$.
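
As a rough numerical illustration of how these pieces combine, the sketch below strings together a Shannon-rate link model, transmission time/energy, and remote computation delay/energy for the two offloading legs. All bandwidths, powers, channel gains, and CPU figures are placeholder assumptions, not the paper's parameter values.

```python
import math

def shannon_rate(bandwidth_hz, tx_power_w, channel_gain, noise_w):
    """Achievable link rate R = B * log2(1 + P*g / N)  [bits/s]."""
    return bandwidth_hz * math.log2(1.0 + tx_power_w * channel_gain / noise_w)

def offload_leg(bits, rate_bps, tx_power_w, cycles_per_bit, cpu_hz, energy_per_cycle):
    """Delay and energy for one offloading leg: transmit, then compute remotely."""
    t_tx = bits / rate_bps                       # transmission time
    e_tx = tx_power_w * t_tx                     # communication energy at the device
    t_cmp = bits * cycles_per_bit / cpu_hz       # remote computation delay
    e_cmp = bits * cycles_per_bit * energy_per_cycle
    return t_tx + t_cmp, e_tx + e_cmp

# Illustrative end-to-end accounting for one MASS (all numbers are assumptions).
R_U = shannon_rate(12e6, 1.0, 1e-7, 1e-10)       # MASS -> UAV link
R_L = shannon_rate(15e6, 2.0, 1e-9, 1e-10)       # MASS -> LEO link
s_mn, a_mn, c = 1.5e6, 0.6, 1e3                  # offloaded bits, split, cycles/bit
T_U, E_U = offload_leg(a_mn * s_mn, R_U, 1.0, c, 5e9, 1e-9)
T_L, E_L = offload_leg((1 - a_mn) * s_mn, R_L, 2.0, c, 20e9, 1e-9)
T_tot, E_tot = max(T_U, T_L), E_U + E_L          # parallel legs: delay is the max
print(T_tot, E_tot)
```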

For C-MEC, the utility of UE $m$ is given as:

$$U_m(s, \Phi) = \sum_{j\in\mathcal{K}\cup\{0\}} s_{mj} \left[ g_t \left(T_m^{em} - T_m^{e2e}\right) - p_j \Phi_j C_m \right]$$

subject to assignment, delay, and CPU allocation constraints. The end-to-end latency and execution cost are tightly coupled to partial offloading ratios and resource shares (Aliyu et al., 8 Apr 2024).
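
The utility above translates almost directly into code. The following is a minimal sketch under assumed semantics for the symbols: $g_t$ as a latency-saving gain, $p_j$ and $\Phi_j$ as the unit price and CPU occupancy of server $j$, and $C_m$ as the UE's computation demand; index 0 stands for the MEC server and the remaining indices for COIN nodes. All example numbers are illustrative.

```python
def ue_utility(s_mj, g_t, T_em, T_e2e, prices, phi, C_m):
    """Utility of one UE: sum over candidate servers j (MEC server j=0 plus COIN nodes).

    s_mj[j]  : binary assignment of UE m to server j
    g_t      : gain per unit of latency saved against the deadline T_em
    T_e2e[j] : end-to-end latency if served by j
    prices[j], phi[j] : unit price and CPU occupancy of server j
    C_m      : computation demand of UE m (CPU cycles)
    """
    return sum(
        s_mj[j] * (g_t * (T_em - T_e2e[j]) - prices[j] * phi[j] * C_m)
        for j in range(len(s_mj))
    )

# Example: UE m assigned to COIN node 1 (index order: [MEC, CN1, CN2]).
print(ue_utility(s_mj=[0, 1, 0], g_t=10.0, T_em=0.05,
                 T_e2e=[0.04, 0.02, 0.03],
                 prices=[5e-7, 2e-7, 2e-7],   # assumed price per occupied cycle
                 phi=[0.3, 0.4, 0.4], C_m=1e6))
```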

4. Solution Methods: Alternating and Distributed Optimization

In SAMIN contexts, the optimization problem’s non-convexity (arising from the coupling and mixed integer-continuous offloading splits) is addressed via an Alternating Optimization (AO) procedure, decomposed into two layers:

  • Layer 1: Offloading mode $\{a_{mn}\}$ and volume $\{s_{mn}\}$, solved as convex subproblems via multi-round iterative search (MRIS).
  • Layer 2: Computation resource allocation $\{\rho^U_{mn}, \rho^L_{mn}\}$, solved by convex optimization and KKT conditions, yielding closed-form CPU assignments.

Iterative AO cycles converge in 5–10 iterations to the jointly optimal offloading ratio and resource allocation (Wang et al., 3 Dec 2025).
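
A minimal sketch of this two-layer alternating structure is given below; the inner solvers are passed in as callbacks that stand in for the paper's MRIS search and KKT-based closed-form allocation, which are not reproduced here.

```python
def alternating_optimization(init_a, init_s, init_rho, energy_fn,
                             solve_offloading, solve_resources,
                             max_iters=20, tol=1e-3):
    """Two-layer AO: alternate between (a, s) and (rho_U, rho_L) until the
    total-energy objective stops improving. The solver callbacks are
    placeholders for the paper's MRIS search and KKT closed-form allocation."""
    a, s, rho = init_a, init_s, init_rho
    prev = energy_fn(a, s, rho)
    cur = prev
    for _ in range(max_iters):
        a, s = solve_offloading(rho)        # Layer 1: offloading mode and volume
        rho = solve_resources(a, s)         # Layer 2: CPU allocation (closed form)
        cur = energy_fn(a, s, rho)
        if abs(prev - cur) < tol:           # typically converges in a few iterations
            break
        prev = cur
    return a, s, rho, cur
```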

In C-MEC/COIN, a low-complexity distributed offloading scheme leverages game theory, modeling the assignment as an exact potential game (EPG), guaranteeing existence and attainability of Nash equilibria via better-response updates. For dynamic, proactive decision making (including resource allocation and offloading ratio prediction), a Double Deep Q-Network (DDQN) is deployed, operating over real and simulated digital twin states. The DDQN refines online and target Q-networks, using experience replay and discounted rewards to provide robust, latency-optimal offloading policies, outperforming random strategies and pure MEC baselines (Aliyu et al., 8 Apr 2024).
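
The step that distinguishes DDQN from vanilla DQN is the decoupled target: the online network selects the next action and the target network evaluates it. The PyTorch sketch below shows that update for a generic replay batch; the state/action interface is an assumption and does not reproduce the paper's DT-based state design or reward shaping.

```python
import torch
import torch.nn as nn

def ddqn_loss(online_net: nn.Module, target_net: nn.Module, batch, gamma=0.99):
    """Double-DQN temporal-difference loss on a replay batch.

    batch = (states, actions, rewards, next_states, dones); states have shape
    [B, state_dim], actions is a LongTensor of action indices [B], and the
    networks map a state batch to per-action Q-values [B, n_actions].
    This interface is illustrative, not the paper's exact design.
    """
    states, actions, rewards, next_states, dones = batch
    q = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Online net picks the action, target net evaluates it (the "double" part).
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        target = rewards + gamma * (1.0 - dones) * next_q
    return nn.functional.mse_loss(q, target)
```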

5. Performance Analysis and Benchmarks

Simulated SAMIN deployments (1 LEO, 4 UAVs, 5 MASSs/UAV) with real-world settings (C-band 12 MHz, Ka-band 15 MHz, $10^3$ CPU cycles/bit) reveal:

  • AO scheme converges efficiently (~5–10 iterations).
  • Longer transmission time reduces energy but increases latency; increased task volume shifts offloading to the satellite (lower $a_{mn}$).
  • The optimal solution minimizes energy for given channel/delay parameters, demonstrating 25–35% energy savings over equal-share (EOS) and 15–25% over local/entire-edge routing (POMT/EACR) across varying input sizes, device numbers, and CPU resources (Wang et al., 3 Dec 2025).

C-MEC benchmarks (DDQN-EPG, EPG-Rand, MEC) show DDQN-EPG achieving up to 20% higher utility, with utility improvements of 36–87% across tasks and an improvement of 47–64% in aggregate utility versus baselines as the number of UEs and CNs increases (Aliyu et al., 8 Apr 2024).

6. System-Level Insights and Design Guidelines

Double-edge offloading balances edge proximity (for latency) and central resource abundance (for heavy/overflow workloads). UAVs or COIN nodes handle low-latency, lightweight tasks, while satellites or MEC servers absorb large-size tasks or when local edge capacity is exhausted. The joint optimization framework dynamically tunes offloading fractions and resource allocations based on channel states, coverage times, and mobility, yielding robust operation under highly variable conditions.

Recommended design paths include:

  1. Sizing edge-server CPU in proportion to anticipated local load.
  2. Reserving central/server resources for peak or overflow situations.
  3. Dynamically adapting offloading fractions and resource allocations guided by channel and coverage state (a toy heuristic along these lines is sketched after this list).
  4. Employing fast, iterative update schemes (AO, DDQN) for mobility and demand tracking.
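
As an illustration of guideline 3, the heuristic below nudges a device's proximal-edge fraction up when the UAV/COIN link, CPU load, and remaining coverage time all have headroom, and shifts load toward the remote edge otherwise. Thresholds and step size are arbitrary assumptions; the cited works instead solve this adaptation through AO and DDQN.

```python
def adapt_offload_fraction(a_mn, snr_uav_db, uav_cpu_load, coverage_s,
                           step=0.05, snr_floor_db=5.0, load_ceiling=0.8,
                           min_coverage_s=2.0):
    """Toy heuristic (not from either paper): adjust the fraction sent to the
    proximal edge (UAV/COIN node) based on link quality, CPU load, and the
    remaining coverage time of the serving node."""
    headroom = (snr_uav_db >= snr_floor_db
                and uav_cpu_load <= load_ceiling
                and coverage_s >= min_coverage_s)
    if headroom:
        return min(1.0, a_mn + step)   # proximal edge has headroom: keep tasks close
    return max(0.0, a_mn - step)       # otherwise push more bits to the remote edge
```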

A plausible implication is substantial energy savings and latency improvements for mission-critical, resource-constrained, multi-tier networks targeted for 6G deployments or industrial IoT settings.

7. Relation to Broader Research Directions

The double-edge paradigm generalizes partial and hybrid offloading concepts seen in multi-access edge computing and in-network computation, advancing from single-resource schemes to collaborative resource partitioning for energy and delay minimization. Integration with digital twin frameworks and reinforcement learning-driven optimization (DDQN) signals convergence with cyber-physical system orchestration and intelligent resource management under strict QoS and user utility constraints. Continued research may focus on expanding solution frameworks toward multi-agent, multi-objective settings and integrating security, robustness, and adaptive learning to further enhance system utility and resilience (Wang et al., 3 Dec 2025, Aliyu et al., 8 Apr 2024).
