
Edge-Cloud Cooperative Positioning

Updated 7 February 2026
  • The paper demonstrates that neural compression and quantization techniques enable a reduction of fronthaul usage to as little as 6.25%, while maintaining sub-meter mean 3D error.
  • The framework leverages distributed edge base stations for CSI acquisition and cloud-based transformer-LSTM fusion to produce precise 3D localization in urban NLOS environments.
  • A two-stage training approach, combining self-supervised pretraining and end-to-end optimization, ensures robust positioning performance under tight fronthaul constraints.

Edge-cloud cooperative positioning frameworks enable high-precision, three-dimensional positioning in dense urban non-line-of-sight (NLOS) environments by leveraging distributed processing at edge base stations (BSs) and centralized fusion at a cloud-based central unit (CU). The paradigm is driven by the need to jointly utilize channel fingerprinting measurements from multiple spatially distributed BSs, while confronting the prohibitive fronthaul overhead associated with forwarding raw channel state information (CSI) from all sites to a central processor. Recent advances deploy neural compression and quantization at the edge, enabling efficient fronthaul utilization without a significant degradation in positioning accuracy (An et al., 31 Jan 2026).

1. System Architecture and Operational Data Flow

In the fronthaul-efficient distributed cooperative 3D positioning architecture, each BS $i \in \{1, \dots, L\}$ acquires uplink pilot signals over $T$ time slots, $N_{sc}$ OFDM subcarriers, and $N_r$ antennas, yielding an estimated CSI tensor $\mathbf{H}_i \in \mathbb{C}^{T \times N_r \times N_{sc}}$ per snapshot. This CSI is phase-stabilized, gain-normalized, and arranged into a real-valued tensor $\mathbf{X}_i \in \mathbb{R}^{(T N_r) \times N_{sc} \times 2}$ suitable for machine learning processing.
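The preprocessing step can be sketched in NumPy. The specific phase reference (the first antenna/subcarrier of each slot) and the snapshot-level Frobenius normalization are illustrative assumptions; the text only names the operations, not their exact form:

```python
import numpy as np

T, Nr, Nsc = 10, 8, 24  # slots, antennas, subcarriers (the paper's evaluation setup)
rng = np.random.default_rng(0)
# Stand-in complex CSI snapshot in place of a real channel estimate.
H = rng.standard_normal((T, Nr, Nsc)) + 1j * rng.standard_normal((T, Nr, Nsc))

# Phase-stabilize: rotate each slot so a reference entry (here the first
# antenna/subcarrier) has zero phase -- one simple choice, not necessarily the paper's.
ref = H[:, :1, :1]
H_stab = H * np.exp(-1j * np.angle(ref))

# Gain-normalize by the Frobenius norm; g is kept as optional side information.
g = np.linalg.norm(H_stab)
H_norm = H_stab / g

# Stack real/imaginary parts into X in R^{(T*Nr) x Nsc x 2}.
X = np.stack([H_norm.real, H_norm.imag], axis=-1).reshape(T * Nr, Nsc, 2)
```

The resulting array has unit Frobenius norm, so the gain scalar $g_i$ carries the only scale information the CU needs to recover.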

A neural encoder $f_{e,i}(\cdot;\theta_i)$ transforms $\mathbf{X}_i$ into a frequency-aligned latent embedding $\mathbf{Z}_i \in \mathbb{R}^{N_{sc} \times d_z}$, which is vectorized into $\mathbf{z}_i \in \mathbb{R}^{D}$ with $D = N_{sc} d_z$. A uniform mid-rise quantizer $Q(\cdot)$ with $Q$ bits per coefficient produces a bitstream $\mathbf{b}_i \in \{0,1\}^{B}$, where $B = DQ$, adhering to the fronthaul capacity constraint $B \leq C_{\rm fronthaul}$. Optionally, the normalized Frobenius gain $g_i$ is quantized and transmitted as side information.

The CU receives quantized embeddings $\{\mathbf{b}_i\}_{i=1}^L$ and gain scalars $\{g_i\}$, jointly decodes latents $\{\widehat{\mathbf{z}}_i\}$ via $f_d(\cdot; \phi)$, and applies a fusion-then-regression network $g(\cdot)$ to predict the 3D position $\widehat{\mathbf{p}}$.

2. Quantization and Fronthaul Compression Methodology

Quantization of edge embeddings follows a uniform mid-rise strategy, defined for a latent scalar $y$ as

$$
Q(y) = \begin{cases}
+\frac{(2^Q-1)\Delta}{2}, & y > \frac{(2^Q-1)\Delta}{2} \\
\left\lfloor \frac{y}{\Delta} \right\rfloor \Delta + \frac{\Delta}{2}, & -\frac{(2^Q-1)\Delta}{2} \leq y \leq \frac{(2^Q-1)\Delta}{2} \\
-\frac{(2^Q-1)\Delta}{2}, & y < -\frac{(2^Q-1)\Delta}{2}
\end{cases}
$$

with $\Delta$ chosen from training-set statistics.
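The mid-rise rule above amounts to "round to the nearest cell midpoint, then saturate at the clipping range". A minimal NumPy sketch:

```python
import numpy as np

def midrise_quantize(y, Q, delta):
    """Uniform mid-rise quantizer with 2**Q levels and step size delta.

    In-range values map to the cell midpoint floor(y/delta)*delta + delta/2;
    values beyond the clipping range A = (2**Q - 1) * delta / 2 saturate to +/-A.
    """
    A = (2 ** Q - 1) * delta / 2.0
    q = np.floor(y / delta) * delta + delta / 2.0
    return np.clip(q, -A, A)
```

For example, with $Q = 2$ and $\Delta = 1$ the reconstruction levels are $\pm 0.5$ and $\pm 1.5$, and any input beyond $\pm 1.5$ saturates.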

The bit budget for forwarding full (lossless) CSI is $B_{\rm CSI} = 64\, T N_r N_{sc}$, assuming complex64 encoding (32 bits each for the real and imaginary parts of every coefficient). In contrast, quantizing the embedding reduces the payload to $B_{\rm emb} = DQ$. The fronthaul reduction ratio is

$$\eta = \frac{B_{\rm emb}}{B_{\rm CSI}}.$$

This approach can achieve compression to as little as $6.25\%$ of the full lossless link, with empirically verified sub-meter positioning accuracy (An et al., 31 Jan 2026).
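Plugging in the paper's evaluation dimensions shows where the headline ratios come from:

```python
# Dimensions from the paper's evaluation setup.
T, Nr, Nsc, dz = 10, 8, 24, 32
D = Nsc * dz  # vectorized embedding length

def fronthaul_ratio(Q, bits_per_complex=64):
    """Ratio eta = B_emb / B_CSI for a Q-bit uniform quantizer."""
    B_csi = bits_per_complex * T * Nr * Nsc  # lossless complex64 CSI payload
    B_emb = D * Q                            # quantized embedding payload
    return B_emb / B_csi
```

Here $Q = 10$ bits reproduces the reported $6.25\%$, and $Q = 8$ and $Q = 4$ give $5\%$ and $2.5\%$, matching the evaluation table below.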

3. Cloud-Side Cooperative Fusion and Regression

At the CU, decoded embeddings are aggregated via channel-masked attention (CMA), where each BS's contribution is adaptively weighted according to its normalized signal gain via $m_i = \mathrm{softmax}(\beta g_i)$ and a modified Transformer-style attention operator. The fused per-subcarrier tokens $\{\mathbf{v}_n\}_{n=1}^{N_{sc}}$ are then fed sequentially into an LSTM:

$$(s_n, c_n) = \mathrm{LSTM}(\mathbf{v}_n, s_{n-1}, c_{n-1}).$$

Each LSTM state produces an intermediate 3D estimate $\widehat{\mathbf{p}}_n$, with the final position taken as $\widehat{\mathbf{p}} = \widehat{\mathbf{p}}_{N_{sc}}$.
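The gain-based masking can be sketched as follows. Only the softmax weighting $m_i = \mathrm{softmax}(\beta g_i)$ is taken from the text; the weighted sum is a simplified stand-in for the paper's full attention operator, and the function names are illustrative:

```python
import numpy as np

def cma_weights(g, beta=1.0):
    """Gain-based attention mask: softmax over the per-BS normalized gains."""
    e = np.exp(beta * (g - np.max(g)))  # max-shifted for numerical stability
    return e / e.sum()

def fuse_tokens(Z_hat, g, beta=1.0):
    """Fuse per-BS latent tokens Z_hat of shape (L, Nsc, dz) into (Nsc, dz).

    A convex combination weighted by m_i -- a simplified surrogate for the
    channel-masked attention operator described in the text.
    """
    m = cma_weights(np.asarray(g, dtype=float), beta)
    return np.tensordot(m, Z_hat, axes=(0, 0))
```

Because the weights sum to one, a BS with low received gain (e.g. a deeply shadowed NLOS link) is smoothly down-weighted rather than hard-dropped.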

The CU network is trained to minimize the end-to-end positioning loss

$$\mathcal{L}_{\rm pos} = \mathbb{E}\left[\|\widehat{\mathbf{p}} - \mathbf{p}\|_2^2\right]$$

or, in variants, a weighted MSE across all subcarriers.

4. Training Strategy: Self-Supervised Pretraining and End-to-End Optimization

The framework adopts a two-stage training regimen:

Stage I (local pretraining): Each BS autoencoder $(\theta_i, \psi_i)$ is trained on unlabeled CSI snapshots to minimize the negative cosine similarity between the original and reconstructed CSI vectors:

$$\mathcal{L}_{\rm cos} = -\frac{|\langle h_{\rm CSI}, \widehat{h}_{\rm CSI} \rangle|}{\|h_{\rm CSI}\|_2 \|\widehat{h}_{\rm CSI}\|_2 + \epsilon}$$

with $\epsilon > 0$ for numerical stability. The decoder $\psi_i$ is discarded after pretraining, while the encoder $\theta_i$ is retained for downstream use.
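A minimal sketch of this pretraining loss, assuming vectorized (possibly complex) CSI; the absolute value makes the objective invariant to a global phase rotation of the reconstruction:

```python
import numpy as np

def cos_pretrain_loss(h, h_hat, eps=1e-8):
    """Negative absolute cosine similarity between original and reconstructed CSI.

    Reaches -1 (up to eps) when h_hat matches h up to a complex scale factor;
    eps guards the denominator against zero-norm inputs.
    """
    num = np.abs(np.vdot(h, h_hat))               # |<h, h_hat>| (conjugated inner product)
    den = np.linalg.norm(h) * np.linalg.norm(h_hat) + eps
    return -num / den
```
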

Stage II (joint end-to-end training): With labeled CSI–position pairs $(\mathbf{H}_i, \mathbf{p})$, each BS generates quantized embeddings and the CU fusion network produces $\widehat{\mathbf{p}}$. The straight-through estimator (STE) approximates gradients through the quantizer for joint parameter updates:

$$\frac{\partial Q(y)}{\partial y} \approx \begin{cases} 1, & |y| \leq A \\ 0, & |y| > A \end{cases}$$

where $A = \frac{(2^Q-1)\Delta}{2}$ is the quantizer's clipping threshold. Both BS and CU parameters are updated to minimize the positioning loss $\mathcal{L}_{\rm pos}$.
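The STE surrogate gradient is simply an indicator of the non-saturated region, which a sketch makes explicit (in an autodiff framework one would instead write the forward pass as `y + stop_gradient(Q(y) - y)` inside the clip range):

```python
import numpy as np

def ste_grad(y, Q, delta):
    """Straight-through surrogate gradient of the mid-rise quantizer:
    1 where the quantizer is non-saturated (|y| <= A), 0 where it clips."""
    A = (2 ** Q - 1) * delta / 2.0
    return (np.abs(y) <= A).astype(float)
```
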

5. Performance Evaluation and Trade-offs

Experiments using a 3.5 GHz, 20 MHz urban ray-tracing simulation with 6 BSs, 10 pilot slots, 8 antennas, and 24 subcarriers ($d_z = 32$) yield the following:

| Configuration | Mean 3D Error (m) | 90th-Percentile Error (m) | Fronthaul Usage (%) |
|---|---|---|---|
| Full CSI forwarding | 0.42 | 0.75 | 100 |
| Quantized embedding ($Q = 10$ bits) | 0.48 | 0.83 | 6.25 |
| Quantized embedding ($Q = 8$ bits) | 0.52 | 0.90 | 5 |
| Quantized embedding ($Q = 4$ bits) | 0.56 | 0.97 | 2.5 |

These findings indicate that fronthaul-efficient neural embedding methods can retain near-baseline positioning performance, with mean 3D errors remaining sub-meter even at drastic fronthaul compression levels (An et al., 31 Jan 2026). A plausible implication is that further trade-offs are possible by tuning quantizer parameters and latent dimension, subject to application requirements.

Edge-cloud cooperative positioning in dense urban NLOS environments is motivated by the infeasibility of raw CSI transmission due to bandwidth limitations. Channel fingerprinting fusion from multiple BSs offers robust 3D localization by leveraging spatial diversity. The use of neural codecs for CSI compression and quantization distinguishes recent frameworks from traditional approaches reliant on handcrafted feature extraction or fixed quantization.

The two-stage training design aligns with trends in large-scale distributed deep learning, where local representation learning is leveraged before joint centralized finetuning. The incorporation of Transformer-style attention and LSTM-based sequence models for fusing per-subcarrier tokens reflects advances in both signal processing and modern neural architecture design.

The primary result is that edge-quantized CSI embeddings, jointly processed at the cloud, enable near-optimal cooperative positioning with extreme fronthaul reduction. This suggests practical viability for deployment in 5G/6G networks and paves the way for scalable cooperative positioning in resource-constrained fronthaul scenarios (An et al., 31 Jan 2026).
