Edge-Cloud Cooperative Positioning
- The paper demonstrates that neural compression and quantization at the edge reduce fronthaul usage to as little as 6.25% of the lossless CSI payload while maintaining sub-meter mean 3D error.
- The framework leverages distributed edge base stations for CSI acquisition and cloud-based transformer-LSTM fusion to produce precise 3D localization in urban NLOS environments.
- A two-stage training approach, combining self-supervised pretraining and end-to-end optimization, ensures robust positioning performance under tight fronthaul constraints.
Edge-cloud cooperative positioning frameworks enable high-precision, three-dimensional positioning in dense urban non-line-of-sight (NLOS) environments by leveraging distributed processing at edge base stations (BSs) and centralized fusion at a cloud-based central unit (CU). The paradigm is driven by the need to jointly utilize channel fingerprinting measurements from multiple spatially distributed BSs, while confronting the prohibitive fronthaul overhead associated with forwarding raw channel state information (CSI) from all sites to a central processor. Recent advances deploy neural compression and quantization at the edge, enabling efficient fronthaul utilization without a significant degradation in positioning accuracy (An et al., 31 Jan 2026).
1. System Architecture and Operational Data Flow
In the fronthaul-efficient distributed cooperative 3D positioning architecture, each BS acquires uplink pilot signals over $T$ time slots, $K$ OFDM subcarriers, and $N$ antennas, yielding an estimated CSI tensor $\mathbf{H} \in \mathbb{C}^{T \times K \times N}$ per snapshot. This CSI is phase-stabilized, gain-normalized, and arranged as a real-valued matrix $\mathbf{X}$ suitable for machine learning processing.
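A minimal sketch of this per-snapshot preprocessing is shown below. The zero-phase reference convention (rotating against the first tensor entry) and the real/imaginary stacking layout are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def preprocess_csi(H, eps=1e-9):
    """Sketch of per-snapshot CSI preprocessing.
    H: complex CSI tensor of shape (T, K, N) -- pilot slots x subcarriers x antennas.
    Returns a real-valued matrix X and the Frobenius gain g (side information)."""
    # Phase stabilization: rotate the snapshot so a fixed reference entry
    # (here the first element, a hypothetical choice) has zero phase.
    H = H * np.exp(-1j * np.angle(H[0, 0, 0]))
    # Gain normalization: divide by the Frobenius norm, kept as side information.
    g = np.linalg.norm(H)
    Hn = H / (g + eps)
    # Real-valued representation: stack real and imaginary parts along antennas.
    X = np.concatenate([Hn.real, Hn.imag], axis=-1)   # shape (T, K, 2N)
    return X, g
```

The gain scalar is returned rather than discarded because the architecture optionally forwards it as side information to the CU.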
A neural encoder $f_{\mathrm{enc}}$ transforms the real-valued CSI matrix into a frequency-aligned latent embedding $\mathbf{Z} \in \mathbb{R}^{K \times d}$, which is vectorized into $\mathbf{z} \in \mathbb{R}^{L}$ with $L = Kd$. A uniform mid-rise quantizer with $Q$ bits per coefficient produces a bitstream of $QL$ bits, adhering to the fronthaul capacity constraint $QL \le B_{\mathrm{fh}}$. Optionally, the Frobenius gain $g$ used for normalization is quantized and transmitted as side information.
The CU receives the quantized embeddings $\hat{\mathbf{z}}_m$ and gain scalars $\hat{g}_m$ from each BS $m$, jointly decodes the latents, and applies a fusion-then-regression network to predict the 3D position $\hat{\mathbf{p}} \in \mathbb{R}^3$.
2. Quantization and Fronthaul Compression Methodology
Quantization of edge embeddings follows a uniform mid-rise strategy. For a latent scalar $z$ clipped to the range $[-A, A]$ with step size $\Delta = 2A/2^{Q}$, the quantizer output is
$$\mathcal{Q}(z) = \Delta\left(\left\lfloor \frac{z}{\Delta} \right\rfloor + \frac{1}{2}\right),$$
with the clipping range $A$ chosen from training-set statistics.
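A minimal implementation of this quantizer, assuming the clipping range `A` is supplied from training statistics, might look as follows. The integer codes `idx` are what would actually be serialized onto the fronthaul.

```python
import numpy as np

def midrise_quantize(z, Q, A):
    """Uniform mid-rise quantizer with Q bits over the clipping range [-A, A].
    A is chosen from training-set statistics (e.g. a high percentile of |z|)."""
    delta = 2.0 * A / (2 ** Q)                  # step size
    z = np.clip(z, -A, A - 1e-12)               # keep codes inside 2**Q levels
    idx = np.floor(z / delta).astype(np.int64)  # integer code sent over fronthaul
    zq = delta * (idx + 0.5)                    # mid-rise reconstruction level
    return idx, zq
```

The mid-rise reconstruction guarantees a worst-case error of $\Delta/2$ for inputs inside the clipping range; values outside $[-A, A]$ saturate.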
The bit budget for forwarding full (lossless) CSI is $B_{\mathrm{full}} = 64\,TKN$ bits per snapshot, assuming complex64 encoding (64 bits per complex entry). In contrast, quantizing the $L$-dimensional embedding reduces the payload to $QL$ bits. The fronthaul reduction ratio is
$$\rho = \frac{QL}{64\,TKN}.$$
This approach can achieve compression to as little as 6.25% of the full lossless link, with empirically verified sub-meter positioning accuracy (An et al., 31 Jan 2026).
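The arithmetic behind the reported ratios can be checked directly. The latent length `L = 768` below is a hypothetical value chosen so that the ratio reproduces the reported 6.25% at $Q = 10$ for the evaluation setup; it is not stated explicitly in the source.

```python
# Fronthaul bit budgets for the evaluation setup: T=10 pilot slots,
# K=24 subcarriers, N=8 antennas, complex64 (64 bits per complex entry).
T, K, N = 10, 24, 8
B_full = 64 * T * K * N          # lossless CSI payload in bits per snapshot

L = 768                          # assumed latent length (hypothetical)
for Q in (10, 8, 4):             # bits per latent coefficient
    B_emb = Q * L                # quantized-embedding payload in bits
    rho = B_emb / B_full         # fronthaul reduction ratio
    print(f"Q={Q:2d}: {100 * rho:.2f}% of the lossless link")
```

With this assumed `L`, the three configurations in the evaluation table (6.25%, 5%, 2.5%) fall out exactly.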
3. Cloud-Side Cooperative Fusion and Regression
At the CU, decoded embeddings are aggregated via channel-masked attention (CMA), in which each BS's contribution is adaptively weighted according to its normalized signal gain $\hat{g}_m$ through a modified Transformer-style attention operator. The frequency-wise fused tokens $\mathbf{u}_1, \dots, \mathbf{u}_K$ across subcarriers are then fed sequentially to an LSTM, $\mathbf{h}_k = \mathrm{LSTM}(\mathbf{u}_k, \mathbf{h}_{k-1})$. Each LSTM state produces an intermediate 3D estimate $\hat{\mathbf{p}}_k$, with the final position set as $\hat{\mathbf{p}} = \hat{\mathbf{p}}_K$.
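Since the exact CMA operator is not reproduced here, the sketch below substitutes a plain softmax over the gain scalars for the masking-and-attention step and uses a minimal NumPy LSTM cell with an output head; all weight shapes and the fusion rule are illustrative, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gain_weighted_fusion(tokens, gains):
    """Stand-in for CMA: fuse per-BS tokens into one token per subcarrier,
    down-weighting weak (deeply shadowed) links via a softmax over gains.
    tokens: (M, K, d) embeddings from M base stations; gains: (M,) scalars."""
    w = softmax(gains)                        # (M,) BS attention weights
    return np.einsum("m,mkd->kd", w, tokens)  # (K, d) fused subcarrier tokens

def lstm_regress(U, params):
    """Feed fused subcarrier tokens through one LSTM layer; each state emits
    an intermediate 3D estimate, and the last one is the final position."""
    Wx, Wh, b, Wo, bo = params                # gate weights and output head
    d_h = Wh.shape[1]
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    for u in U:                               # iterate over the K tokens
        gates = Wx @ u + Wh @ h + b           # (4*d_h,) pre-activations
        i, f, g, o = np.split(gates, 4)
        i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
        c = f * c + i * np.tanh(g)            # cell-state update
        h = o * np.tanh(c)
        p_k = Wo @ h + bo                     # intermediate 3D estimate p_hat_k
    return p_k                                # final position p_hat = p_hat_K
```

In a trained system the intermediate estimates would also feed the per-subcarrier loss variant; here only the final estimate is returned for brevity.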
The CU network is trained to minimize the end-to-end positioning loss between the estimate $\hat{\mathbf{p}}$ and the ground-truth position $\mathbf{p}$,
$$\mathcal{L}_{\mathrm{pos}} = \left\| \hat{\mathbf{p}} - \mathbf{p} \right\|_2^2,$$
or, in variants, a weighted MSE over the intermediate estimates $\hat{\mathbf{p}}_k$ across all subcarriers.
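Both loss variants are simple to state in code; the weighting vector `w` in the subcarrier variant is an illustrative free parameter.

```python
import numpy as np

def positioning_loss(p_hat, p):
    """Squared Euclidean error for the final 3D estimate."""
    return np.sum((p_hat - p) ** 2)

def weighted_subcarrier_loss(P_hat, p, w):
    """Variant: weighted MSE over all K intermediate per-subcarrier estimates.
    P_hat: (K, 3) intermediate estimates; w: (K,) nonnegative weights."""
    return np.sum(w * np.sum((P_hat - p) ** 2, axis=1))
```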
4. Training Strategy: Self-Supervised Pretraining and End-to-End Optimization
The framework adopts a two-stage training regimen:
Stage I (local pretraining): Each BS autoencoder $(f_{\mathrm{enc}}, f_{\mathrm{dec}})$ is trained on unlabeled CSI snapshots to minimize the negative cosine similarity between the original and reconstructed CSI vectors,
$$\mathcal{L}_{\mathrm{SS}} = -\frac{\langle \mathbf{x}, \hat{\mathbf{x}} \rangle}{\|\mathbf{x}\|_2 \, \|\hat{\mathbf{x}}\|_2 + \epsilon},$$
with $\epsilon > 0$ for numerical stability. The decoder is discarded after pretraining, while the encoder is retained for downstream use.
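The pretraining objective, as a NumPy sketch:

```python
import numpy as np

def neg_cosine_loss(x, x_hat, eps=1e-8):
    """Negative cosine similarity between the original CSI vector x and its
    reconstruction x_hat; eps guards against division by zero for
    near-silent channels. Ranges from -1 (perfect) to +1 (anti-aligned)."""
    num = np.dot(x, x_hat)
    den = np.linalg.norm(x) * np.linalg.norm(x_hat) + eps
    return -num / den
```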
Stage II (joint end-to-end training): With labeled CSI–position pairs $(\mathbf{H}, \mathbf{p})$, each BS generates quantized embeddings and the CU fusion network produces the estimate $\hat{\mathbf{p}}$. The straight-through estimator (STE) approximates gradients through the non-differentiable quantizer as $\partial \mathcal{Q}(z)/\partial z \approx 1$, enabling joint parameter updates: both BS and CU parameters are updated to minimize the positioning loss $\mathcal{L}_{\mathrm{pos}}$.
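The STE is typically realized with a stop-gradient identity; a framework-agnostic sketch of the trick, with an assumed mid-rise quantizer:

```python
import numpy as np

def quantize(z, Q=8, A=1.0):
    """Mid-rise quantizer (non-differentiable: gradient is zero a.e.)."""
    delta = 2.0 * A / (2 ** Q)
    return delta * (np.floor(np.clip(z, -A, A - 1e-12) / delta) + 0.5)

def ste_forward(z):
    """Straight-through estimator as commonly implemented in autograd
    frameworks: the forward pass uses the quantized value, the backward
    pass treats the quantizer as identity. In framework pseudocode:
        z_q = z + stop_gradient(quantize(z) - z)
    so d(z_q)/dz == 1 while the forward value equals quantize(z)."""
    return z + (quantize(z) - z)   # numerically identical to quantize(z)

def ste_grad(z):
    """Gradient surrogate used in the backward pass through the quantizer."""
    return np.ones_like(z)
```

This is what allows the edge encoders and the CU network to be optimized jointly despite the hard quantization step in between.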
5. Performance Evaluation and Trade-offs
Experiments using a 3.5 GHz, 20 MHz urban ray-tracing simulation with $M = 6$ BSs, $T = 10$ pilot slots, $N = 8$ antennas, and $K = 24$ subcarriers yield the following:
| Configuration | Mean 3D Error (m) | 90%-ile Error (m) | Fronthaul Usage (%) |
|---|---|---|---|
| Full CSI forwarding | 0.42 | 0.75 | 100 |
| Quantized embedding (Q=10 bits) | 0.48 | 0.83 | 6.25 |
| Quantized embedding (Q=8 bits) | 0.52 | 0.90 | 5 |
| Quantized embedding (Q=4 bits) | 0.56 | 0.97 | 2.5 |
These findings indicate that fronthaul-efficient neural embedding methods can retain near-baseline positioning performance, with mean 3D errors remaining sub-meter even at drastic fronthaul compression levels (An et al., 31 Jan 2026). A plausible implication is that further trade-offs are possible by tuning quantizer parameters and latent dimension, subject to application requirements.
6. Context, Related Work, and Implications
Edge-cloud cooperative positioning in dense urban NLOS environments is motivated by the infeasibility of raw CSI transmission due to bandwidth limitations. Channel fingerprinting fusion from multiple BSs offers robust 3D localization by leveraging spatial diversity. The use of neural codecs for CSI compression and quantization distinguishes recent frameworks from traditional approaches reliant on handcrafted feature extraction or fixed quantization.
The two-stage training design aligns with trends in large-scale distributed deep learning, where local representation learning is leveraged before joint centralized finetuning. The incorporation of Transformer-style attention and LSTM-based sequence models for fusing per-subcarrier tokens reflects advances in both signal processing and modern neural architecture design.
The primary result is that edge-quantized CSI embeddings, jointly processed at the cloud, enable near-optimal cooperative positioning with extreme fronthaul reduction. This suggests practical viability for deployment in 5G/6G networks and paves the way for scalable cooperative positioning in resource-constrained fronthaul scenarios (An et al., 31 Jan 2026).