
A Unified Transformer Architecture for Low-Latency and Scalable Wireless Signal Processing (2508.17960v1)

Published 25 Aug 2025 in eess.SP

Abstract: We propose a unified Transformer-based architecture for wireless signal processing tasks, offering a low-latency, task-adaptive alternative to conventional receiver pipelines. Unlike traditional modular designs, our model integrates channel estimation, interpolation, and demapping into a single, compact attention-driven architecture designed for real-time deployment. The model's structure allows dynamic adaptation to diverse output formats by simply modifying the final projection layer, enabling consistent reuse across receiver subsystems. Experimental results demonstrate strong generalization to varying user counts, modulation schemes, and pilot configurations, while satisfying latency constraints imposed by practical systems. The architecture is evaluated across three core use cases: (1) an End-to-End Receiver, which replaces the entire baseband processing pipeline from pilot symbols to bit-level decisions; (2) Channel Frequency Interpolation, implemented and tested within a 3GPP-compliant OAI+Aerial system; and (3) Channel Estimation, where the model infers full-band channel responses from sparse pilot observations. In all cases, our approach outperforms classical baselines in terms of accuracy, robustness, and computational efficiency. This work presents a deployable, data-driven alternative to hand-engineered PHY-layer blocks, and lays the foundation for intelligent, software-defined signal processing in next-generation wireless communication systems.
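To make the "task-adaptive final projection layer" idea concrete, below is a minimal sketch (not the authors' code) of a shared Transformer encoder whose output head is swapped to re-target the same backbone to different receiver tasks, e.g. per-subcarrier channel estimation versus bit-level demapping. All module names, dimensions, and the swapping mechanism here are hypothetical illustrations of the abstract's description, not the paper's actual architecture.

```python
# Hypothetical sketch: one shared Transformer backbone, task-specific output head.
import torch
import torch.nn as nn

class UnifiedReceiverTransformer(nn.Module):
    def __init__(self, in_dim=2, d_model=128, n_heads=4, n_layers=4, out_dim=2):
        super().__init__()
        # Embed per-subcarrier inputs (e.g., real/imag parts of received pilot symbols).
        self.input_proj = nn.Linear(in_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Task-adaptive head: only this layer changes between use cases,
        # e.g. out_dim=2 for complex-valued channel estimates per subcarrier,
        # or out_dim=bits_per_symbol for LLR-style demapping outputs.
        self.head = nn.Linear(d_model, out_dim)

    def forward(self, x):
        # x: (batch, n_subcarriers, in_dim)
        h = self.encoder(self.input_proj(x))
        return self.head(h)

# Reuse the same backbone for two tasks by swapping the projection head.
model = UnifiedReceiverTransformer(out_dim=2)   # channel estimation head
pilots = torch.randn(8, 64, 2)                  # hypothetical pilot grid (batch, subcarriers, re/im)
h_est = model(pilots)                           # (8, 64, 2) channel estimates
model.head = nn.Linear(128, 4)                  # re-target to 16-QAM demapping (4 bits per symbol)
llrs = model(pilots)                            # (8, 64, 4) bit-level outputs
```

The design choice the abstract emphasizes is that the attention backbone is reused unchanged across receiver subsystems; only the final projection (and, presumably, the training targets) differs per task.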
