
PUSCH Processing in Uplink Communications

Updated 19 August 2025
  • PUSCH processing is a multi-stage physical layer operation that enables uplink communication through modulation, coding, and waveform mapping.
  • It integrates sophisticated channel estimation, adaptive link adaptation, and hardware acceleration to optimize spectral efficiency and reduce PAPR.
  • Emerging AI-driven and neural receiver architectures further enhance processing efficiency for massive MIMO and multiuser environments.

Physical Uplink Shared Channel (PUSCH) processing refers to the set of baseband and radio frequency signal processing operations employed in cellular uplink communications, in which a user equipment (UE) transmits user data over the PUSCH to the network (e.g., eNodeB in LTE-A, gNB in NR). PUSCH processing encompasses transmission-side procedures (modulation, coding, precoding, SC-FDM/OFDM, pilot insertion), propagation effects (fading, mobility, noise), and receiver-side operations (synchronization, channel estimation, equalization, demodulation, decoding, link adaptation, resource allocation). The complexity of these stages, especially under stringent time and energy constraints for massive MIMO, edge deployments, and multiuser support, has driven extensive research in both traditional and AI-driven physical layer designs.

1. End-to-End Physical Layer Processing Chains

PUSCH transmission begins with CRC attachment and channel coding—LDPC in NR, Turbo codes in LTE-A—followed by rate matching, bit interleaving, and scrambling. Modulation maps bits to complex constellation symbols; typical schemes include QPSK, 16-QAM, 64-QAM, and 256-QAM. Demodulation Reference Signals (DMRS) are inserted for coherent detection, based on Zadoff–Chu sequences with layer-orthogonal cyclic shifts in LTE-A (Zöchmann et al., 2015), or flexible pilot patterns in NR (Cisek et al., 2019). The precoding stage uses codebook-based matrix selection to exploit MIMO spatial diversity.
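To make the ordering of these bit-level stages concrete, the following Python/NumPy sketch scrambles a codeword and maps bit pairs to Gray-coded QPSK symbols. The scrambler uses a generic PRNG as a stand-in for the standardized length-31 Gold sequence, and all names and sizes are illustrative rather than conformant.

```python
import numpy as np

def scramble(bits, c_init):
    """XOR the coded bits with a pseudo-random sequence. A generic PRNG stands in
    for the 3GPP length-31 Gold-sequence generator (illustrative only)."""
    rng = np.random.default_rng(c_init)
    c = rng.integers(0, 2, size=bits.size)
    return bits ^ c

def qpsk_modulate(bits):
    """Gray-mapped QPSK: d = ((1 - 2*b0) + 1j*(1 - 2*b1)) / sqrt(2)."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

coded_bits = np.random.randint(0, 2, size=1200)   # toy codeword after coding/rate matching
tx_symbols = qpsk_modulate(scramble(coded_bits, c_init=0x1234))
```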

The physical signal is mapped onto resource elements (REs), multiplexed via SC-FDMA in LTE-A or OFDMA in NR. SC-FDMA uses DFT-spreading before IFFT and CP addition, yielding lower peak-to-average power ratio (PAPR) for uplink (Zöchmann et al., 2015, Yli-Kaakinen et al., 2017). Receiver-side operations are performed in the reverse order: cyclic prefix removal, FFT, spatial combining (MRC, MMSE), and per-subcarrier, per-layer channel equalization.
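A minimal round-trip sketch of the DFT-spread OFDM (SC-FDMA) path and its receiver reversal, assuming a localized subcarrier mapping, an ideal channel, and arbitrary FFT/CP sizes chosen only for illustration:

```python
import numpy as np

def sc_fdma_modulate(data_syms, n_fft=1024, first_sc=100, cp_len=72):
    """DFT-spread OFDM transmit symbol: DFT precoding, localized RE mapping,
    IFFT, and cyclic-prefix insertion."""
    m = len(data_syms)
    spread = np.fft.fft(data_syms) / np.sqrt(m)            # DFT spreading
    grid = np.zeros(n_fft, dtype=complex)
    grid[first_sc:first_sc + m] = spread                   # localized subcarrier mapping
    time = np.fft.ifft(grid) * np.sqrt(n_fft)              # OFDM modulation
    return np.concatenate([time[-cp_len:], time])          # prepend cyclic prefix

def sc_fdma_demodulate(rx, m, n_fft=1024, first_sc=100, cp_len=72):
    """Receiver reversal: CP removal, FFT, RE de-mapping, IDFT de-spreading."""
    time = rx[cp_len:cp_len + n_fft]
    grid = np.fft.fft(time) / np.sqrt(n_fft)
    return np.fft.ifft(grid[first_sc:first_sc + m]) * np.sqrt(m)

# ideal-channel round trip recovers the data symbols
d = (np.random.randn(300) + 1j * np.random.randn(300)) / np.sqrt(2)
assert np.allclose(sc_fdma_demodulate(sc_fdma_modulate(d), m=300), d)
```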

A mathematical formulation of the SC-FDM uplink rate is

$R^{\mathrm{SC-FDM}} = N_{SC} \sum_{l=1}^{L} \log_2(1 + \operatorname{SINR}^{\mathrm{SC-FDM},(l)}),$

where $\operatorname{SINR}^{\mathrm{SC-FDM},(l)}$ is the post-equalization SINR for layer $l$, explicitly capturing the DFT-spreading averaging:

$\operatorname{SINR}^{\mathrm{SC-FDM},(l)} = \frac{ \frac{\sigma_x^2}{N_{SC}} \left| \mathds{1}_{N_{SC}}^T S^{(l)} d(F H_{\mathrm{eff}}) \right|^2 }{ \sigma_x^2 \| S^{(l)} F H_{\mathrm{eff}} \|_F^2 - \frac{\sigma_x^2}{N_{SC}} \left| \mathds{1}_{N_{SC}}^T S^{(l)} d(F H_{\mathrm{eff}}) \right|^2 + \sigma_n^2 \| S^{(l)}F \|_F^2 }$

(Zöchmann et al., 2015). For OFDM, SINR is evaluated per subcarrier.
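These expressions can be evaluated directly once the selection matrix $S^{(l)}$, the DFT matrix $F$, and the effective channel $H_{\mathrm{eff}}$ are in hand. The sketch below transcribes them into NumPy; the assumption that $F H_{\mathrm{eff}}$ is square (so its diagonal $d(\cdot)$ is well defined) and the variable names are ours, not the simulator's API.

```python
import numpy as np

def sc_fdm_layer_sinr(S_l, F, H_eff, sigma_x2, sigma_n2):
    """Direct transcription of the per-layer SC-FDM SINR expression above.
    Assumes F @ H_eff is square so that d(.) (its diagonal) is well defined,
    and that S_l has N_SC rows selecting layer l's subcarriers."""
    n_sc = S_l.shape[0]
    dvec = S_l @ np.diag(F @ H_eff)                            # S^(l) d(F H_eff)
    signal = (sigma_x2 / n_sc) * np.abs(np.sum(dvec)) ** 2     # |1^T (.)|^2 term
    total = sigma_x2 * np.linalg.norm(S_l @ F @ H_eff, 'fro') ** 2
    noise = sigma_n2 * np.linalg.norm(S_l @ F, 'fro') ** 2
    return signal / (total - signal + noise)

def sc_fdm_rate(layer_sinrs, n_sc):
    """R = N_SC * sum_l log2(1 + SINR^(l))."""
    return n_sc * np.sum(np.log2(1.0 + np.asarray(layer_sinrs)))
```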

2. Waveform Design and Filtering

SC-FDMA, mandated for LTE-A uplink, reduces PAPR via DFT-spreading, critical for battery-powered UEs. The Vienna LTE-A simulator models PAPR for discrete-time baseband signals with

$\operatorname{PAPR}\{s_{\rm tx}\} \approx \frac{N_T N_{\mathrm{FFT}}\,\|\,d(s_{\rm tx} s_{\rm tx}^H)\|_\infty}{\|s_{\rm tx}\|_2^2}$

(Zöchmann et al., 2015).
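Because $d(s_{\rm tx} s_{\rm tx}^H)$ is simply the vector of per-sample powers $|s_i|^2$, the expression reduces to a peak-over-average power ratio. A small sketch follows; with $N_T = 1$ and $N_{\mathrm{FFT}}$ equal to the signal length, the prefactor cancels against the sum in the denominator and the familiar max-over-mean form remains.

```python
import numpy as np

def papr(s_tx, n_t=1, n_fft=None):
    """PAPR per the expression above: N_T * N_FFT * max_i |s_i|^2 / ||s_tx||_2^2.
    With n_fft = len(s_tx) and n_t = 1 this is the usual peak-to-average power ratio."""
    s = np.asarray(s_tx).ravel()
    if n_fft is None:
        n_fft = s.size
    return n_t * n_fft * np.max(np.abs(s) ** 2) / np.sum(np.abs(s) ** 2)

def papr_db(s_tx, **kwargs):
    """Same quantity in dB."""
    return 10.0 * np.log10(papr(s_tx, **kwargs))
```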

In 5G NR, FC-F-OFDM and other subband-filtered CP-OFDM schemes have emerged (Yli-Kaakinen et al., 2017). Fast convolution (FC) employs efficient frequency-domain filtering that suppresses out-of-band emissions, supports asynchronous operation, and allows for independent numerologies. Each subband signal is filtered using a diagonal weighting matrix and mapped in frequency, permitting high spectral localization with minimal guardbands:

$F_{m, r} = S_N W_N^{-1} M_{m, r} D_m P^{L_m/2}_{L_m} W_{L_m}$

Optimization minimizes worst-case passband EVM and constrains stopband attenuation. FC filtering is computationally efficient and allows independent TX/RX deployments in multiuser uplink settings (Yli-Kaakinen et al., 2017).
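The sketch below mimics the structure of this synthesis matrix for a single block: a short forward FFT, diagonal weighting, frequency mapping into the wideband grid, and a long inverse FFT. The overlap/selection terms ($P$, $S_N$) and the EVM-optimized weight design are omitted, so the weights shown are only a crude illustration of band-edge tapering.

```python
import numpy as np

def fc_subband_block(x_m, d_m, n_fft, bin_offset):
    """Single-block fast-convolution filtering: short FFT (W_{L_m}), diagonal
    weighting (D_m), mapping onto the N-point wideband grid (M_{m,r}),
    and a long inverse FFT (W_N^{-1}) back to the time domain."""
    L = len(x_m)
    X = np.fft.fft(x_m) * d_m                    # forward transform and weighting
    k = np.arange(L)
    f = np.where(k < L // 2, k, k - L)           # signed subband bin indices
    grid = np.zeros(n_fft, dtype=complex)
    grid[(f + bin_offset) % n_fft] = X           # place the subband around its allocated offset
    return np.fft.ifft(grid) * (n_fft / L)       # interpolating inverse transform

# crude band-edge taper as the diagonal weights (a real design optimizes EVM/attenuation)
L_m = 64
d_m = np.ones(L_m)
taper = np.linspace(1.0, 0.0, 6)[1:-1]           # 0.8, 0.6, 0.4, 0.2
d_m[L_m // 2 - 4:L_m // 2] = taper               # taper toward the upper band edge
d_m[L_m // 2:L_m // 2 + 4] = taper[::-1]         # recover from the lower band edge
y = fc_subband_block(np.random.randn(L_m) + 1j * np.random.randn(L_m), d_m, n_fft=1024, bin_offset=256)
```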

3. Channel Estimation and Reference Signal Design

Channel estimation is crucial for coherent demodulation and MIMO detection. LTE-A and NR employ DMRS, typically Zadoff–Chu-based, with properties:

$|\bar{r}_k| = 1,\quad R^{(l)} = T^{(l)}\bar{r}$

Orthogonality across layers:

$(R^{(l)})^H R^{(u)} = \begin{cases} N_{SC}, & u = l \\ 0, & u \neq l \end{cases}$

(Zöchmann et al., 2015). At the receiver, matched filtering is used:

$\tilde{h}^{(i, l)} = (R^{(l)})^H y^{(i)}$

Post-correlation, DFT-based windowing and smoothing methods (e.g., quadratic smoothing) reduce estimator MSE and inter-layer interference.
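A common realization of this matched-filter-plus-windowing estimator is sketched below: generate the Zadoff-Chu base sequence, apply a layer-specific cyclic shift as a frequency-domain phase ramp, de-rotate the received pilots, and window in the delay domain to isolate one layer. Shift values, window width, and the assumption that the delay spread fits inside one cyclic-shift window are illustrative.

```python
import numpy as np

def zadoff_chu(n_zc, root=25):
    """Constant-amplitude Zadoff-Chu base sequence (|r_k| = 1)."""
    k = np.arange(n_zc)
    return np.exp(-1j * np.pi * root * k * (k + 1) / n_zc)

def layer_pilot(base, layer, n_layers):
    """Layer-orthogonal cyclic shift, applied as a frequency-domain phase ramp."""
    alpha = 2.0 * np.pi * layer / n_layers
    return base * np.exp(1j * alpha * np.arange(len(base)))

def estimate_layer_channel(y, pilot_l, n_layers):
    """Matched filtering (conjugate-pilot de-rotation) followed by delay-domain
    windowing that keeps only the de-rotated layer's cyclic-shift window."""
    raw = np.conj(pilot_l) * y                   # (R^(l))^H applied element-wise
    cir = np.fft.ifft(raw)                       # delay domain: layers occupy disjoint windows
    window = np.zeros(len(y))
    window[:len(y) // n_layers] = 1.0            # assumes the delay spread fits in one window
    return np.fft.fft(cir * window)              # smoothed per-subcarrier channel estimate
```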

Advanced architectures (NR and edge-AI) use hybrid LS/MMSE channel estimation, cubic spline interpolation between pilot positions, and iterative data-aided (DA-LS) correction (Cisek et al., 2019, Abdollahpour et al., 18 Aug 2025). Model-driven neural receivers incorporate learnable positional encoding to refine channel knowledge and suppress residual interference (Abdollahpour et al., 18 Aug 2025).
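A minimal sketch of the pilot-based LS estimate with cubic-spline interpolation across the subcarrier axis (using SciPy's CubicSpline); the MMSE smoothing stage and the iterative data-aided correction are omitted here.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def ls_pilot_estimate(y_pilot, x_pilot):
    """Least-squares estimate at the DMRS resource elements."""
    return y_pilot / x_pilot

def spline_interpolate(pilot_idx, h_pilot, n_sc):
    """Cubic-spline interpolation of pilot estimates over all subcarriers,
    with real and imaginary parts interpolated separately."""
    sc = np.arange(n_sc)
    return CubicSpline(pilot_idx, h_pilot.real)(sc) + 1j * CubicSpline(pilot_idx, h_pilot.imag)(sc)
```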

4. Equalization, Demodulation, and Detection

Equalization compensates for channel and MIMO effects and is performed via ZF, MMSE, or neural techniques. MMSE detection for MIMO is formulated as

$\hat{x}_{k, s} = (\hat{H}_{k, s}^H \hat{H}_{k, s} + \sigma_{k, s}^2 I)^{-1} \hat{H}_{k, s}^H y_{k, s}$

(Bertuletti et al., 8 Aug 2025). Demodulation employs LLR computation for soft input to LDPC decoders:

$L(b) \approx -\frac{1}{\sigma^2} \left[ \min_{s_0\in S_0}|x - s_0|^2 - \min_{s_1\in S_1}|x - s_1|^2 \right]$

(Cisek et al., 2019). Neural receivers leverage convolutional and message-passing modules to jointly perform channel estimation, equalization, and demapping, achieving competitive TBLER with substantial complexity reduction (Cammerer et al., 2023, Abdollahpour et al., 18 Aug 2025).
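The two operations combine naturally per resource element, as in the sketch below: MMSE equalization followed by max-log LLR computation against a labeled constellation. The QPSK labeling shown is an assumption chosen to match the modulator sketch in Section 1, not a normative mapping.

```python
import numpy as np

def mmse_detect(y, H, noise_var):
    """Per-RE MMSE detection, x_hat = (H^H H + sigma^2 I)^(-1) H^H y."""
    n_tx = H.shape[1]
    G = H.conj().T @ H + noise_var * np.eye(n_tx)
    return np.linalg.solve(G, H.conj().T @ y)

def max_log_llr(x_eq, constellation, bit_labels, noise_var):
    """Max-log LLRs for one equalized symbol, following the sign convention of the
    formula above: L(b) = -(1/sigma^2) [min_{s in S_0}|x-s|^2 - min_{s in S_1}|x-s|^2]."""
    d2 = np.abs(x_eq - constellation) ** 2
    llrs = np.empty(bit_labels.shape[1])
    for b in range(bit_labels.shape[1]):
        min_s0 = d2[bit_labels[:, b] == 0].min()
        min_s1 = d2[bit_labels[:, b] == 1].min()
        llrs[b] = -(min_s0 - min_s1) / noise_var
    return llrs

# QPSK constellation/labeling consistent with the modulator sketch in Section 1
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
qpsk_bits = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
```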

5. Link Adaptation and Multiuser Scheduling

Adapting the MCS, precoding, and transmission rank to channel quality is central to efficient uplink operation. Algorithms estimate per-resource-block mutual information (MI), the rank indicator (RI), and the transmit precoding matrix indicator (TPMI), optimizing the sum rate:

$\hat{W}(L) = \arg\max_{W \in \mathcal{W}_L} \sum_{l=1}^L f(\operatorname{SINR}^{\mathrm{SC-FDM},(l)}(W))$

(Zöchmann et al., 2015). In scenarios with power density offsets (PDO) between PUSCH and SRS, link adaptation algorithms scale the channel matrix to estimate MI at multiple PDO reference points, interpolating for actual PDOs to select the best MCS (Sun et al., 2020). This approach yields notable BLER and throughput improvements across wide PDO/SNR ranges.
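A compact way to realize this selection is to sweep a candidate codebook and score each precoder by its post-MMSE sum rate, as sketched below. For brevity the sketch uses a per-resource-element MMSE SINR with $f = \log_2(1 + \cdot)$ rather than the SC-FDM-averaged SINR defined in Section 1, and the codebook is an arbitrary list of matrices supplied by the caller.

```python
import numpy as np

def per_layer_mmse_sinr(H_eff, noise_var):
    """Post-MMSE SINR per layer for an effective channel H_eff = H @ W (unit-power
    layers assumed): SINR_l = 1 / MMSE_l - 1, with MMSE_l the l-th diagonal entry of
    sigma^2 (H_eff^H H_eff + sigma^2 I)^(-1)."""
    n_layers = H_eff.shape[1]
    G = H_eff.conj().T @ H_eff + noise_var * np.eye(n_layers)
    mmse_diag = noise_var * np.real(np.diag(np.linalg.inv(G)))
    return 1.0 / mmse_diag - 1.0

def select_precoder(H, codebook, noise_var):
    """Pick W maximizing sum_l log2(1 + SINR^(l)(W)) over a candidate codebook,
    mirroring the argmax above with f = log2(1 + .)."""
    return max(codebook, key=lambda W: np.sum(np.log2(1.0 + per_layer_mmse_sinr(H @ W, noise_var))))
```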

Multiuser and multi-base-station scenarios motivate advanced scheduling, coordinated multipoint reception (CoMP), and full 3D channel model support; ongoing research is extending simulation environments to cover these scenarios (Zöchmann et al., 2015).

6. Hardware Acceleration and Software-Defined Implementations

Large-scale PUSCH processing is computationally intensive, requiring acceleration strategies. GPU-based designs parallelize LS channel estimation and antenna combining (MRC), exploiting thread/block architectures to accelerate processing for massive MIMO (Gokalgandhi et al., 2019). Many-core RISC-V clusters (MemPool, TeraPool) with 256/1024 cores and shared L1 memory parallelize FFT, matrix-matrix multiplication, and Cholesky-based matrix decomposition, achieving speedups up to 880× and meeting strict latency (sub-millisecond) requirements (Bertuletti et al., 2022). Domain-specific FP extensions further enhance throughput and efficiency (e.g., 66 Gbps/5.5 W for full PUSCH) (Bertuletti et al., 8 Aug 2025). Fork–join SPMD scheduling and memory folding techniques optimize data flow and minimize bank conflicts.
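These kernels map well onto parallel hardware because they are independent across subcarriers; for example, per-subcarrier MRC combining reduces to an element-wise weighted sum over antennas, as in this NumPy sketch of the arithmetic that the cited designs distribute over GPU threads or RISC-V cores:

```python
import numpy as np

def mrc_combine(Y, H_hat):
    """Maximum-ratio combining across receive antennas, per subcarrier:
    x_hat[k] = sum_a conj(H[a, k]) * Y[a, k] / sum_a |H[a, k]|^2."""
    return np.sum(np.conj(H_hat) * Y, axis=0) / np.sum(np.abs(H_hat) ** 2, axis=0)
```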

AI-assisted receivers incorporate model-driven neural architectures for channel estimation and edge deployment, reducing FLOPs and memory requirements by large factors relative to prior deep learning approaches (e.g., 66× fewer FLOPs, 396× fewer parameters for MU-MIMO receivers) (Abdollahpour et al., 18 Aug 2025).

7. Advanced Applications and Emerging Directions

PUSCH transmissions are increasingly leveraged for integrated sensing and communication (ISaC). In bistatic 5G NR ISaC, the receiver reuses DMRS and decoded data REs to estimate target delay and Doppler via maximum likelihood methods, with Fisher-information-based CRLBs characterizing estimation accuracy. The tradeoff between pilot allocation and data throughput is analytically quantified (Gangula et al., 18 May 2025). Multiuser uplink in doubly-spread channels is addressed by Zak-OTFS, a delay-Doppler (DD) domain modulation framework enabling non-overlapping, flexible TF resource assignment without guard bands. Zak-OTFS's predictable input-output relation, minimal multiuser interference, and robust channel estimation—characterized by negligible interference leakage and near single-user BER—make it an attractive candidate for future high-mobility, high-capacity uplink processing (Khan et al., 21 Jul 2025).
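The core of such a receiver-side estimator can be pictured as a matched-filter grid search over delay and Doppler hypotheses applied to the known (DMRS or re-encoded data) resource elements. The sketch below illustrates that idea only; the phase model, grid resolution, and function names are our assumptions, not the cited paper's exact estimator, and CRLB evaluation is omitted.

```python
import numpy as np

def delay_doppler_ml(Y, X, delays, dopplers, scs, sym_dur):
    """Coarse ML grid search for bistatic delay/Doppler: correlate the received
    resource grid Y with the known DMRS/decoded symbols X, phase-rotated by the
    hypothesized delay (across subcarriers, spacing `scs`) and Doppler (across
    OFDM symbols, duration `sym_dur`). Returns the grid point with the largest
    correlation magnitude."""
    k = np.arange(Y.shape[0])[:, None]              # subcarrier index
    m = np.arange(Y.shape[1])[None, :]              # OFDM symbol index
    best, best_metric = None, -np.inf
    for tau in delays:
        for nu in dopplers:
            phase = np.exp(-2j * np.pi * (k * scs * tau - m * sym_dur * nu))
            metric = np.abs(np.vdot(X * phase, Y)) ** 2
            if metric > best_metric:
                best_metric, best = metric, (tau, nu)
    return best
```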

Studies of non-terrestrial networks (NTN) and satellite return links show that 5G NR PUSCH outperforms legacy DVB-RCS2 in spectral efficiency and throughput due to dynamic resource allocation, relaxed BLER targets, and efficient scheduling—contrasting sharply with the static frame structure and strict SINR-to-FER operating points of DVB-RCS2 (Sormunen et al., 19 Feb 2025).


Physical Uplink Shared Channel (PUSCH) processing encompasses a highly optimized, multi-stage physical layer chain whose evolution—driven by SC-FDMA/OFDM waveform engineering, advanced channel estimation, flexible link adaptation, and scalable hardware/software acceleration—enables robust, high-throughput uplink communication across terrestrial, satellite, and emerging sensing domains. The integration of AI-driven edge receivers and advanced non-orthogonal waveforms (like Zak-OTFS), coupled with resilience enhancements against adversarial attacks and dynamic scheduling frameworks, continues to set new standards for uplink performance in next-generation cellular networks.