Sparse Code Multiple Access (SCMA)
- SCMA is a non-orthogonal multiple access scheme that maps user data onto sparse, multi-dimensional codewords, enabling massive connectivity.
- It employs optimized codebooks and factor graph-based message passing algorithms to balance throughput, latency, and receiver complexity.
- SCMA is adaptable to various frameworks such as grant-free access, OTFS modulation, and adaptive modulation for future wireless applications.
Sparse Code Multiple Access (SCMA) is a code-domain non-orthogonal multiple access (NOMA) scheme that enables massive connectivity and high spectral efficiency in 5G and future wireless networks. SCMA achieves this by mapping user data directly onto sparse, multi-dimensional codewords drawn from user-specific codebooks, allowing multiple users to share physical resources in a highly overloaded manner while maintaining tractable receiver complexity via message passing detection that exploits codeword sparsity. SCMA supports flexible system trade-offs between throughput, latency, receiver complexity, and reliability and is extensible to hybrid NOMA frameworks and next-generation wireless contexts.
1. Fundamentals and System Model
SCMA generalizes CDMA by mapping $\log_2 M$ bits directly into $K$-dimensional sparse codewords with only $N < K$ nonzero entries, rather than spreading symbols along dense sequences. For an uplink/downlink block, $J$ users (layers) simultaneously transmit on $K$ orthogonal resource elements (REs), yielding an overloading factor $\lambda = J/K > 1$ (Taherzadeh et al., 2014, Wei et al., 2020, Yu et al., 2021). Each user employs a codebook of $M$ codewords, where each codeword is assigned a specific 'support pattern': a binary indicator that specifies the nonzero positions.
The received signal model is

$$
\mathbf{y} = \sum_{j=1}^{J} \operatorname{diag}(\mathbf{h}_j)\,\mathbf{x}_j + \mathbf{n},
$$

where $\mathbf{h}_j$ denotes the channel gains for user $j$, $\mathbf{x}_j$ is the transmitted codeword of user $j$, and $\mathbf{n}$ is additive white Gaussian noise. In the multiuser scenario, the system is succinctly represented by a bipartite factor graph, where user nodes (VNs) are connected to resource nodes (FNs) via edges defined by the codeword sparsity (Taherzadeh et al., 2014, Chaturvedi et al., 2022).
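As a concrete illustration, the per-RE superposition can be sketched with an indicator (factor graph) matrix. The 6-user/4-RE layout below is the classic SCMA example; the symbol and channel values are random placeholders, not a real codebook:

```python
import numpy as np

# F[k, j] = 1 iff user j occupies resource element k. Each column has N = 2
# ones (codeword sparsity) and each row has d_f = 3 ones (colliding users).
F = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

rng = np.random.default_rng(0)
J, K = 6, 4
# Toy "codewords": each user's K-dim vector is supported on its column of F.
x = (rng.standard_normal((K, J)) + 1j * rng.standard_normal((K, J))) * F
h = rng.standard_normal((K, J)) + 1j * rng.standard_normal((K, J))  # channel gains
noise = 0.1 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))

# y_k = sum_j h_{k,j} x_{k,j} + n_k  -- superposition on each RE
y = (h * x).sum(axis=1) + noise
print(y.shape)  # (4,)
```

Each RE observes only the $d_f = 3$ users whose codewords have a nonzero entry there, which is exactly the sparsity the message passing receiver exploits.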
2. SCMA Codebook Design and Factor Graph Construction
Central to SCMA is the design of multi-dimensional sparse codebooks that maximize system performance, particularly under realistic channel conditions (Wei et al., 2020, Lei et al., 2024, Li et al., 2020, Taherzadeh et al., 2014). Each user's codebook is constructed via permutation and rotation of a lattice-based or constellation-based mother codebook (e.g., rotated QAM, Star-QAM, low-projection-PAM). The codebook design objectives include:
- Maximizing minimum Euclidean distance among all superimposed codewords to minimize pairwise error probability, significant at moderate SNR.
- Maximizing minimum product distance for fading/ergodic channels to ensure diversity (Li et al., 2020, Lei et al., 2024).
- Shaping gain: Utilizing non-cubic, rotationally-optimized constellations for improved performance over simple QAM repetition (as in LDS).
- Control of codeword sparsity: The support pattern matrix $\mathbf{F}$ is constructed so that each column (user) has $N$ ones and each row (RE) has $d_f$ ones, where $d_f = JN/K$ (Yu et al., 2021, Wei et al., 2020, Taherzadeh et al., 2014).
- Power imbalance: Introducing varying energy per nonzero dimension to further maximize the minimum distance in the superimposed constellation, especially in downlink (Li et al., 2020).
Optimization methods for codebook construction include sequential quadratic programming (Lei et al., 2024), genetic algorithms (Li et al., 2020), differential evolution (Deka et al., 2020), and deep-learning-based autoencoders (Luo et al., 2022, Lin et al., 2019). The factor graph defines the message passing schedule in the receiver and governs user-per-resource collision properties.
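A minimal sketch of the permutation/rotation idea, assuming a hypothetical unit-power 4-PAM mother constellation and an illustrative per-user phase rotation (this is not a published codebook; the layout and rotation rule are placeholders):

```python
import numpy as np
from itertools import product

M = 4                                            # codebook size (2 bits/codeword)
mother = np.array([-3, -1, 1, 3]) / np.sqrt(5)   # unit-power 4-PAM mother constellation

def user_codebook(support, theta, K=4):
    """Place rotated mother symbols on the REs in `support` (assumed layout)."""
    cb = np.zeros((M, K), dtype=complex)
    for m, s in enumerate(mother):
        for i, k in enumerate(support):
            cb[m, k] = s * np.exp(1j * theta * (i + 1))  # per-dimension rotation
    return cb

# One user's codebook, occupying REs 0 and 1 with an illustrative rotation:
cb = user_codebook(support=(0, 1), theta=np.pi / M)

# Minimum Euclidean distance within this codebook (a design metric from above):
dists = [np.linalg.norm(cb[a] - cb[b])
         for a, b in product(range(M), repeat=2) if a != b]
print(f"min Euclidean distance: {min(dists):.3f}")
```

A full design would evaluate such metrics over the *superimposed* codewords of all users and search the rotation/permutation parameters, which is exactly where the optimization methods listed above come in.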
3. Message Passing Algorithms and Detection
Multiuser detection in SCMA leverages the sparsity of codewords by employing the message passing algorithm (MPA) over the constructed factor graph (Taherzadeh et al., 2014, Wei et al., 2020, Chaturvedi et al., 2022). Belief propagation alternates between:
- FN-to-VN update (function/resource node to user node): For each resource, the message is a sum-product (or max-product in the log-domain) marginalization over all possible symbol combinations for users colliding on that RE, weighted by the observation likelihood and prior messages.
- VN-to-FN update: For each user, the outgoing message is the product of incoming messages (excluding the current resource node), reflecting extrinsic information.
The per-iteration complexity is $O(M^{d_f})$ per function node, where $d_f$ is the per-resource degree and $M$ is the constellation (codebook) size. For moderate $d_f$ and $M$, the complexity remains feasible even with substantial overloading.
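The FN-to-VN marginalization described above can be sketched for a single resource element; the symbol values, channel gains, and noise variance below are illustrative placeholders, and only one update (with uniform incoming priors) is shown:

```python
import numpy as np
from itertools import product

M, d_f = 4, 3                 # codebook size and per-resource degree
rng = np.random.default_rng(1)
symbols = rng.standard_normal((d_f, M)) + 1j * rng.standard_normal((d_f, M))
h = rng.standard_normal(d_f) + 1j * rng.standard_normal(d_f)  # channel per user
y_k = (h * symbols[np.arange(d_f), 0]).sum() + 0.05           # observed RE value
sigma2 = 0.1
prior = np.full((d_f, M), 1.0 / M)   # incoming VN-to-FN messages (uniform here)

def fn_to_vn(target):
    """Marginalize the likelihood over all symbol combos of the other users."""
    msg = np.zeros(M)
    for combo in product(range(M), repeat=d_f):   # M^{d_f} combinations
        s = sum(h[j] * symbols[j, combo[j]] for j in range(d_f))
        like = np.exp(-abs(y_k - s) ** 2 / sigma2)
        w = like * np.prod([prior[j, combo[j]] for j in range(d_f) if j != target])
        msg[combo[target]] += w
    return msg / msg.sum()

m0 = fn_to_vn(0)   # belief over user 0's symbols from this resource node
print(np.round(m0, 3))
```

The inner loop over `M ** d_f` combinations is the source of the $O(M^{d_f})$ per-node cost; the variants below all attack this enumeration.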
Variants to further reduce complexity include:
- List sphere decoding (LSD): Replaces full marginalization with a sphere search in the lattice space, with pruning and complexity tunability (Wei et al., 2020).
- Max-Log and log-domain approximations: Substitute sum-exponentials in MPA with max-operations and limited Jacobian corrections (Zhang et al., 2018, Xiao et al., 2015).
- Early termination, schedule-based, and variable alphabet truncation (Zhang et al., 2018, Chaturvedi et al., 2022).
- Deep learning-based decoders: DNNs or autoencoders can learn SCMA decoding and sometimes jointly optimize codebooks and decoding mappings, yielding improved BER-SNR efficiency (Luo et al., 2022, Lin et al., 2019).
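The Max-Log substitution and its Jacobian correction can be illustrated numerically; the log-likelihood values below are arbitrary:

```python
import numpy as np

# In the log domain, MPA needs log(sum_i exp(a_i)). Max-Log replaces it with
# max_i a_i; the pairwise Jacobian logarithm restores exactness via a
# correction term log(1 + exp(-|x - y|)).
a = np.array([-1.2, -3.5, -0.4, -7.0])   # illustrative log-likelihood terms

exact = np.log(np.sum(np.exp(a)))        # full log-sum-exp
maxlog = np.max(a)                       # Max-Log approximation (a lower bound)

def jacobian(x, y):
    """Exact pairwise log(e^x + e^y) = max(x, y) + correction."""
    return max(x, y) + np.log1p(np.exp(-abs(x - y)))

corrected = a[0]
for v in a[1:]:
    corrected = jacobian(corrected, v)   # iterating pairs recovers log-sum-exp

print(f"exact={exact:.4f}  max-log={maxlog:.4f}  corrected={corrected:.4f}")
```

Hardware-friendly decoders typically keep only the max term (or a small lookup table for the correction), trading a fraction of a dB for large arithmetic savings.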
4. System Performance, Overloading, and Hybrid SCMA-NOMA
SCMA achieves high spectral efficiency and user connectivity:
- Overloading: Systems routinely support $\lambda$ in the range 150–200% (i.e., serving 1.5–2× as many users as there are physical resources). For example, $J = 6$ users over $K = 4$ dimensions gives $\lambda = 150\%$ (Taherzadeh et al., 2014, Yu et al., 2021, Sharma et al., 2023). Spectral efficiency scales with overloading as $\lambda \log_2 M$ bits/s/Hz per resource element.
- Error-rate performance: Well-designed codebooks yield shaping gains of 1–2 dB over LDS at the same loading, and up to 3 dB over OFDMA/SC-FDMA at BLER operating points of practical interest (Taherzadeh et al., 2014, Yu et al., 2021).
- Hybrid access: SCMA is synergistically combined with power-domain NOMA (HMA), multi-user CoMP, and other NOMA/OMA coexistence mechanisms for massive user support and flexible rate adaptation (Sharma et al., 2023, Nikopour et al., 2014, Luo et al., 2024, Vilaipornsawai et al., 2015).
- Phase noise: Phase-noise-resilient codebook designs (via optimized metrics such as MPNM) mitigate PN-induced detection errors, increasing robustness under practical RF conditions (Liu et al., 2025).
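The overloading and spectral-efficiency arithmetic for the classic configuration works out as follows (a small sanity-check sketch):

```python
import math

# J = 6 users share K = 4 REs; each user has an M = 4 codebook (2 bits/codeword).
J, K, M = 6, 4, 4
overloading = J / K                    # lambda = 1.5, i.e., 150%
bits_per_re = J * math.log2(M) / K     # lambda * log2(M) bits/s/Hz per RE
print(overloading, bits_per_re)        # 1.5 3.0
```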
5. Advanced Frameworks: Grant-Free Access, OTFS-SCMA, and Adaptive Modulation
SCMA is deployed in grant-free massive access, 6G architectures, and high-mobility contexts:
- Grant-Free SCMA: Designed for uplink mMTC, where user activity is random and unknown a priori. Joint belief propagation and expectation propagation enable joint activity detection and decoding, often resolving resource collisions with ACK-feedback loops and iterative remapping (Wei et al., 2020).
- OTFS-SCMA: SCMA codebooks are applied as inner code to OTFS modulation for robust high-mobility communications. Channel estimation via convolutional sparse coding exploits codebook-induced pilot sparsity, allowing scalable estimation overhead independent of user count (Thomas et al., 2021).
- Variable/Adaptive Modulation SCMA (VM-SCMA/AVM-SCMA): Permits per-user modulation order/codebook size, adaptive to user channel conditions and targeted throughput. Optimization jointly addresses codebook assignment, power, and mapping under average inverse product distance and throughput bounds (Luo et al., 2024).
- Visible-Light and mmWave SCMA: Extensions to real-valued, nonnegative or beam-space codewords for optical or directional mmWave networks (Yu et al., 2021).
6. Hardware Architectures and Implementation
The structured sparsity of SCMA codebooks and message passing makes it amenable to low-complexity hardware implementation:
- Deterministic message passing (DMPA): VLSI architectures exploit pipelining and folding to achieve multi-Gbps throughput with sub-10 μs latency for moderate-size systems, meeting 3GPP eMBB requirements (Zhang et al., 2018). Max-Log and related approximation techniques permit substantial arithmetic reduction versus full floating-point MPA.
- Parallelization: The resource and layer node computations map to pipeline and SIMD structures.
- Early termination, damping, domain-specific acceleration: These further minimize energy and computation for wireless hardware.
7. Research Directions and Open Challenges
Continued research focuses on multiple axes:
- Large-scale codebook optimization: For high-dimensional SCMA, scalable combinatorial or ML-based search for optimal codebooks remains open (Chaturvedi et al., 2022, Luo et al., 2022).
- Unified blind and grant-free detection: Integrating activity, channel, and data inference in compressed-sensing–like or probabilistic frameworks (Wei et al., 2020).
- Synchronization and asynchrony: Robust MPA against timing/frequency offsets and phase noise (Liu et al., 2025, Chaturvedi et al., 2022).
- Cross-layer resource allocation: Dynamic codebook/user-layer mapping, scheduling, and joint SCMA/OMA operations under QoS and latency constraints (Nikopour et al., 2014, Luo et al., 2024).
- Deep learning integration: Large-scale, online adaptive autoencoder codebook/decoder pairs for evolving channel, interference, and hardware conditions (Luo et al., 2022, Lin et al., 2019).
- Grant-free and RIS-aided extensions: Design under intelligent reflecting surfaces and ultra-dense, distributed architectures (Yu et al., 2021).
- SCMA for high-mobility and new physical layers: SCMA-OTFS, visible light, and mmWave combinations for vehicular, IoT, and beyond-5G/6G scenarios (Yu et al., 2021, Thomas et al., 2021).
SCMA thus constitutes a principal code-domain NOMA solution for future wireless, with robust mathematical underpinnings, demonstrated practical performance, and a rich suite of open algorithmic and implementation research problems (Taherzadeh et al., 2014, Wei et al., 2020, Yu et al., 2021, Luo et al., 2022, Chaturvedi et al., 2022, Luo et al., 2024, Zhang et al., 2018, Xiao et al., 2015, Li et al., 2020, Lei et al., 2024, Liu et al., 2025, Thomas et al., 2021).