
High-Throughput VLSI Architecture for GRAND (2007.07328v1)

Published 14 Jul 2020 in cs.IT, cs.AR, and math.IT

Abstract: Guessing Random Additive Noise Decoding (GRAND) is a recently proposed universal decoding algorithm for linear error correcting codes. Since GRAND does not depend on the structure of the code, it can be used for any code encountered in contemporary communication standards, or even for random linear network coding. This property makes this new algorithm particularly appealing. Instead of trying to decode the received vector, GRAND attempts to identify the noise that corrupted the codeword. To that end, GRAND relies on the generation of test error patterns that are successively applied to the received vector. In this paper, we propose the first hardware architecture for the GRAND algorithm. Considering GRAND with ABandonment (GRANDAB), which limits the number of test patterns, the proposed architecture needs only $2+\sum_{i=2}^{n} \left\lfloor\frac{i}{2}\right\rfloor$ time steps to perform the $\sum_{i=1}^{3} \binom{n}{i}$ queries required when $\text{AB}=3$. For a code length of $128$, the number of time steps required by our proposed hardware architecture is only a fraction ($1.2\%$) of the total number of performed queries. Synthesis results using TSMC 65 nm CMOS technology show that average throughputs of $32$ Gbps to $64$ Gbps can be achieved at an SNR of $10$ dB for a code length of $128$ and code rates higher than $0.75$, transmitted over an AWGN channel. Comparisons with a decoder tailored for a $(79,64)$ BCH code show that the proposed architecture can achieve a slightly higher average throughput at high SNRs, while obtaining the same decoding performance.
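The abstract's core idea, guessing noise patterns in order of increasing Hamming weight until the syndrome vanishes, can be illustrated with a short software sketch. This is a hedged, illustrative model only: the paper proposes a parallel hardware architecture, and the function name, code, and structure below are our own assumptions, not the authors' implementation. The bookkeeping at the end reproduces the abstract's query and time-step counts for $n=128$, $\text{AB}=3$.

```python
# Illustrative software sketch of GRANDAB (abandonment after weight-AB
# patterns). The paper describes a parallel VLSI architecture; this serial
# loop is only a functional model under our own assumptions.
import itertools
import math
import numpy as np

def grandab(y, H, ab=3):
    """Apply test error patterns of Hamming weight 0..ab, in order of
    increasing weight, to the hard-decision vector y; return the first
    result with zero syndrome, or None if the decoder abandons."""
    n = len(y)
    for w in range(ab + 1):
        for idx in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=np.uint8)
            e[list(idx)] = 1
            c = y ^ e                      # apply the guessed noise pattern
            if not (H @ c % 2).any():      # zero syndrome => valid codeword
                return c
    return None                            # abandonment: no pattern found

# Bookkeeping from the abstract, for n = 128 and AB = 3:
n = 128
queries = sum(math.comb(n, i) for i in range(1, 4))  # total queries
steps = 2 + sum(i // 2 for i in range(2, n + 1))     # time steps
print(f"{steps / queries:.1%}")                      # ~1.2%, as stated
```

For example, with the parity-check matrix of the $(7,4)$ Hamming code, a single flipped bit in the all-zero codeword is found at weight $1$, since the first weight-1 pattern that flips the corrupted bit back yields a zero syndrome.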

Citations (40)
