
Fulcrum Network Codes: A Code for Fluid Allocation of Complexity (1404.6620v2)

Published 26 Apr 2014 in cs.IT, cs.NI, and math.IT

Abstract: This paper proposes Fulcrum network codes, a network coding framework that achieves three seemingly conflicting objectives: (i) to reduce the coding coefficient overhead to almost n bits per packet in a generation of n packets; (ii) to operate the network using only GF(2) operations at intermediate nodes if necessary, dramatically reducing complexity in the network; (iii) to deliver an end-to-end performance that is close to that of a high-field network coding system for high-end receivers while simultaneously catering to low-end receivers that decode in GF(2). As a consequence of (ii) and (iii), Fulcrum codes have a unique trait missing so far in the network coding literature: they provide the network with the flexibility to spread computational complexity over different devices depending on their current load, network conditions, or even energy targets in a decentralized way. At the core of our framework lies the idea of precoding at the sources using an expansion field GF(2^h) to increase the number of dimensions seen by the network using a linear mapping. Fulcrum codes can use any high-field linear code for precoding, e.g., Reed-Solomon, with the structure of the precode determining some of the key features of the resulting code. For example, a systematic structure provides the ability to manage heterogeneous receivers while using the same data stream. Our analysis shows that the number of additional dimensions created during precoding controls the trade-off between delay, overhead, and complexity. Our implementation and measurements show that Fulcrum achieves similar decoding probability as high field Random Linear Network Coding (RLNC) approaches but with encoders/decoders that are an order of magnitude faster.
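To make the "GF(2) operations at intermediate nodes" concrete, the sketch below implements the inner stage the abstract describes: plain GF(2) random linear network coding, where each coded packet is an XOR combination of input packets and the receiver recovers them by Gaussian elimination over GF(2). This is an illustrative sketch only, not the paper's implementation; the GF(2^h) precode that Fulcrum applies at the sources (e.g., Reed-Solomon, expanding n packets to n+r dimensions so that GF(2) rank failures become rare) is omitted, and all function names are invented for this example.

```python
import random

def gf2_rlnc_encode(packets, num_coded, rng=random):
    """Inner code: each coded packet is a random GF(2) (XOR) combination
    of the input packets, tagged with its binary coefficient vector.
    With binary coefficients the per-packet overhead is ~n bits."""
    n = len(packets)
    size = len(packets[0])
    coded = []
    for _ in range(num_coded):
        coeffs = [rng.randint(0, 1) for _ in range(n)]
        payload = bytearray(size)
        for c, p in zip(coeffs, packets):
            if c:
                for j in range(size):
                    payload[j] ^= p[j]
        coded.append((coeffs, bytes(payload)))
    return coded

def gf2_rlnc_decode(coded, n):
    """Gaussian elimination over GF(2). Returns the n original packets
    if the received coefficient vectors span the full space, else None."""
    basis = [None] * n  # basis[i]: row whose leading 1 is at column i
    for coeffs, payload in coded:
        coeffs, payload = list(coeffs), bytearray(payload)
        for col in range(n):
            if not coeffs[col]:
                continue
            if basis[col] is None:
                basis[col] = (coeffs, payload)
                break
            bc, bp = basis[col]  # reduce this row by the stored pivot row
            for j in range(n):
                coeffs[j] ^= bc[j]
            for j in range(len(payload)):
                payload[j] ^= bp[j]
    if any(b is None for b in basis):
        return None  # not yet full rank: need more coded packets
    # Back-substitute (right to left) to reduced row echelon form.
    for col in reversed(range(n)):
        c, p = basis[col]
        for col2 in range(col + 1, n):
            if c[col2]:
                c2, p2 = basis[col2]
                for j in range(n):
                    c[j] ^= c2[j]
                for j in range(len(p)):
                    p[j] ^= p2[j]
    return [bytes(basis[i][1]) for i in range(n)]
```

In a Fulcrum-style deployment, only this XOR stage runs inside the network; a high-end receiver would additionally invert the GF(2^h) precode to get the end-to-end performance of a high-field code, while a low-end receiver stays entirely in GF(2).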

Citations (37)
