On Packet Reordering in Time-Sensitive Networks (2008.03075v6)

Published 7 Aug 2020 in cs.NI

Abstract: Time-sensitive networks (IEEE TSN or IETF DetNet) may tolerate some packet reordering. Re-sequencing buffers are then used to provide in-order delivery, the parameters of which (timeout, buffer size) may affect worst-case delay and delay jitter. There is so far no precise understanding of per-flow reordering metrics nor of the dimensioning of re-sequencing buffers in order to provide worst-case guarantees, as required in such networks. First, we show that a previously proposed per-flow metric, reordering late time offset (RTO), determines the timeout value. If the network is lossless, another previously defined metric, the reordering byte offset (RBO), determines the required buffer. If packet losses cannot be ignored, the required buffer may be larger than RBO, and depends on jitter, an arrival curve of the flow at its source, and the timeout. Then we develop a calculus to compute the RTO for a flow path; the method uses a novel relation with jitter and arrival curve, together with a decomposition of the path into non order-preserving and order-preserving elements. We also analyse the effect of re-sequencing buffers on worst-case delay, jitter and propagation of arrival curves. We show in particular that, in a lossless (but non order-preserving) network, re-sequencing is "for free", namely, it does not increase worst-case delay nor jitter, whereas in a lossy network, re-sequencing increases the worst-case delay and jitter. We apply the analysis to evaluate the performance impact of placing re-sequencing buffers at intermediate points and illustrate the results on two industrial test cases.
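The abstract's two dimensioning claims (the re-sequencing timeout must cover the flow's reordering late time offset, RTO, and in a lossless network the buffer need only hold the reordering byte offset, RBO) can be illustrated with a minimal re-sequencing buffer sketch. The Python below is a hypothetical illustration only, not the paper's construction: the class name, the packet representation as (seq, arrival_time, size), and the policy of timing out on the oldest buffered packet are all assumptions made for this example.

```python
import heapq


class ResequencingBuffer:
    """Minimal re-sequencing buffer sketch (illustrative; not the paper's algorithm).

    Packets are (seq, arrival_time, size_bytes). In-order packets are released
    immediately; an out-of-order packet is held until the gap before it is
    filled or until it has waited `timeout` seconds, after which the missing
    packets are declared lost and delivery resumes from the held packet.
    """

    def __init__(self, timeout, max_bytes):
        self.timeout = timeout      # should cover the flow's worst-case RTO
        self.max_bytes = max_bytes  # >= RBO suffices if the network is lossless
        self.next_seq = 0           # next sequence number expected in order
        self.heap = []              # min-heap of (seq, arrival_time, size_bytes)
        self.buffered_bytes = 0
        self.peak_bytes = 0         # observed occupancy, to compare against max_bytes

    def push(self, seq, now, size):
        """Accept a packet at time `now`; return the packets released in order."""
        heapq.heappush(self.heap, (seq, now, size))
        self.buffered_bytes += size
        self.peak_bytes = max(self.peak_bytes, self.buffered_bytes)
        return self._drain(now)

    def _drain(self, now):
        released = []
        while self.heap:
            seq, arrived, size = self.heap[0]
            if seq == self.next_seq:
                pass                    # head of line is in order: release it
            elif now - arrived >= self.timeout:
                self.next_seq = seq     # timeout: give up on the missing packets
            else:
                break                   # keep waiting for the gap to be filled
            heapq.heappop(self.heap)
            self.buffered_bytes -= size
            released.append(seq)
            self.next_seq = seq + 1
        return released


# Toy trace: packet 0 arrives late, packet 2 is lost.
buf = ResequencingBuffer(timeout=2.0, max_bytes=3000)
print(buf.push(1, now=0.0, size=100))  # []      -> waiting for packet 0
print(buf.push(0, now=0.5, size=100))  # [0, 1]  -> gap filled, both released
print(buf.push(3, now=1.0, size=100))  # []      -> waiting for packet 2
print(buf.push(4, now=3.5, size=100))  # [3, 4]  -> timeout, packet 2 declared lost
```

Under the abstract's lossless-network claim, `peak_bytes` would stay below an RBO-sized `max_bytes`; with losses, occupancy also depends on the jitter, the source arrival curve, and the chosen timeout, which is why the sketch tracks it rather than assuming a fixed bound.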

Citations (13)
