A parallel workload has extreme variability (1611.04167v2)
Abstract: In both high-performance computing (HPC) environments and the public cloud, the time required to retrieve or save results is simultaneously unpredictable and important to the overall resource budget. It is generally accepted ("Google: Taming the Long Latency Tail - When More Machines Equals Worse Results", Todd Hoff, highscalability.com 2012), but without a robust explanation, that identical parallel tasks take different durations to complete -- a phenomenon known as variability. This paper advances understanding of this topic. We carefully choose a model from which system-level complexity emerges and can be studied directly. We find that a generalized extreme value (GEV) model for variability naturally emerges. Using the public cloud, we find that real-world observations have excellent agreement with our model. Since the GEV distribution is a limit distribution, this suggests a universal property of parallel systems gated by their slowest communication element. Hence, this model is applicable to a variety of processing and IO tasks in parallel environments. These findings have important implications, ranging from characterizing ideal performance for parallel codes to detecting degraded behaviour at extreme scales.
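The intuition behind the GEV claim can be reproduced in a few lines: when a parallel job finishes only once its slowest of many independent tasks does, the Fisher-Tippett-Gnedenko theorem implies its completion times tend toward a GEV law. The sketch below is illustrative only and is not the paper's experiment; the job and task counts, the gamma-distributed per-task times, and the use of `scipy.stats.genextreme` are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's method): a parallel job gated by its
# slowest task has a completion time equal to the maximum of many per-task
# durations, which tends toward a generalized extreme value (GEV) law.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

n_jobs = 2000   # repeated identical parallel jobs (assumed for illustration)
n_tasks = 256   # concurrent tasks per job (assumed for illustration)

# Per-task durations: a fixed 1 s compute phase plus gamma-distributed
# I/O / communication time (an arbitrary light-tailed choice).
task_times = 1.0 + rng.gamma(shape=2.0, scale=0.05, size=(n_jobs, n_tasks))

# Each job completes only when its slowest task completes.
job_times = task_times.max(axis=1)

# Fit a GEV distribution to the observed job completion times.
shape, loc, scale = genextreme.fit(job_times)
print(f"GEV fit: shape={shape:.3f}, loc={loc:.3f}, scale={scale:.3f}")
```

Comparing such a fit against histograms of measured job durations is one simple way to check whether observed variability is consistent with being "gated by the slowest element", in the spirit of the comparison the abstract describes.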