Real-Time Analytics by Coordinating Reuse and Work Sharing (2307.08018v1)

Published 16 Jul 2023 in cs.DB

Abstract: Analytical tools often require real-time responses for highly concurrent parameterized workloads. A common solution is to answer queries using materialized subexpressions, hence reducing processing at runtime. However, as queries are still processed individually, concurrent outstanding computations accumulate and increase response times. By contrast, shared execution mitigates the effect of concurrency and improves scalability by exploiting overlapping work between queries but does so using heavyweight shared operators that result in high response times. Thus, on their own, both reuse and work sharing fail to provide real-time responses for large batches. Furthermore, naively combining the two approaches is ineffective and can deteriorate performance due to increased filtering costs, reduced marginal benefits, and lower reusability. In this work, we present ParCuR, a framework that harmonizes reuse with work sharing. ParCuR adapts reuse to work sharing in four aspects: i) to reduce filtering costs, it builds access methods on materialized results, ii) to resolve the conflict between benefits from work sharing and materialization, it introduces a sharing-aware materialization policy, iii) to incorporate reuse into sharing-aware optimization, it introduces a two-phase optimization strategy, and iv) to improve reusability and to avoid performance cliffs when queries are partially covered, especially during workload shifts, it combines partial reuse with data clustering based on historical batches. ParCuR outperforms a state-of-the-art work-sharing database by 6.4x and 2x in the SSB and TPC-H benchmarks, respectively.
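The abstract contrasts two ideas: reuse (answering queries from materialized subexpressions) and work sharing (executing a batch of overlapping queries together). The following is a minimal, illustrative Python sketch of how those ideas can be combined, in the spirit of point (i) above: a materialized aggregate is given a simple access method (a hash index), and a batch of parameterized queries is grouped so overlapping parameters are probed only once. It is not the authors' ParCuR implementation; all names (`MaterializedView`, `run_batch_shared`, the toy schema) are hypothetical.

```python
from collections import defaultdict

# Toy "fact table": (region, product, revenue) rows.
FACTS = [
    ("EU", "widget", 10), ("US", "widget", 7),
    ("EU", "gadget", 3),  ("US", "gadget", 12),
    ("EU", "widget", 5),
]

class MaterializedView:
    """Materialized subexpression: revenue pre-aggregated by (region, product),
    with a hash index on region serving as the access method, so queries probe
    the index instead of re-filtering the base data."""
    def __init__(self, facts):
        agg = defaultdict(int)
        for region, product, revenue in facts:
            agg[(region, product)] += revenue
        self.index = defaultdict(list)          # region -> [(product, total)]
        for (region, product), total in agg.items():
            self.index[region].append((product, total))

def run_batch_shared(view, batch):
    """Work sharing over a batch of parameterized queries of the form
    'total revenue for region=?': queries with the same parameter are grouped,
    the view is probed once per distinct parameter, and the result is fanned
    out to every query in the group."""
    by_param = defaultdict(list)
    for qid, region in batch:
        by_param[region].append(qid)            # group overlapping work
    answers = {}
    for region, qids in by_param.items():
        total = sum(t for _, t in view.index[region])  # one shared probe
        for qid in qids:
            answers[qid] = total
    return answers

if __name__ == "__main__":
    view = MaterializedView(FACTS)
    batch = [(1, "EU"), (2, "US"), (3, "EU")]   # concurrent parameterized queries
    print(run_batch_shared(view, batch))         # {1: 18, 2: 19, 3: 18}
```

In this toy setting the sharing-aware materialization policy, two-phase optimization, and clustering on historical batches described in points (ii)–(iv) are out of scope; the sketch only conveys why an access method on a materialized result keeps per-query filtering cheap when many concurrent queries are answered as one batch.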

Citations (1)