Efficient Parallel Self-Adjusting Computation (2105.06712v1)

Published 14 May 2021 in cs.DC

Abstract: Self-adjusting computation is an approach for automatically producing dynamic algorithms from static ones. The approach works by tracking control and data dependencies, and propagating changes through the dependencies when making an update. While the approach has been extensively studied in the sequential setting, existing results on parallel self-adjusting computation are either only applicable to limited classes of computations, such as map-reduce, or are ad hoc systems with no theoretical analysis of their performance. In this paper, we present the first system for parallel self-adjusting computation that applies to a wide class of nested parallel algorithms and provides theoretical bounds on the work and span of the resulting dynamic algorithms. As with bounds in the sequential setting, our bounds relate a "distance" measure between computations on different inputs to the cost of propagating an update. However, here we also consider parallelism in the propagation cost. The main innovation in the paper is in using Series-Parallel trees (SP trees) to track sequential and parallel control dependencies to allow propagation of changes to be applied safely in parallel. We show both theoretically and through experiments that our system allows algorithms to produce updated results over large datasets significantly faster than from-scratch execution. We demonstrate our system with several example applications, including algorithms for dynamic sequences and dynamic trees. In all cases studied, we show that parallel self-adjusting computation can provide a significant benefit in both work savings and parallel time.
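
The dependency tracking and change propagation that the abstract describes can be made concrete with a small sketch. The sequential Python example below is purely illustrative: the names `Mod`, `Computation`, and `propagate` are assumptions, not the paper's API, and it omits the parallel SP-tree machinery that is the paper's actual contribution.

```python
# A minimal, sequential sketch of self-adjusting computation: values live in
# "modifiables", reads record a dependency, and an update re-runs only the
# computations that depend on what changed.

class Mod:
    """A modifiable value; reads register the reading computation as a dependent."""
    def __init__(self, value=None):
        self.value = value
        self.dependents = []

    def read(self, computation):
        if computation not in self.dependents:
            self.dependents.append(computation)
        return self.value

    def write(self, value, dirty):
        if value != self.value:
            self.value = value
            dirty.extend(self.dependents)   # readers must be re-executed


class Computation:
    """A re-runnable unit: its body reads some Mods and writes one output Mod."""
    def __init__(self, body, out):
        self.body = body
        self.out = out

    def run(self, dirty):
        self.out.write(self.body(self), dirty)


def propagate(dirty):
    """Change propagation: re-run stale computations until none remain."""
    while dirty:
        dirty.pop(0).run(dirty)


# Usage: compute c = a + b once, then update a and propagate the change.
a, b, c = Mod(1), Mod(2), Mod()
add = Computation(lambda self: a.read(self) + b.read(self), c)
dirty = []
add.run(dirty)        # from-scratch run: c.value == 3
a.write(10, dirty)    # change an input; 'add' becomes stale
propagate(dirty)      # incremental update: c.value == 12
```

The paper's system extends this idea to nested parallel programs, using an SP tree over the fork-join structure so that stale computations can be re-executed safely in parallel; that machinery is not reflected in the sketch above.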

Citations (1)
