
On-line Application Autotuning Exploiting Ensemble Models (1901.06228v1)

Published 18 Jan 2019 in cs.DC

Abstract: Application autotuning is a promising path investigated in the literature to improve computation efficiency. In this context, the end-users define high-level requirements, and an autonomic manager identifies and seizes optimization opportunities by leveraging trade-offs between extra-functional properties of interest, such as execution time, power consumption, or quality of results. The relationship between an application configuration and the extra-functional properties might depend on the underlying architecture, on the system workload, and on features of the current input. For these reasons, autotuning frameworks rely on application knowledge to drive the adaptation strategies. The autotuning task is typically done offline, because performing it in production requires significant effort to keep its overhead low. In this paper, we enhance a dynamic autotuning framework with a module for learning the application knowledge during the production phase, in a distributed fashion. We leverage two strategies to limit the overhead introduced in the production phase. On one hand, we use a scalable infrastructure capable of exploiting the parallelism of the underlying platform. On the other hand, we use ensemble models to speed up the predictive capabilities while iteratively gathering production data. Experimental results on synthetic applications and on a use case show that the proposed approach learns the application knowledge while exploring only a small fraction of the design space.
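The core loop the abstract describes — iteratively measuring configurations in production, fitting an ensemble model on the data gathered so far, and using its predictions to pick the next configuration — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the design space, the cost function `measure`, and the bagged k-NN ensemble are all hypothetical stand-ins.

```python
import random
import statistics

# Hypothetical design space: (threads, unroll-factor) pairs.
DESIGN_SPACE = [(t, u) for t in (1, 2, 4, 8) for u in (1, 2, 4)]

def measure(cfg):
    """Stand-in for a noisy production measurement of execution time (lower is better)."""
    t, u = cfg
    return 100.0 / t + 5.0 * abs(u - 2) + random.gauss(0, 0.5)

def knn_predict(samples, cfg, k=3):
    """Predict exec time of cfg from the k nearest measured configurations."""
    nearest = sorted(samples,
                     key=lambda s: (s[0][0] - cfg[0]) ** 2 + (s[0][1] - cfg[1]) ** 2)
    return statistics.mean(y for _, y in nearest[:k])

def ensemble_predict(models, cfg):
    """Average the predictions of bootstrap-resampled k-NN models (bagging)."""
    return statistics.mean(knn_predict(m, cfg) for m in models)

def autotune(budget=8, n_models=5):
    """Explore only `budget` of the configurations, guided by the ensemble."""
    random.seed(0)
    samples = [(c, measure(c)) for c in random.sample(DESIGN_SPACE, 3)]
    while len(samples) < budget:
        # Build the ensemble by bootstrap-resampling the production data.
        models = [random.choices(samples, k=len(samples)) for _ in range(n_models)]
        candidates = [c for c in DESIGN_SPACE if c not in {s[0] for s in samples}]
        # Greedily measure the configuration the ensemble predicts to be best.
        nxt = min(candidates, key=lambda c: ensemble_predict(models, c))
        samples.append((nxt, measure(nxt)))
    return min(samples, key=lambda s: s[1])

best_cfg, best_time = autotune()
print(best_cfg, round(best_time, 1))
```

Note that the loop touches only 8 of the 12 configurations, mirroring the paper's claim of learning the application knowledge from a small fraction of the design space; the paper's framework additionally distributes this learning across nodes, which is omitted here.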

Citations (1)
