Towards Demystifying Serverless Machine Learning Training (2105.07806v1)

Published 17 May 2021 in cs.DC and cs.LG

Abstract: The appeal of serverless (FaaS) has triggered a growing interest in how to use it in data-intensive applications such as ETL, query processing, or ML. Several systems exist for training large-scale ML models on top of serverless infrastructures (e.g., AWS Lambda), but with inconclusive results in terms of their performance and relative advantage over "serverful" infrastructures (IaaS). In this paper we present a systematic, comparative study of distributed ML training over FaaS and IaaS. We present a design space covering design choices such as optimization algorithms and synchronization protocols, and implement a platform, LambdaML, that enables a fair comparison between FaaS and IaaS. We present experimental results using LambdaML, and further develop an analytic model to capture the cost/performance tradeoffs that must be considered when opting for a serverless infrastructure. Our results indicate that ML training pays off in serverless only for models with efficient (i.e., reduced) communication that converge quickly. In general, FaaS can be much faster, but it is never significantly cheaper than IaaS.
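The abstract's bottom line, that FaaS can be faster but not significantly cheaper, stems from the two billing models: serverless charges per GB-second of actual execution, while serverful charges per provisioned instance-hour. A toy version of such a cost/performance tradeoff can be sketched in a few lines. The sketch below is a hypothetical back-of-the-envelope comparison, not the paper's actual analytic model; the prices, the assumed speedup, and the function names are illustrative assumptions (loosely based on public AWS Lambda and EC2 on-demand pricing).

```python
# A minimal, hypothetical cost sketch contrasting FaaS and IaaS training runs.
# Prices and scenario numbers are illustrative assumptions, not the paper's
# analytic model.

def faas_cost(num_workers, runtime_s, mem_gb, price_per_gb_s=0.0000166667):
    """Serverless billing: each worker pays per GB-second it actually runs."""
    return num_workers * runtime_s * mem_gb * price_per_gb_s

def iaas_cost(num_instances, runtime_s, hourly_price=0.384):
    """Serverful billing: each instance pays per hour it is provisioned."""
    return num_instances * (runtime_s / 3600.0) * hourly_price

# Assumed scenario: the FaaS run finishes 4x faster thanks to burst
# parallelism, but pays a per-GB-second premium over reserved instances.
print(f"FaaS (100 workers, 300 s):   ${faas_cost(100, 300, mem_gb=3.0):.2f}")
print(f"IaaS (10 instances, 1200 s): ${iaas_cost(10, 1200):.2f}")
```

Under these assumed numbers the serverless run completes in a quarter of the wall-clock time yet still costs slightly more, mirroring the paper's conclusion that FaaS wins on speed rather than on price.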

Authors (9)
  1. Jiawei Jiang (47 papers)
  2. Shaoduo Gan (9 papers)
  3. Yue Liu (257 papers)
  4. Fanlin Wang (1 paper)
  5. Gustavo Alonso (45 papers)
  6. Ana Klimovic (24 papers)
  7. Ankit Singla (21 papers)
  8. Wentao Wu (43 papers)
  9. Ce Zhang (215 papers)
Citations (114)
