TonY: An Orchestrator for Distributed Machine Learning Jobs (1904.01631v1)
Published 24 Mar 2019 in cs.DC, cs.LG, and stat.ML
Abstract: Training machine learning (ML) models on large datasets requires considerable computing power. To speed up training, it is typical to distribute training across several machines, often with specialized hardware like GPUs or TPUs. Managing a distributed training job is complex and requires dealing with resource contention, distributed configurations, monitoring, and fault tolerance. In this paper, we describe TonY, an open-source orchestrator for distributed ML jobs built at LinkedIn to address these challenges.
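
To make the "distributed configurations" challenge concrete: an orchestrator like TonY launches one process per task and must tell each process who its peers are and what role it plays. The sketch below is illustrative only and follows the standard distributed-TensorFlow convention, in which the launcher exports a TF_CONFIG environment variable (the cluster spec plus this task's type and index) before starting each task; the variable name and JSON shape come from TensorFlow itself, and the single-worker fallback is an assumption added here so the sketch runs locally.

```python
import json
import os

import tensorflow as tf

# Illustrative sketch: in the distributed-TensorFlow convention, the
# orchestrator (here, TonY) exports TF_CONFIG -- the cluster spec plus this
# task's type and index -- before launching each task's process. The
# fallback below is only so this sketch runs on a single machine.
tf_config = json.loads(os.environ.get("TF_CONFIG", json.dumps({
    "cluster": {"worker": ["localhost:12345"]},
    "task": {"type": "worker", "index": 0},
})))
task = tf_config["task"]
print(f"Running as {task['type']} #{task['index']} "
      f"in cluster {tf_config['cluster']}")

# MultiWorkerMirroredStrategy reads TF_CONFIG from the environment itself
# and performs synchronous data-parallel training across all workers.
strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,))]
    )
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

With this division of labor, the training script stays identical across cluster sizes; the orchestrator owns resource negotiation, per-task environment setup, monitoring, and restart of failed tasks.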