Distributed TD(0) with Almost No Communication (2305.16246v1)
Published 25 May 2023 in cs.LG, cs.SY, eess.SY, and math.OC
Abstract: We provide a new non-asymptotic analysis of distributed temporal difference learning with linear function approximation. Our approach relies on "one-shot averaging," where $N$ agents run identical local copies of the TD(0) method and average the outcomes only once at the very end. We demonstrate a version of the linear time speedup phenomenon, where the convergence time of the distributed process is a factor of $N$ faster than the convergence time of TD(0). This is the first result proving benefits from parallelism for temporal difference methods.
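
To make the one-shot averaging scheme concrete, below is a minimal Python sketch of the setup the abstract describes: $N$ agents each run an identical copy of TD(0) with linear function approximation on independently sampled trajectories of a Markov reward process, and their parameter vectors are averaged exactly once at the end. The toy MRP, feature map `phi`, and all hyperparameters (step size, horizon, discount) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def td0_single_agent(P, r, phi, gamma, alpha, T, rng):
    """Run TD(0) with linear function approximation on one
    independently sampled trajectory of the MRP (P, r)."""
    n_states, d = phi.shape
    theta = np.zeros(d)
    s = rng.integers(n_states)
    for _ in range(T):
        s_next = rng.choice(n_states, p=P[s])
        # TD error: delta = r(s) + gamma * V(s') - V(s)
        delta = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
        theta += alpha * delta * phi[s]  # semi-gradient TD(0) update
        s = s_next
    return theta

def one_shot_averaged_td0(P, r, phi, gamma, alpha, T, N, seed=0):
    """N agents run identical local copies of TD(0); the outcomes are
    averaged only once at the very end (the single communication step)."""
    rng = np.random.default_rng(seed)
    thetas = [
        td0_single_agent(P, r, phi, gamma, alpha, T,
                         np.random.default_rng(int(rng.integers(2**32))))
        for _ in range(N)
    ]
    return np.mean(thetas, axis=0)  # the one-shot average

# Toy example: a 5-state MRP with random features (illustrative only).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, d = 5, 3
    P = rng.random((n_states, n_states))
    P /= P.sum(axis=1, keepdims=True)        # row-stochastic transitions
    r = rng.random(n_states)                 # per-state rewards
    phi = rng.standard_normal((n_states, d)) # linear feature map
    theta_avg = one_shot_averaged_td0(P, r, phi, gamma=0.9,
                                      alpha=0.05, T=10_000, N=8)
    print("One-shot averaged TD(0) parameters:", theta_avg)
```

In this sketch the only communication is the final `np.mean` over the $N$ local parameter vectors, which is what distinguishes one-shot averaging from schemes that exchange iterates every round; the paper's claim is that this single average still yields a convergence time a factor of $N$ faster than a single TD(0) run.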