Avoiding communication in primal and dual block coordinate descent methods (1612.04003v2)

Published 13 Dec 2016 in cs.DC

Abstract: Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular for analyzing large machine learning datasets. However, existing implementations communicate at every iteration, and on modern data center and supercomputing architectures this communication often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by re-organizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations yields primal and dual block coordinate descent methods for the \textit{regularized least-squares problem} that communicate only every $s$ iterations, where $s$ is a tuning parameter, instead of every iteration. The communication-avoiding variants reduce the number of synchronizations by a factor of $s$ on distributed-memory parallel machines without altering the convergence rate, and they attain strong-scaling speedups of up to $6.1\times$ on a Cray XC30 supercomputer.
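
To make the $s$-step idea concrete, below is a minimal single-process NumPy sketch (not the authors' implementation) of communication-avoiding block coordinate descent for the ridge-regression instance of regularized least squares. Problem sizes, block choices, and variable names are illustrative assumptions; the comments mark where the single all-reduce per $s$ iterations would occur in a distributed run.

```python
import numpy as np

# Hedged sketch of s-step (communication-avoiding) block coordinate descent
# for ridge regression: min_x 0.5*||Ax - b||^2 + 0.5*lam*||x||^2.
# Problem sizes, block choices, and names are illustrative assumptions.

rng = np.random.default_rng(0)
m, n, lam = 200, 40, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

block_size, s, outer_iters = 4, 5, 20
x = np.zeros(n)
r = b - A @ x  # residual; row-partitioned across processors when distributed

for _ in range(outer_iters):
    # Choose s blocks up front, then form G = Y^T Y and g = Y^T r for all of
    # them at once. In a distributed run this is the single all-reduce per
    # s iterations; the naive method would communicate once per block update.
    blocks = [rng.choice(n, size=block_size, replace=False) for _ in range(s)]
    Y = np.concatenate([A[:, J] for J in blocks], axis=1)
    G = Y.T @ Y
    g = Y.T @ r  # products with the residual as of the start of this loop

    deltas = []
    for k, J in enumerate(blocks):
        rows = slice(k * block_size, (k + 1) * block_size)
        # Recover A_J^T r for the *current* residual locally:
        # A_J^T r_k = (Y^T r_0)_k - sum_{j<k} G[k, j] @ delta_j.
        grad = g[rows].copy()
        for j, d in enumerate(deltas):
            cols = slice(j * block_size, (j + 1) * block_size)
            grad -= G[rows, cols] @ d
        # Exact block solve: (A_J^T A_J + lam*I) delta = A_J^T r - lam*x_J.
        H = G[rows, rows] + lam * np.eye(block_size)
        d = np.linalg.solve(H, grad - lam * x[J])
        x[J] += d
        deltas.append(d)
    r -= Y @ np.concatenate(deltas)  # one local residual update per outer loop

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + 0.5 * lam * np.linalg.norm(x) ** 2
print(f"objective after {outer_iters * s} block updates: {obj:.4f}")
```

Because each inner update is recovered exactly from the Gram matrix $G$ and the stale residual products $g$, the iterates match those of ordinary block coordinate descent, which is why the paper's variants can batch communication without altering the convergence rate.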

Authors (4)
  1. Aditya Devarakonda (9 papers)
  2. Kimon Fountoulakis (33 papers)
  3. James Demmel (54 papers)
  4. Michael W. Mahoney (233 papers)
Citations (15)
