SILO

The weekly SILO Seminar Series is made possible through the generous support of the 3M Company and its Advanced Technology Group, with additional support from the Analytics Group of the Northwestern Mutual Life Insurance Company.

Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD

Speaker: Gauri Joshi (CMU)

Date and Time: Nov 28, 2018 (12:30 PM)
Location: Orchard room (3280) at the Wisconsin Institute for Discovery Building

Abstract:

Distributed Stochastic Gradient Descent (SGD), when run in a synchronous manner, suffers from delays in waiting for the slowest workers (stragglers). Asynchronous methods alleviate stragglers but introduce gradient staleness, which can adversely affect convergence. In this work, we present the first theoretical characterization of the speed-up offered by asynchronous methods, obtained by analyzing the trade-off between the error in the trained model and the actual training runtime (wall-clock time). In the second part of the talk, I will discuss a unified convergence analysis of communication-efficient distributed SGD algorithms, including federated, elastic, and decentralized averaging. The novelty of our work is that the runtime analysis accounts for random gradient-computation and communication delays, which helps us design and compare distributed SGD algorithms that achieve the fastest true convergence with respect to wall-clock time.
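
The trade-off described above can be made concrete with a small simulation. The Python sketch below is a hypothetical illustration, not code from the talk or paper: it runs synchronous and asynchronous SGD on a toy quadratic objective, assuming i.i.d. exponential gradient-computation delays, so that a synchronous step waits for the slowest of the workers while an asynchronous update fires as soon as any worker finishes, at the cost of using a stale gradient. All names, parameters, and the delay model are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def grad(w):
    # Noisy gradient of the toy objective f(w) = 0.5 * ||w||^2.
    return w + rng.normal(scale=0.5, size=w.shape)

def sync_sgd(num_workers=8, lr=0.1, wall_clock_budget=50.0):
    # Synchronous SGD: each step waits for the *slowest* of the workers,
    # so the per-step time is the max of i.i.d. exponential delays.
    w, t, errs = np.ones(10), 0.0, []
    while t < wall_clock_budget:
        t += rng.exponential(1.0, size=num_workers).max()
        w -= lr * np.mean([grad(w) for _ in range(num_workers)], axis=0)
        errs.append((t, 0.5 * w @ w))
    return errs

def async_sgd(num_workers=8, lr=0.1, wall_clock_budget=50.0):
    # Asynchronous SGD: the server updates whenever *any* worker finishes,
    # so the per-update time is the min of the pending delays, but each
    # gradient was computed at a stale copy of the parameters.
    w, t, errs = np.ones(10), 0.0, []
    stale = [w.copy() for _ in range(num_workers)]   # params each worker read
    finish = rng.exponential(1.0, size=num_workers)  # next completion times
    while t < wall_clock_budget:
        k = int(np.argmin(finish))
        t = finish[k]
        w -= lr * grad(stale[k])   # gradient evaluated at stale parameters
        stale[k] = w.copy()        # worker re-reads the current parameters
        finish[k] = t + rng.exponential(1.0)
        errs.append((t, 0.5 * w @ w))
    return errs

if __name__ == "__main__":
    for name, run in [("sync", sync_sgd), ("async", async_sgd)]:
        errs = run()
        print(f"{name}: {len(errs)} updates, final error {errs[-1][1]:.4f}")

Under this delay model, the asynchronous run completes roughly num_workers times as many updates within the same wall-clock budget, but each update applies a gradient evaluated at stale parameters. Comparing the final errors of the two runs is a crude, simulated version of the error-runtime comparison that the talk formalizes.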