
Computation Scheduling for Distributed Machine Learning with Straggling Workers

M. Mohammadi Amiri; D. Gunduz
2019

Abstract

We study the scheduling of computation tasks across n workers in a large-scale distributed learning problem. The computation speeds of the workers are assumed to be heterogeneous and unknown to the master, and redundant computations are assigned to the workers in order to tolerate stragglers. We consider sequential computation at each worker and instantaneous communication from each worker to the master; each computation round, which can model a single iteration of the stochastic gradient descent (SGD) algorithm, is completed once the master receives k ≤ n distinct computations, referred to as the computation target. Our goal is to characterize the average completion time as a function of the computation load, which denotes the portion of the dataset available at each worker, and of the computation target. We propose two computation scheduling schemes that specify the computation tasks assigned to each worker, as well as their order of execution, and we establish a lower bound on the minimum average completion time. Numerical results show a significant reduction in the average completion time over existing coded and uncoded computing schemes.
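The round model in the abstract lends itself to a quick numerical illustration. Below is a minimal Monte Carlo sketch of the average completion time, under assumptions not taken from the paper: per-task computation times are shifted-exponential (a common straggler model), tasks are replicated with a simple cyclic assignment rather than either of the paper's schemes, and the parameters n, r (computation load, in tasks per worker), k, mu, and shift are all hypothetical.

```python
import numpy as np

def simulate_round(n=10, r=3, k=8, mu=1.0, shift=1.0, rng=None):
    """One computation round: n tasks, worker i assigned tasks
    i, i+1, ..., i+r-1 (mod n) and computing them in that order.
    Per-task times are shift + Exp(mu), an assumed straggler model;
    the round ends when the master holds k distinct task results."""
    rng = rng or np.random.default_rng()
    # Sequential computation: worker i finishes its j-th task at the
    # cumulative sum of its first j per-task durations.
    finish = np.cumsum(shift + rng.exponential(1.0 / mu, size=(n, r)), axis=1)
    # Instantaneous communication: each result reaches the master the
    # moment it is computed; keep the earliest copy of each task.
    arrival = np.full(n, np.inf)
    for i in range(n):
        for j in range(r):
            task = (i + j) % n
            arrival[task] = min(arrival[task], finish[i, j])
    return np.sort(arrival)[k - 1]  # time the k-th distinct result arrives

rng = np.random.default_rng(0)
times = [simulate_round(rng=rng) for _ in range(10_000)]
print(f"estimated average completion time: {np.mean(times):.3f}")
```

Sweeping r and k in this sketch reproduces the qualitative trade-off the abstract describes: a larger computation load shortens the wait for the k-th distinct result, at the cost of more computation per worker.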
Year: 2019
Conference: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2019)
Venue: Brighton Conference Centre, Brighton, UK
Pages: 8177-8181
Mohammadi Amiri, M.; Gunduz, D.
Mohammadi Amiri, M.; Gunduz, D., "Computation Scheduling for Distributed Machine Learning with Straggling Workers," in Proc. 44th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2019), Brighton Conference Centre, Brighton, UK, 2019, pp. 8177-8181. doi: 10.1109/ICASSP.2019.8682911
Files in this item:
There are no files associated with this item.

Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, please contact Iris Support.

Use this identifier to cite or link to this item: https://hdl.handle.net/11380/1202627
Citations
  • PMC: N/A
  • Scopus: 14
  • Web of Science: N/A