
Energy-Aware Analog Aggregation for Federated Learning with Redundant Data / Sun, Y.; Zhou, S.; Gunduz, D. - 2020-:(2020), pp. 1-7. (Paper presented at the 2020 IEEE International Conference on Communications, ICC 2020, held at the Convention Centre Dublin, Ireland, in 2020) [10.1109/ICC40277.2020.9148853].

Energy-Aware Analog Aggregation for Federated Learning with Redundant Data

Gunduz D.
2020

Abstract

Federated learning (FL) enables workers to learn a model collaboratively by using their local data, with the help of a parameter server (PS) for global model aggregation. The high communication cost of periodic model updates and non-independent and identically distributed (non-i.i.d.) data are major bottlenecks for FL. In this work, we consider analog aggregation to scale down the communication cost with respect to the number of workers, and introduce data redundancy to the system to deal with non-i.i.d. data. We propose an online energy-aware dynamic worker scheduling policy, which maximizes the average number of workers scheduled for gradient update at each iteration under a long-term energy constraint, and analyze its performance based on Lyapunov optimization. Experiments on the MNIST dataset show that, for non-i.i.d. data, doubling data storage can improve the accuracy by 9.8% under a stringent energy budget, while the proposed policy can achieve close-to-optimal accuracy without violating the energy constraint.
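
The abstract describes the scheduling policy only at a high level. As a rough, hedged illustration of how a Lyapunov (drift-plus-penalty) scheduler with a long-term energy constraint typically operates, the Python sketch below keeps a virtual queue for the energy budget and, at each iteration, selects the workers whose reward outweighs their queue-weighted energy cost. All function names, variables, and parameter values here are hypothetical and are not taken from the paper.

import numpy as np

# Hedged illustration only: a generic drift-plus-penalty scheduler in the spirit
# of Lyapunov optimization. Names (schedule_workers, budget, V) are assumptions
# for this sketch, not identifiers from the paper.

def schedule_workers(energy_cost, Q, budget, V):
    """Select workers for a gradient update under a long-term energy budget.

    energy_cost -- per-worker energy needed to transmit this iteration
    Q           -- virtual-queue backlog tracking cumulative budget violation
    budget      -- target average energy spend per iteration
    V           -- weight trading off scheduled workers against energy use
    """
    cost = np.asarray(energy_cost, dtype=float)

    # Drift-plus-penalty: scheduling worker k earns reward V and is "charged"
    # Q * cost[k]; keep every worker whose net contribution is positive.
    selected = np.flatnonzero(V - Q * cost > 0)

    # Virtual-queue update: backlog rises when this slot overspends the budget,
    # making the scheduler more conservative in later iterations.
    Q_next = max(Q + cost[selected].sum() - budget, 0.0)
    return selected, Q_next


# Toy run: 5 workers with random per-iteration energy costs.
rng = np.random.default_rng(0)
Q = 0.0
for t in range(3):
    costs = rng.uniform(0.5, 2.0, size=5)
    chosen, Q = schedule_workers(costs, Q, budget=3.0, V=2.0)
    print(f"iteration {t}: workers {chosen.tolist()}, virtual queue {Q:.2f}")

The virtual queue grows whenever an iteration overspends the per-slot budget, which is the standard mechanism by which drift-plus-penalty methods enforce a long-term average constraint.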
Year: 2020
Conference: 2020 IEEE International Conference on Communications, ICC 2020
Conference venue: Convention Centre Dublin, Ireland
Conference year: 2020
Volume: 2020-
Pages: 1-7
Authors: Sun, Y.; Zhou, S.; Gunduz, D.
Files in this item:
File: 09148853.pdf
Access: Restricted
Type: Publisher's published version
Size: 809.66 kB
Format: Adobe PDF


Use this identifier to cite or link to this item: https://hdl.handle.net/11380/1220046
Citations
  • PMC: not available
  • Scopus: 83
  • Web of Science: 67