Convergence of Federated Learning over a Noisy Downlink

Amiri, M. M.; Gunduz, D.; Kulkarni, S. R.; Poor, H. V.
2022

Abstract

We study federated learning (FL), where power-limited wireless devices utilize their local datasets to collaboratively train a global model with the help of a remote parameter server (PS). The PS has access to the global model and shares it with the devices for local training using their datasets, and the devices return the results of their local updates to the PS to update the global model. The algorithm continues until the global model converges. This framework requires downlink transmission from the PS to the devices and uplink transmission from the devices to the PS. The goal of this study is to investigate the impact of the bandwidth-limited shared wireless medium on the performance of FL, with a focus on the downlink. To this end, the downlink and uplink channels are modeled as fading broadcast and multiple access channels, respectively, both with limited bandwidth. For downlink transmission, we first introduce a digital approach, where a quantization technique is employed at the PS followed by a capacity-achieving channel code to transmit the global model update over the wireless broadcast channel at a common rate such that all the devices can decode it. Next, we propose analog downlink transmission, where the global model is broadcast by the PS in an uncoded manner. We consider analog transmission over the uplink in both cases, since its superiority over digital transmission for the uplink has been well studied in the literature. We further analyze the convergence behavior of the proposed analog transmission approach over the downlink, assuming that the uplink transmission is error-free. Numerical experiments show that the analog downlink approach provides a significant improvement over the digital one, with a more notable gain when the data distribution across the devices is not independent and identically distributed. The experimental results corroborate the convergence analysis and show that fewer local iterations should be used when the data distribution is more biased, and also when the devices have a better estimate of the global model in the analog downlink approach.
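As an illustration of the training loop described in the abstract, below is a minimal Python sketch of FL with an analog (uncoded, noisy) downlink and an error-free uplink, the setting assumed in the paper's convergence analysis. All names and values (num_devices, tau, downlink_noise_std, the least-squares toy task, ...) are illustrative assumptions, not the authors' exact setup or code.

import numpy as np

# Minimal sketch: FL with an analog (uncoded, noisy) downlink and an
# error-free uplink, as assumed in the convergence analysis. The toy
# least-squares task and all parameter values are illustrative.

rng = np.random.default_rng(0)

num_devices = 10           # power-limited devices
dim = 20                   # model dimension
tau = 3                    # local SGD iterations per round
rounds = 50                # global communication rounds
lr = 0.05                  # local learning rate
downlink_noise_std = 0.01  # effective noise on the broadcast model

# Toy local datasets: device m holds (A[m], b[m]) for a least-squares task.
A = [rng.normal(size=(30, dim)) for _ in range(num_devices)]
x_true = rng.normal(size=dim)
b = [A_m @ x_true + 0.1 * rng.normal(size=30) for A_m in A]

theta = np.zeros(dim)  # global model held at the PS

for t in range(rounds):
    updates = []
    for m in range(num_devices):
        # Analog downlink: the PS broadcasts theta uncoded, so each
        # device observes a noisy estimate of the global model.
        theta_m = theta + downlink_noise_std * rng.normal(size=dim)
        # Local training: tau SGD steps on the device's own dataset.
        for _ in range(tau):
            grad = A[m].T @ (A[m] @ theta_m - b[m]) / len(b[m])
            theta_m -= lr * grad
        updates.append(theta_m - theta)
    # Uplink assumed error-free here: the PS recovers and averages the
    # model updates exactly, then updates the global model.
    theta += np.mean(updates, axis=0)

print("final average loss:",
      np.mean([np.mean((A[m] @ theta - b[m]) ** 2) for m in range(num_devices)]))

Raising downlink_noise_std in this sketch mimics a worse broadcast channel; consistent with the abstract, noisier device-side estimates of the global model interact with the number of local iterations tau.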
Year: 2022
Volume: 21
Issue: 3
Pages: 1422-1437

Convergence of Federated Learning over a Noisy Downlink / Amiri, M. M.; Gunduz, D.; Kulkarni, S. R.; Poor, H. V. - In: IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS. - ISSN 1536-1276. - 21:3 (2022), pp. 1422-1437. [DOI: 10.1109/TWC.2021.3103874]
Files in this record:

File: Convergence_of_Federated_Learning_Over_a_Noisy_Downlink.pdf
Type: Version published by the publisher
Access: Restricted access (copy available on request)
Size: 984.77 kB
Format: Adobe PDF

File: 2008.11141.pdf
Type: Author's accepted manuscript (peer-reviewed)
Access: Open access since 01/04/2024
Size: 412.58 kB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1280015
Citations
  • Scopus: 29
  • Web of Science: 24