Effective Communications: A Joint Learning and Communication Framework for Multi-Agent Reinforcement Learning over Noisy Channels / Tung, T. -Y.; Kobus, S.; Roig, J. P.; Gunduz, D.. - In: IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS. - ISSN 0733-8716. - 39:8(2021), pp. 2590-2603. [10.1109/JSAC.2021.3087248]

Effective Communications: A Joint Learning and Communication Framework for Multi-Agent Reinforcement Learning over Noisy Channels

Gunduz D.
2021

Abstract

We propose a novel formulation of the 'effectiveness problem' in communications, put forth by Shannon and Weaver in their seminal work 'The Mathematical Theory of Communication', by considering multiple agents communicating over a noisy channel in order to achieve better coordination and cooperation in a multi-agent reinforcement learning (MARL) framework. Specifically, we consider a multi-agent partially observable Markov decision process (MA-POMDP), in which the agents, in addition to interacting with the environment, can also communicate with each other over a noisy communication channel. The noisy communication channel is considered explicitly as part of the dynamics of the environment, and the message each agent sends is part of the action that the agent can take. As a result, the agents learn not only to collaborate with each other but also to communicate 'effectively' over a noisy channel. This framework generalizes both the traditional communication problem, where the main goal is to convey a message reliably over a noisy channel, and the 'learning to communicate' framework that has received recent attention in the MARL literature, where the underlying communication channels are assumed to be error-free. We show via examples that the joint policy learned using the proposed framework is superior to that where the communication is considered separately from the underlying MA-POMDP. This is a very powerful framework, which has many real-world applications, from autonomous vehicle planning to drone swarm control, and opens up the rich toolbox of deep reinforcement learning for the design of multi-user communication systems.
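
To make the setup described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a two-agent environment and a binary symmetric channel, and the names (NoisyChannelWrapper, env_step, error_prob) are hypothetical. It shows the key idea that each agent's action carries both an environment action and a message, and that the message reaches the other agent only after passing through the noisy channel, which is treated as part of the environment dynamics.

    # Minimal sketch under the assumptions above: two agents, one message bit each,
    # binary symmetric channel (BSC) with bit-flip probability error_prob.
    import numpy as np

    class NoisyChannelWrapper:
        """Wraps a two-agent MA-POMDP step function and routes inter-agent
        messages through a binary symmetric channel."""

        def __init__(self, env_step, error_prob=0.1, seed=0):
            self.env_step = env_step      # (observations, env_actions) -> (next_observations, reward)
            self.error_prob = error_prob  # assumed channel model: BSC flip probability
            self.rng = np.random.default_rng(seed)

        def _bsc(self, bit):
            # Flip the transmitted bit with probability error_prob.
            return bit ^ int(self.rng.random() < self.error_prob)

        def step(self, observations, actions):
            # Each agent's action is a pair: (environment action, message bit to the other agent).
            env_actions = [a for a, _ in actions]
            messages = [m for _, m in actions]
            next_obs, reward = self.env_step(observations, env_actions)
            # Each agent receives a noisy copy of the other agent's message,
            # appended to its next observation; the agents must therefore learn
            # to communicate effectively despite channel errors.
            received = [self._bsc(messages[1]), self._bsc(messages[0])]
            next_obs = [np.append(o, r) for o, r in zip(next_obs, received)]
            return next_obs, reward

Any standard MARL training loop can then be run on this wrapped environment, so that the learned joint policy accounts for the channel noise rather than treating communication as a separate, error-free subsystem.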
Year: 2021
Volume: 39
Issue: 8
Pages: 2590-2603
Authors: Tung, T. -Y.; Kobus, S.; Roig, J. P.; Gunduz, D.
Files in this record:
File: Effective_Communications_A_Joint_Learning_and_Communication_Framework_for_Multi-Agent_Reinforcement_Learning_Over_Noisy_Channels.pdf
Access: Restricted
Type: VOR - Version published by the publisher
Size: 2.06 MB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1280044
Citations
  • PMC: not available
  • Scopus: 65
  • Web of Science: 43