Remote Reinforcement Learning over a Noisy Channel / Roig, J. S. P.; Gunduz, D. - (2020), pp. 1-6. (Presented at the IEEE Global Communications Conference (GLOBECOM) on Advanced Technology for 5G Plus, held in Taiwan, Dec. 7-11, 2020) [10.1109/GLOBECOM42002.2020.9322408].
Remote Reinforcement Learning over a Noisy Channel
Roig J. S. P.; Gunduz D.
2020
Abstract
A collaborative multi-agent reinforcement learning (RL) problem is considered, where agents communicate over a noisy channel towards achieving a common goal. In particular, we consider a remote-controlled version of a single-agent RL problem, in which the system state is observed by a guide agent, while the actions are taken by a scout. The guide can communicate with the scout only over a noisy communication link. This transformation turns the original single-agent Markov decision process (MDP) into a two-agent partially observable MDP (POMDP). In conventional systems, the communication and learning tasks are handled separately. We show the suboptimality of this approach, and propose a deep Q-learning solution that aims to learn the optimal policy while taking the channel impairments into account.
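The guide-scout setup described in the abstract can be illustrated with a minimal tabular sketch: the guide encodes the true state into bits, a binary symmetric channel flips each bit with some probability, and the scout runs Q-learning on the corrupted observation it decodes. This is only a toy analogue, assuming a 4-state chain MDP and a BSC channel model; the paper itself uses deep Q-learning, and all names here (`ChainMDP`, `bsc`, `train`) are hypothetical, not from the authors' code.

```python
import random

# Toy "remote RL" setup: the guide observes the true state and transmits it
# over a binary symmetric channel (BSC); the scout acts on the noisy copy,
# so from the scout's point of view the problem is a POMDP.

def encode(state, n_bits):
    """Guide side: map a state index to a list of bits."""
    return [(state >> i) & 1 for i in range(n_bits)]

def bsc(bits, p, rng):
    """Noisy channel: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def decode(bits):
    """Scout side: map the received bits back to a state index."""
    return sum(b << i for i, b in enumerate(bits))

class ChainMDP:
    """4-state chain; action 1 moves right, action 0 moves left.
    Reward 1 for reaching the rightmost state, which ends the episode."""
    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = max(0, min(3, self.s + (1 if a == 1 else -1)))
        r = 1.0 if self.s == 3 else 0.0
        return self.s, r, self.s == 3

def train(p, episodes=500, seed=0):
    """Tabular Q-learning where the scout only ever sees the channel output."""
    rng = random.Random(seed)
    env = ChainMDP()
    Q = [[0.0, 0.0] for _ in range(4)]   # Q[observed state][action]
    alpha, gamma, eps = 0.1, 0.9, 0.1
    for _ in range(episodes):
        obs = decode(bsc(encode(env.reset(), 2), p, rng))
        for _ in range(20):
            # Epsilon-greedy action selection on the *observed* state.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if Q[obs][0] > Q[obs][1] else 1
            s2, r, done = env.step(a)
            obs2 = decode(bsc(encode(s2, 2), p, rng))
            Q[obs][a] += alpha * (r + gamma * max(Q[obs2]) * (not done) - Q[obs][a])
            obs = obs2
            if done:
                break
    return Q
```

With a noiseless channel (`p=0.0`) this reduces to the original single-agent MDP and the scout learns to always move right; raising `p` corrupts the observations and degrades the learned policy, which is the gap a channel-aware design targets.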
Metadata in IRIS UNIMORE is released under the Creative Commons CC0 1.0 Universal licence, while publication files are released under the Attribution 4.0 International licence (CC BY 4.0), unless otherwise indicated.