
Reinforcement learning-based spectrum management for cognitive radio networks: A literature review and case study / Di Felice, M.; Bedogni, L.; Bononi, L. - 3-3:(2019), pp. 1849-1886. [10.1007/978-981-10-1394-2_58]

Reinforcement learning-based spectrum management for cognitive radio networks: A literature review and case study


Abstract

In cognitive radio (CR) networks, the cognition cycle, i.e., the ability of wireless transceivers to learn the optimal configuration meeting environmental and application requirements, is considered as important as the hardware components that enable dynamic spectrum access (DSA) capabilities. To this purpose, several machine learning (ML) techniques have been applied to CR spectrum and network management issues, including spectrum sensing, spectrum selection, and routing. In this paper, we focus on reinforcement learning (RL), an online ML paradigm in which an agent discovers the optimal sequence of actions required to perform a task via trial-and-error interactions with the environment. Our study provides both a survey and a proof of concept of RL applications in CR networking. As a survey, we discuss the pros and cons of the RL framework compared to other ML techniques, and we provide an exhaustive review of the RL-CR literature from a twofold perspective, i.e., an application-driven taxonomy and a learning methodology-driven taxonomy. As a proof of concept, we investigate the application of RL techniques to joint spectrum sensing and decision problems, comparing different algorithms and learning strategies and further analyzing the impact of information sharing techniques in purely cooperative or mixed cooperative/competitive tasks.
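The RL paradigm summarized in the abstract can be illustrated with a minimal sketch: a stateless Q-learning agent that selects a transmission channel by trial and error, earning a reward whenever the chosen channel is free of primary users. The channel availability probabilities and parameter values below are illustrative assumptions for the sketch, not values taken from the chapter.

```python
import random

def run_agent(avail=(0.2, 0.9, 0.5), episodes=5000,
              alpha=0.1, epsilon=0.1, seed=0):
    """Stateless epsilon-greedy Q-learning over a set of channels.

    avail[i] is the (assumed) probability that channel i is free
    of primary-user activity in a given slot.
    """
    rng = random.Random(seed)
    q = [0.0] * len(avail)              # one Q-value per channel
    for _ in range(episodes):
        # epsilon-greedy selection: explore occasionally, else exploit
        if rng.random() < epsilon:
            a = rng.randrange(len(avail))
        else:
            a = max(range(len(avail)), key=q.__getitem__)
        # reward 1 if the transmission succeeds (channel free), else 0
        r = 1.0 if rng.random() < avail[a] else 0.0
        # incremental Q update toward the observed reward
        q[a] += alpha * (r - q[a])
    return q

q = run_agent()
best = q.index(max(q))  # the agent converges on the most available channel
```

Each Q-value converges toward the corresponding channel's availability, so greedy selection ends up favoring the channel least occupied by primary users; this single-agent sketch omits the cooperative information-sharing aspects that the chapter's case study evaluates.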
2019
Handbook of Cognitive Radio
978-981-10-1393-5
978-981-10-1394-2
Springer Singapore
Di Felice, M.; Bedogni, L.; Bononi, L.
Files associated with this record:
No files are associated with this record.

Creative Commons License
The metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright violation, contact Iris Support

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1197999
Citations
  • PMC: n/a
  • Scopus: 4
  • Web of Science: n/a