
Management and Orchestration of Virtual Network Functions via Deep Reinforcement Learning / Pujol Roig, J. S.; Gutierrez-Estevez, D.; Gunduz, D.. - In: IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS. - ISSN 0733-8716. - 38:2(2020), pp. 304-317. [10.1109/JSAC.2019.2959263]

Management and Orchestration of Virtual Network Functions via Deep Reinforcement Learning


Abstract

Management and orchestration (MANO) of resources by virtual network functions (VNFs) represents one of the key challenges towards a fully virtualized network architecture as envisaged by 5G standards. Current threshold-based policies inefficiently over-provision network resources and under-utilize available hardware, incurring high costs for network operators and, consequently, users. In this work, we present a MANO algorithm for VNFs that allows a central unit (CU) to learn to autonomously re-configure resources (processing power and storage), deploy new VNF instances, or offload them to the cloud, depending on the network conditions, the available pool of resources, and the VNF requirements, with the goal of minimizing a cost function that accounts for the economic cost as well as the latency and quality-of-service (QoS) experienced by the users. First, we formulate the stochastic resource optimization problem as a parameterized action Markov decision process (PAMDP). Then, we propose a solution based on deep reinforcement learning (DRL). More precisely, we present a novel RL approach, called parameterized action twin (PAT) deterministic policy gradient, which leverages an actor-critic architecture to learn to provision resources to the VNFs in an online manner. Finally, we present numerical performance results and map them to 5G key performance indicators (KPIs). To the best of our knowledge, this is the first work to consider DRL for MANO of VNFs' physical resources.
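The PAMDP action structure described in the abstract — a discrete management decision (e.g. re-configure, instantiate, or offload a VNF) paired with continuous resource parameters — can be illustrated with a minimal numpy sketch. This is not the paper's PAT architecture: the state dimensions, action labels, and the linear stand-ins for the deep actor and critic networks below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 3   # hypothetical discrete decisions: re-configure, deploy, offload
STATE_DIM = 4   # assumed state features, e.g. load, queue, free CPU, free storage
PARAM_DIM = 2   # continuous parameters per action, e.g. CPU and storage shares

# Randomly initialised linear maps stand in for the deep actor/critic networks.
W_actor = rng.normal(scale=0.1, size=(N_ACTIONS, PARAM_DIM, STATE_DIM))
W_critic = rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM + PARAM_DIM))

def actor(state):
    """Deterministic actor: one continuous parameter vector per discrete action,
    squashed into (0, 1) as fractions of the resource pool."""
    return 1.0 / (1.0 + np.exp(-(W_actor @ state)))

def critic(state, params):
    """Q-value of each (discrete action, continuous parameter vector) pair."""
    feats = np.concatenate([np.tile(state, (N_ACTIONS, 1)), params], axis=1)
    return np.einsum("af,af->a", W_critic, feats)

def select_action(state):
    params = actor(state)        # shape (N_ACTIONS, PARAM_DIM)
    q = critic(state, params)    # shape (N_ACTIONS,)
    k = int(np.argmax(q))        # discrete part: greedy over Q-values
    return k, params[k]          # the parameterized action (k, x_k)

state = rng.normal(size=STATE_DIM)
k, x = select_action(state)
```

In training, the critic would be fit to observed costs (economic cost, latency, QoS) and the actor updated along the critic's gradient with respect to the continuous parameters, as in deterministic policy gradient methods; the snippet above only shows how a parameterized action is composed and selected.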
Year: 2020
Volume: 38
Issue: 2
Pages: 304-317
Pujol Roig, J. S.; Gutierrez-Estevez, D.; Gunduz, D.
Files for this item:
No files are associated with this item.

Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Supporto Iris.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1202572
Citations
  • PMC: n/a
  • Scopus: 43
  • Web of Science (ISI): 33