
Deep-Learning-Aided Wireless Video Transmission

Gunduz D.
2022

Abstract

We present DeepWiVe, the first-ever end-to-end joint source-channel coding (JSCC) video transmission scheme that leverages the power of deep neural networks (DNNs) to map video signals directly to channel symbols, combining the video compression, channel coding, and modulation steps into a single neural transform. Our DNN decoder predicts residuals without distortion feedback, which improves video quality by accounting for occlusion/disocclusion and camera movements. We simultaneously train different bandwidth allocation networks for the frames to allow variable-bandwidth transmission. We then train a bandwidth allocation network using reinforcement learning (RL) that optimizes the allocation of the limited available channel bandwidth among video frames to maximize overall visual quality. Our results show that DeepWiVe overcomes the cliff effect prevalent in conventional separation-based digital communication schemes and degrades gracefully as the mismatch between the estimated and actual channel qualities grows. DeepWiVe outperforms H.264 video compression followed by low-density parity-check (LDPC) codes in all channel conditions, by up to 0.0485 on average in terms of the multi-scale structural similarity index measure (MS-SSIM).
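As a rough, illustrative sketch of the end-to-end JSCC idea the abstract describes (a single neural transform mapping frames to channel symbols, trained straight through a noisy channel), the following minimal PyTorch example may help. The layer layout, the awgn channel model, and all names (JSCCEncoder, JSCCDecoder, bandwidth) are assumptions made here for exposition, not the authors' DeepWiVe architecture.

# Illustrative sketch only: a minimal DeepJSCC-style pipeline, NOT the
# authors' DeepWiVe architecture. Shapes, names, and the AWGN channel
# model are assumptions made for exposition.
import torch
import torch.nn as nn

class JSCCEncoder(nn.Module):
    """Maps a video frame directly to channel symbols (hypothetical layout)."""
    def __init__(self, bandwidth: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.PReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.PReLU(),
        )
        self.to_symbols = nn.LazyLinear(2 * bandwidth)  # real/imag parts

    def forward(self, frame):
        z = self.to_symbols(self.conv(frame).flatten(1))
        # Normalize to satisfy an average transmit-power constraint.
        return z / z.norm(dim=1, keepdim=True) * (z.shape[1] ** 0.5)

class JSCCDecoder(nn.Module):
    """Reconstructs the frame from the noisy channel output (hypothetical layout)."""
    def __init__(self, bandwidth: int, h: int, w: int):
        super().__init__()
        self.h, self.w = h, w
        self.fc = nn.Linear(2 * bandwidth, 64 * (h // 4) * (w // 4))
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 5, stride=2, padding=2, output_padding=1), nn.PReLU(),
            nn.ConvTranspose2d(32, 3, 5, stride=2, padding=2, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, y):
        z = self.fc(y).view(-1, 64, self.h // 4, self.w // 4)
        return self.deconv(z)

def awgn(x, snr_db: float):
    """Additive white Gaussian noise channel at the given SNR (unit signal power)."""
    sigma = 10 ** (-snr_db / 20)
    return x + sigma * torch.randn_like(x)

# Usage: one end-to-end pass; encoder and decoder would be trained jointly
# on a distortion loss (e.g. 1 - MS-SSIM) straight through the noisy channel.
enc, dec = JSCCEncoder(bandwidth=256), JSCCDecoder(bandwidth=256, h=64, w=64)
frame = torch.rand(1, 3, 64, 64)
recon = dec(awgn(enc(frame), snr_db=10.0))

In DeepWiVe itself, a separately trained RL-based allocation network would additionally decide how many channel symbols each frame receives under the total bandwidth budget; that stage is omitted from this sketch.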
Year: 2022
Conference: 23rd IEEE International Workshop on Signal Processing Advances in Wireless Communication, SPAWC 2022
Location: Oulu, Finland
Dates: 4-6 July 2022
Volume: 2022-
Pages: 1-5
Authors: Tung, T. -Y.; Gunduz, D.
Deep-Learning-Aided Wireless Video Transmission / Tung, T. -Y.; Gunduz, D. - 2022-:(2022), pp. 1-5. (Paper presented at the 23rd IEEE International Workshop on Signal Processing Advances in Wireless Communication, SPAWC 2022, held in Oulu, Finland, 4-6 July 2022) [10.1109/SPAWC51304.2022.9833990].
Files in this item:
Deep-Learning-Aided_Wireless_Video_Transmission.pdf
Access: restricted
Type: Publisher's published version
Size: 1.16 MB
Format: Adobe PDF

Creative Commons License
Metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, please contact Supporto Iris.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1286030
Citations:
  • PubMed Central: ND
  • Scopus: 3
  • Web of Science: 0