
Automatic and smart content transformation of video lectures

Furini M.
2021

Abstract

The lockdown caused by the Covid-19 pandemic forced many educational institutions around the world to produce video lectures to support their students. A popular approach was to produce video lectures with a generic layout, without considering either the type of student or the type of device used to watch them. This complicated the learning process: for example, students with mobile devices and limited-bandwidth connections were forced to watch videos produced for large screens and virtually unlimited bandwidth. In this paper, we investigate whether the original video lectures can be automatically transformed into smartphone-suitable videos, and we involve students to assess the resulting viewing experience. We consider the video lectures available in the ONELab University video lecture catalog and design five different heuristics based on the semantic analysis of the video lectures. The experimental evaluation considers both quantitative and qualitative aspects, and the results show that it is possible to save more than 90% of the bandwidth while maintaining a viewing experience equal to that of the original video lectures.
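
To make the bandwidth claim concrete, the sketch below shows the kind of device-aware re-encoding that produces savings of this magnitude: downscaling a 1080p screencast to a 480p stream with a capped bitrate reduces a roughly 4 Mbps lecture to about 360 kbps, i.e. over 90% less data. This is an illustration only, not the authors' method: the paper's five semantic heuristics are not reproduced here, and the function name, resolution, and bitrate values are assumptions; it relies on a standard ffmpeg installation.

import subprocess

def transcode_for_smartphone(src: str, dst: str,
                             height: int = 480,
                             video_kbps: int = 300,
                             audio_kbps: int = 64) -> None:
    """Re-encode a lecture video for small screens and limited bandwidth.

    Illustrative sketch only: it simply downscales the video and caps the
    bitrate with ffmpeg, without any of the semantic analysis described
    in the paper.
    """
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-vf", f"scale=-2:{height}",            # keep aspect ratio, 480p height
        "-c:v", "libx264", "-b:v", f"{video_kbps}k",
        "-c:a", "aac", "-b:a", f"{audio_kbps}k",
        dst,
    ]
    subprocess.run(cmd, check=True)

# Hypothetical usage: a ~4 Mbps 1080p lecture re-encoded at ~364 kbps total,
# a reduction of roughly 91% in bandwidth.
# transcode_for_smartphone("lecture_1080p.mp4", "lecture_mobile.mp4")
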
Year: 2021
Conference: 18th IEEE Annual Consumer Communications and Networking Conference, CCNC 2021
Location: USA
Pages: 1-6
Author: Furini, M.
Automatic and smart content transformation of video lectures / Furini, M. - (2021), pp. 1-6. (Paper presented at the 18th IEEE Annual Consumer Communications and Networking Conference, CCNC 2021, held in the USA in 2021) [10.1109/CCNC49032.2021.9369592].
Files associated with this item: there are no files associated with this item.


Use this identifier to cite or link to this item: https://hdl.handle.net/11380/1244495