Hierarchical Boundary-Aware Neural Encoder for Video Captioning

Baraldi, Lorenzo; Grana, Costantino; Cucchiara, Rita
2017

Abstract

The use of Recurrent Neural Networks for video captioning has recently gained considerable attention, since they can be used both to encode the input video and to generate the corresponding description. In this paper, we present a recurrent video encoding scheme which can discover and leverage the hierarchical structure of the video. Unlike the classical encoder-decoder approach, in which a video is encoded continuously by a recurrent layer, we propose a novel LSTM cell which can identify discontinuity points between frames or segments and modify the temporal connections of the encoding layer accordingly. We evaluate our approach on three large-scale datasets: the Montreal Video Annotation dataset, the MPII Movie Description dataset and the Microsoft Video Description Corpus. Experiments show that our approach can discover appropriate hierarchical representations of input videos and improve state-of-the-art results on movie description datasets.
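The boundary-aware encoding idea in the abstract is straightforward to illustrate in code. The following is a minimal PyTorch sketch, not the authors' implementation: the class name, the linear boundary detector, and the hard 0.5 threshold are assumptions for illustration. The idea is that a learned detector fires at discontinuity points between frames, and the LSTM state is reset across those points so that each segment is encoded independently; the published model additionally feeds segment-level representations to a higher encoding layer, which this sketch omits.

```python
import torch
import torch.nn as nn

class BoundaryAwareLSTM(nn.Module):
    """Illustrative boundary-aware recurrent encoder (not the paper's code).

    A learned detector estimates, at every frame, the probability that a
    new segment begins; when it fires, the LSTM hidden and cell states
    are reset, cutting the temporal connections at that boundary."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        # Hypothetical boundary detector: current input plus previous
        # hidden state, mapped to a score in [0, 1].
        self.boundary = nn.Linear(input_size + hidden_size, 1)

    def forward(self, frames):
        # frames: (batch, time, input_size) frame features, e.g. from a CNN
        batch, time, _ = frames.shape
        h = frames.new_zeros(batch, self.cell.hidden_size)
        c = frames.new_zeros(batch, self.cell.hidden_size)
        outputs = []
        for t in range(time):
            x = frames[:, t]
            # Probability that a new segment starts at frame t.
            s = torch.sigmoid(self.boundary(torch.cat([x, h], dim=1)))
            gate = (s > 0.5).float()  # hard decision; training would need a
                                      # differentiable surrogate (assumption)
            h, c = h * (1 - gate), c * (1 - gate)  # reset state at boundaries
            h, c = self.cell(x, (h, c))
            outputs.append(h)
        return torch.stack(outputs, dim=1)

# Usage: encode a batch of 2 videos, 16 frames each, 512-d features per frame.
encoder = BoundaryAwareLSTM(input_size=512, hidden_size=256)
features = torch.randn(2, 16, 512)
encoded = encoder(features)  # (2, 16, 256)
```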
Year: 2017
Conference: 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
Location: Honolulu, Hawaii
Dates: July 22-25
Volume: 2017-
Pages: 3185-3194
Authors: Baraldi, Lorenzo; Grana, Costantino; Cucchiara, Rita
Hierarchical Boundary-Aware Neural Encoder for Video Captioning / Baraldi, Lorenzo; Grana, Costantino; Cucchiara, Rita. - 2017-:(2017), pp. 3185-3194. (Paper presented at the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, held in Honolulu, Hawaii, July 22-25) [10.1109/CVPR.2017.339].
Files in this record:
File: 2017CVPR.pdf
Access: Open access
Type: Author's revised version, accepted for publication (post-print)
Size: 1.64 MB
Format: Adobe PDF
License: metadata in IRIS UNIMORE is released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1127537
Citations
  • PubMed Central: ND
  • Scopus: 144
  • Web of Science: 111