
Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions / Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita. - (2019), pp. 8299-8308. (Paper presented at the 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019, held in Long Beach, CA, USA, June 16-20, 2019) [10.1109/CVPR.2019.00850].

Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions

Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita
2019

Abstract

Current captioning approaches describe images using black-box architectures whose behavior is hard to control or explain from the outside. As an image can be described in countless ways depending on the goal and the context at hand, a higher degree of controllability is needed to apply captioning algorithms in complex scenarios. In this paper, we introduce a novel framework for image captioning that generates diverse descriptions by allowing both grounding and controllability. Given a control signal in the form of a sequence or set of image regions, we generate the corresponding caption through a recurrent architecture that predicts textual chunks explicitly grounded on regions, following the constraints of the given control. Experiments are conducted on Flickr30k Entities and on COCO Entities, an extended version of COCO to which we add grounding annotations collected in a semi-automatic manner. Results demonstrate that our method achieves state-of-the-art performance on controllable image captioning, in terms of both caption quality and diversity. Code and annotations are publicly available at: https://github.com/aimagelab/show-control-and-tell.
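The abstract describes chunk-wise generation that follows a control signal given as an ordered sequence of image regions. The actual method is a learned recurrent neural model; as a minimal illustrative sketch only (not the authors' implementation), the control-following, chunk-per-region structure can be mimicked in plain Python, where the hypothetical `toy_chunk_generator` stands in for the learned decoder that produces words grounded on each region:

```python
# Illustrative sketch (assumption, not the paper's code): the control
# signal is an ordered sequence of image regions, and the caption is
# built as one textual chunk per region, in the controlled order.

def generate_controlled_caption(regions, chunk_generator):
    """Build a caption by emitting one chunk per control region.

    regions: ordered control signal (here, toy region labels).
    chunk_generator: maps a region to the words of its chunk; in the
        real model this role is played by a recurrent language model
        whose output is explicitly grounded on the region.
    """
    chunks = []
    for region in regions:               # follow the control sequence
        chunk = chunk_generator(region)  # words grounded on this region
        chunks.append(" ".join(chunk))
    return " ".join(chunks)

def toy_chunk_generator(region):
    """Toy stand-in for the learned chunk decoder."""
    templates = {
        "dog": ["a", "brown", "dog"],
        "frisbee": ["catches", "a", "frisbee"],
        "park": ["in", "the", "park"],
    }
    return templates.get(region, ["something"])

caption = generate_controlled_caption(
    ["dog", "frisbee", "park"], toy_chunk_generator
)
print(caption)  # -> a brown dog catches a frisbee in the park
```

Reordering or changing the region sequence yields a different caption for the same image, which is the controllability and diversity property the abstract refers to.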
Conference: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
Location: Long Beach, CA, USA
Dates: June 16-20, 2019
Pages: 8299-8308
Files in this record:
- 2019-cvpr-captioning.pdf — Open access; type: author's version, revised and accepted for publication; size: 9.1 MB; format: Adobe PDF

Creative Commons license
Metadata in IRIS UNIMORE is released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Iris Support.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1171698
Citations
  • Scopus: 123
  • Web of Science: 73