SMArT: Training Shallow Memory-aware Transformers for Robotic Explainability

Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita
2020

Abstract

The ability to generate natural language explanations conditioned on visual perception is a crucial step towards autonomous agents that can explain themselves and communicate with humans. While research efforts in image and video captioning are yielding promising results, this often comes at the expense of high computational requirements, limiting applicability in real-world contexts. In this paper, we propose a fully-attentive captioning algorithm that provides state-of-the-art performance on language generation while restricting its computational demands. Our model is inspired by the Transformer and employs only two Transformer layers in both the encoding and decoding stages. Further, it incorporates a novel memory-aware encoding of image regions. Experiments demonstrate that our approach achieves competitive caption quality with reduced computational demands. Finally, to evaluate its applicability to autonomous agents, we conduct experiments on simulated scenes captured from the perspective of domestic robots.
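The memory-aware region encoding mentioned above can be illustrated with a minimal NumPy sketch: learned memory slots are appended to the attention keys and values so that the encoder can attend to priors beyond the detected image regions. All names, shapes, and weight initializations here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_aware_attention(regions, W_q, W_k, W_v, M_k, M_v):
    """Self-attention over image region features whose keys and values
    are extended with learned memory slots (M_k, M_v)."""
    q = regions @ W_q                          # queries:  (n, d)
    k = np.concatenate([regions @ W_k, M_k])   # keys:     (n + m, d)
    v = np.concatenate([regions @ W_v, M_v])   # values:   (n + m, d)
    att = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (n, n + m)
    return att @ v                             # encoded regions: (n, d)

rng = np.random.default_rng(0)
n, m, d = 5, 4, 8   # regions, memory slots, feature dimension (illustrative)
regions = rng.standard_normal((n, d))
W_q, W_k, W_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
M_k = rng.standard_normal((m, d))
M_v = rng.standard_normal((m, d))

out = memory_aware_attention(regions, W_q, W_k, W_v, M_k, M_v)
print(out.shape)  # one encoded vector per input region
```

In the full model, a block like this would be stacked only twice in the encoder (and symmetrically in the decoder), which is what keeps the computational footprint small relative to deeper captioning Transformers.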
Year: 2020
Conference: International Conference on Robotics and Automation
Location: Paris, France
Dates: May 31 - June 4
Pages: 1128-1134
Authors: Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita
SMArT: Training Shallow Memory-aware Transformers for Robotic Explainability / Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita. - (2020), pp. 1128-1134. (Paper presented at the International Conference on Robotics and Automation, held in Paris, France, May 31 - June 4) [10.1109/ICRA40945.2020.9196653].
Files for this item:
2020_ICRA_SMArT.pdf (Open access)
Type: AAM - Author's version, revised and accepted for publication
Size: 747.82 kB
Format: Adobe PDF
Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1188090
Citations
  • Scopus: 20
  • Web of Science: 18