
Wearable Vision for Retrieving Architectural Details in Augmented Tourist Experiences / Alletto, Stefano; Serra, Giuseppe; Cucchiara, Rita. - (2015), pp. 134-139. (Paper presented at the 7th International Conference on Intelligent Technologies for Interactive Entertainment, INTETAIN 2015, held in Torino, 10-12 June 2015) [10.4108/icst.intetain.2015.260034].

Wearable Vision for Retrieving Architectural Details in Augmented Tourist Experiences

Alletto, Stefano; Serra, Giuseppe; Cucchiara, Rita
2015

Abstract

Interest in cultural cities is constantly growing, and so is the demand for new multimedia tools and applications that enrich the visiting experience. In this paper we propose an egocentric vision system to enhance tourists' cultural heritage experience. Using a wearable board and a glass-mounted camera, visitors can retrieve architectural details of the historical building they are observing and receive related multimedia content. To obtain an effective retrieval procedure, we propose a visual descriptor based on the covariance of local features. Unlike common Bag of Words approaches, our feature vector does not rely on a generated visual vocabulary, removing the dependence on a specific dataset and reducing the computational cost. 3D modeling is used to achieve precise visitor localization, which allows browsing relevant visible details that the user might otherwise miss. Experimental results on a publicly available cultural heritage dataset show that the proposed feature descriptor outperforms Bag of Words techniques.
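The abstract does not list the exact local feature channels or comparison metric the paper uses, but the general covariance-descriptor idea can be sketched as follows: stack the region's local feature vectors into a matrix, take their covariance as a fixed-size descriptor (no visual vocabulary needed), and compare descriptors with a metric suited to symmetric positive-definite matrices, such as the log-Euclidean distance. Feature dimensionality and the regularization constant here are illustrative assumptions.

```python
import numpy as np

def covariance_descriptor(features):
    """Fixed-size region descriptor: the d x d covariance of the
    n local feature vectors in `features` (shape (n, d))."""
    return np.cov(features, rowvar=False)

def log_euclidean_distance(C1, C2, eps=1e-6):
    """Distance between two covariance descriptors under the
    log-Euclidean metric for SPD matrices."""
    def logm_spd(C):
        # Matrix logarithm via symmetric eigendecomposition.
        w, V = np.linalg.eigh(C)
        w = np.maximum(w, eps)  # regularize near-zero eigenvalues
        return (V * np.log(w)) @ V.T
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2), 'fro')

# Illustrative usage with synthetic 5-dimensional local features
# (e.g. intensity, gradients, pixel coordinates per sample point).
rng = np.random.RandomState(0)
region_a = rng.randn(200, 5)
region_b = rng.randn(200, 5) * 2.0
Ca = covariance_descriptor(region_a)
Cb = covariance_descriptor(region_b)
print(Ca.shape)                        # (5, 5), independent of n
print(log_euclidean_distance(Ca, Cb))  # dissimilarity score
```

Because the descriptor size depends only on the feature dimensionality d, regions can be matched directly without building or storing a dataset-specific codebook, which is the computational advantage the abstract claims over Bag of Words pipelines.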
Year: 2015
Conference: 7th International Conference on Intelligent Technologies for Interactive Entertainment, INTETAIN 2015
Location: Torino
Date: 10-12 June 2015
Pages: 134-139
Authors: Alletto, Stefano; Serra, Giuseppe; Cucchiara, Rita


Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1084045
Citations
  • PubMed Central: n/a
  • Scopus: 4
  • Web of Science: 2