
Bridging vision and commonsense for multimodal situation recognition in pervasive systems / Bicocchi, Nicola; Lasagni, Matteo; Zambonelli, Franco. - Print. - (2012), pp. 48-56. (Paper presented at the IEEE International Conference on Pervasive Computing and Communications, held in Lugano, CH, 19-23 March 2012.)

Bridging vision and commonsense for multimodal situation recognition in pervasive systems

Bicocchi, Nicola; Lasagni, Matteo; Zambonelli, Franco
2012

Abstract

Pervasive services may have to rely on multimodal classification to implement situation recognition. However, the effectiveness of current multimodal classifiers is often not satisfactory. In this paper, we describe a novel approach to multimodal classification based on integrating a vision sensor with a commonsense knowledge base. Specifically, our approach extracts the individual objects perceived by a camera and classifies them individually with non-parametric algorithms; then, using a commonsense knowledge base, it classifies the overall scene with high effectiveness. Such classification results can then be fused with those of other sensors, again on a commonsense basis, both to improve classification accuracy and to deal with missing labels. Experimental results are presented to assess, under different configurations, the effectiveness of our vision sensor and its integration with other kinds of sensors, showing that the approach is effective and able to correctly recognize a number of situations in open-ended environments.
Conference: IEEE International Conference on Pervasive Computing and Communications
Location: Lugano, CH
Dates: 19-23 March 2012
Pages: 48-56
Authors: Bicocchi, Nicola; Lasagni, Matteo; Zambonelli, Franco
Files in this record:
No files are associated with this record.

Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11380/738479
Citations
  • Scopus: 13
  • Web of Science: 9