Emotions have recently been considered as a means to improve the indexing of video content, and two different approaches are usually followed: computing objective emotions through low-level video feature analysis, and computing subjective emotions by analyzing viewers' physical signals. In this paper we propose a different approach and present ViMood, a novel mechanism designed to improve the indexing of video material by integrating objective and subjective emotions. ViMood indexes every video scene with one or more emotions obtained by combining low-level feature analysis with on-the-fly viewer emotion annotation. The goal is to let viewers browse video material using either general information (e.g., title, director) or specific emotions (e.g., 'joy', 'sadness', 'surprise'). The evaluation showed that participants were very interested in the hybrid approach, as it addresses some of the shortcomings of the purely objective and purely subjective approaches.
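The hybrid indexing described above can be illustrated with a minimal sketch. The blending weight, emotion names, and function names below are hypothetical assumptions for illustration only; the paper does not specify how the two sources are combined.

```python
# Hypothetical sketch of ViMood-style hybrid emotion indexing: each scene's
# label combines an objective per-emotion score (from low-level feature
# analysis) with on-the-fly subjective viewer annotations. The 50/50 weight
# is an assumption, not taken from the paper.

def index_scene(objective_scores, viewer_annotations, weight=0.5):
    """Blend objective emotion scores with viewer annotation frequencies.

    objective_scores: dict mapping emotion -> score in [0, 1]
    viewer_annotations: list of emotion labels tagged by viewers
    weight: share given to the objective component (assumed)
    """
    total = len(viewer_annotations)
    combined = {}
    for emotion, obj in objective_scores.items():
        subj = viewer_annotations.count(emotion) / total if total else 0.0
        combined[emotion] = weight * obj + (1 - weight) * subj
    # index the scene under its strongest combined emotion
    return max(combined, key=combined.get)

def browse_by_emotion(index, emotion):
    """Return the scenes indexed under the requested emotion."""
    return [scene for scene, label in index.items() if label == emotion]

# Illustrative data only
index = {
    "scene_01": index_scene({"joy": 0.7, "sadness": 0.1, "surprise": 0.2},
                            ["joy", "joy", "surprise"]),
    "scene_02": index_scene({"joy": 0.2, "sadness": 0.8, "surprise": 0.1},
                            ["sadness"]),
}
print(browse_by_emotion(index, "joy"))      # ['scene_01']
```

Browsing by general information (title, director) would use an ordinary metadata lookup alongside this emotion index.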
|Publication date:||2015|
|Title:||ViMood: Using social emotions to improve video indexing|
|Digital Object Identifier (DOI):||10.1109/CCNC.2015.7158073|
|Conference name:||2015 12th Annual IEEE Consumer Communications and Networking Conference, CCNC 2015|
|Conference location:||Las Vegas, Nevada, USA|
|Conference dates:||9-12 January 2015|
|Type:||Conference proceedings paper|