DR(eye)VE: a Dataset for Attention-Based Tasks with Applications to Autonomous and Assisted Driving

Alletto, Stefano; Palazzi, Andrea; Solera, Francesco; Calderara, Simone; Cucchiara, Rita
2016

Abstract

Autonomous and assisted driving are undoubtedly hot topics in computer vision. However, the driving task is extremely complex, and a deep understanding of drivers' behavior is still lacking. Several researchers are now investigating attention mechanisms in order to define computational models for detecting salient and interesting objects in the scene. Nevertheless, most of these models only consider bottom-up visual saliency and focus on still images. During the driving experience, instead, the temporal nature and the peculiarity of the task influence the attention mechanisms, leading to the conclusion that real-life driving data is mandatory. In this paper we propose a novel and publicly available dataset acquired during actual driving. Our dataset, composed of more than 500,000 frames, contains drivers' gaze fixations and their temporal integration, providing task-specific saliency maps. Geo-referenced locations, driving speed and course complete the set of released data. To the best of our knowledge, this is the first publicly available dataset of this kind, and it can foster new discussions on better understanding, exploiting and reproducing the driver's attention process in the autonomous and assisted cars of future generations.
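To make the idea of "temporal integration of gaze fixations" concrete, below is a minimal, illustrative sketch of how per-frame fixations could be accumulated over a short temporal window and smoothed into a dense saliency map. It is not the authors' actual pipeline: the function name, the window size, the Gaussian width and the fixation format (frame index plus pixel coordinates) are all assumptions made for illustration only.

```python
# Illustrative sketch only: temporal integration of gaze fixations into a
# dense saliency map. Fixations are assumed to be (frame_idx, x, y) pixel
# coordinates; window size and Gaussian sigma are arbitrary choices, NOT
# the values used for the DR(eye)VE dataset.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fixations, frame_idx, frame_shape=(1080, 1920),
                 window=25, sigma=60.0):
    """Accumulate fixations from a temporal window centered on `frame_idx`
    and blur them into a normalized saliency map in [0, 1]."""
    h, w = frame_shape
    acc = np.zeros((h, w), dtype=np.float32)
    for t, x, y in fixations:
        # keep only fixations inside the temporal window and the image bounds
        if abs(t - frame_idx) <= window // 2 and 0 <= int(y) < h and 0 <= int(x) < w:
            acc[int(y), int(x)] += 1.0
    sal = gaussian_filter(acc, sigma=sigma)  # spatial smoothing of fixation points
    if sal.max() > 0:
        sal /= sal.max()                     # normalize to [0, 1]
    return sal

# Usage example: three fixations around frame 100
fixations = [(99, 960.0, 540.0), (100, 965.0, 545.0), (101, 400.0, 300.0)]
saliency = fixation_map(fixations, frame_idx=100)
```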
2016
IEEE International Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Las Vegas, 2016
Alletto, Stefano; Palazzi, Andrea; Solera, Francesco; Calderara, Simone; Cucchiara, Rita
DR(eye)VE: a Dataset for Attention-Based Tasks with Applications to Autonomous and Assisted Driving / Alletto, Stefano; Palazzi, Andrea; Solera, Francesco; Calderara, Simone; Cucchiara, Rita. - (2016). (Paper presented at the IEEE International Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), held in Las Vegas in 2016) [DOI: 10.1109/CVPRW.2016.14].
Files in this product:
File: Alletto_DREyeVe_A_Dataset_CVPR_2016_paper.pdf (Open access)
Type: Version published by the publisher
Size: 2.21 MB
Format: Adobe PDF

Creative Commons License
Metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1104385
Citations
  • PMC: not available
  • Scopus: 78
  • Web of Science (ISI): 52