
Texting and Driving Recognition leveraging the Front Camera of Smartphones / Montori, F.; Spallone, M.; Bedogni, L. - 2023-:(2023), pp. 1098-1103. (Paper presented at the 20th IEEE Consumer Communications and Networking Conference, CCNC 2023, held in the USA in 2023) [10.1109/CCNC51644.2023.10060838].

Texting and Driving Recognition leveraging the Front Camera of Smartphones

Bedogni L.
2023

Abstract

The recognition of the activity of texting while driving is an open problem in the literature and is crucial for safety in the automotive domain. Its detection can enable new insurance policies and increase overall road safety. Many works in the literature leverage smartphone sensors for this purpose; however, these methods have been shown to take a considerable amount of time to perform a recognition with sufficient confidence. In this paper we propose to leverage the smartphone front camera to perform image classification and recognize whether the subject is seated in the driver position or in the passenger position. We first applied standalone Convolutional Neural Networks with poor results, then focused on object detection-based algorithms to detect the presence and position of discriminant objects (i.e., the seat belts and the car window). We then applied the model to short videos, classifying frame by frame until reaching a satisfactory confidence. Results show that we reach around 90% accuracy within only a few seconds of video, demonstrating the applicability of our method in the real world.
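The abstract describes deciding between the driver and passenger positions by classifying a video frame by frame and stopping once the accumulated confidence is satisfactory. A minimal sketch of that decision loop is below; it is hypothetical, not the paper's implementation: `classify_frame` is a stub standing in for the object-detection model, and the accumulation rule (average per-class confidence against a threshold) is an assumption.

```python
def classify_frame(frame):
    """Stub per-frame classifier returning (label, confidence).

    In the paper's pipeline this would run an object detector over the
    frame (seat belts, car window) and derive a position label from the
    detected objects; here each frame is already a (label, conf) pair.
    """
    return frame

def recognize_position(frames, threshold=0.9):
    """Classify frames one by one, accumulating per-class confidence.

    Stops early as soon as the leading class's average confidence over
    the frames seen so far reaches the threshold. Returns the decided
    label and the number of frames consumed.
    """
    scores = {"driver": 0.0, "passenger": 0.0}
    for n, frame in enumerate(frames, start=1):
        label, conf = classify_frame(frame)
        scores[label] += conf
        best = max(scores, key=scores.get)
        if scores[best] / n >= threshold:
            return best, n  # confident early decision
    # video exhausted without reaching the threshold: return the majority
    best = max(scores, key=scores.get)
    return best, len(frames)
```

With a confident first frame the loop stops immediately, which mirrors the abstract's claim that only a few seconds of video suffice; noisy frames simply delay the decision rather than forcing a full-video pass.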


Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1315547