
Assistive Navigation using Deep Reinforcement Learning Guiding Robot with UWB/Voice Beacons and Semantic Feedbacks for Blind and Visually Impaired People / Lu, Chen-Lung; Liu, Zi-Yan; Huang, Jui-Te; Huang, Ching-I; Wang, Bo-Hui; Chen, Yi; Wu, Nien-Hsin; Wang, Hsueh-Cheng; Giarrè, Laura; Kuo, Pei-Yi. - In: FRONTIERS IN ROBOTICS AND AI. - ISSN 2296-9144. - 8:(2021), pp. 1-23. [10.3389/frobt.2021.654132]

Assistive Navigation using Deep Reinforcement Learning Guiding Robot with UWB/Voice Beacons and Semantic Feedbacks for Blind and Visually Impaired People

Laura Giarré
2021

Abstract

Facilitating navigation in pedestrian environments is critical for enabling people who are blind and visually impaired (BVI) to achieve independent mobility. A deep reinforcement learning (DRL)–based assistive guiding robot with ultrawide-bandwidth (UWB) beacons that can navigate through routes with designated waypoints was designed in this study. Typically, a simultaneous localization and mapping (SLAM) framework is used to estimate the robot pose and navigational goal; however, SLAM frameworks are vulnerable in certain dynamic environments. The proposed navigation method is a learning approach based on state-of-the-art DRL and can effectively avoid obstacles. When used with UWB beacons, the proposed strategy is suitable for environments with dynamic pedestrians. We also designed a handle device with an audio interface that enables BVI users to interact with the guiding robot through intuitive feedback. The UWB beacons were installed with an audio interface to obtain environmental information. The on-handle and on-beacon verbal feedback provides points of interest and turn-by-turn information to BVI users. BVI users were recruited in this study to conduct navigation tasks in different scenarios. A route was designed in a simulated ward to represent daily activities. In real-world situations, SLAM-based state estimation might be affected by dynamic obstacles, and vision-based trail following may suffer from occlusion by pedestrians or other obstacles. The proposed system successfully navigated through environments with dynamic pedestrians, in which systems based on existing SLAM algorithms failed.
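To make the architecture in the abstract concrete, the sketch below shows one plausible shape of the navigation loop: UWB beacon ranges fix the robot position, the next waypoint becomes a relative goal, and a trained DRL policy maps the laser scan plus that goal to velocity commands. It is a minimal illustration, not the authors' implementation; the beacon layout, the trilaterate and navigation_step helpers, and the stand-in policy are all hypothetical.

import numpy as np

def trilaterate(beacons: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Least-squares 2-D position from >= 3 UWB beacon range readings."""
    # Subtract the first beacon's range equation from the others to
    # linearize ||p - b_i||^2 = r_i^2 into the system A p = b.
    b0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - b0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

def navigation_step(policy, scan, pose, waypoint):
    """One control step: (laser scan, relative goal) -> (v, w) commands."""
    x, y, yaw = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    goal = np.array([np.hypot(dx, dy),            # distance to waypoint
                     np.arctan2(dy, dx) - yaw])   # bearing in robot frame
    obs = np.concatenate([scan, goal])
    return policy(obs)                            # trained DRL actor network

if __name__ == "__main__":
    beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    ranges = np.array([5.0, 7.07, 7.07])          # noisy UWB range readings
    pos = trilaterate(beacons, ranges)
    # Stand-in for the learned policy: creep forward, steer toward the goal.
    dummy_policy = lambda obs: (0.5, 0.8 * obs[-1])
    scan = np.full(180, 4.0)                      # fake 180-beam laser scan
    v, w = navigation_step(dummy_policy, scan, (*pos, 0.0), (8.0, 8.0))
    print(f"pose ~ {pos.round(2)}, cmd = (v={v:.2f}, w={w:.2f})")

In the actual system the policy would be the network trained with DRL and the pose would be fused from UWB ranging and odometry; the pieces are wired together here only to show the data flow.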
Year: 2021
Volume: 8
Pages: 1-23
Files in this product:
File: frobt-08-654132.pdf (Open access)
Type: Version published by the publisher
Size: 3.1 MB
Format: Adobe PDF

Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Iris Support.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1245875
Citations
  • PMC: n/a
  • Scopus: 24
  • Web of Science: 15