
Deep Trail-Following Robotic Guide Dog in Pedestrian Environments for People who are Blind and Visually Impaired - Learning from Virtual and Real Worlds

Laura Giarré
2018

Abstract

Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes are commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs can assist in avoiding physical contact with obstacles or other pedestrians. However, both tactile trail infrastructure and guide dogs are expensive to maintain. Inspired by the autonomous lane following of self-driving cars, we sought to combine the capabilities of existing navigation solutions for BVI users. We propose an autonomous, trail-following robotic guide dog that is robust to variations in background texture, illumination, and interclass trail appearance. A deep convolutional neural network (CNN) is trained on data from both virtual and real-world environments. Our work makes two major contributions: 1) experiments verifying that the performance of our models trained in virtual worlds is comparable to that of models trained in the real world; and 2) user studies with 10 blind users verifying that the proposed robotic guide dog can effectively assist them in reliably following man-made trails.
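To make the approach concrete, here is a minimal, hypothetical sketch of a trail-following CNN of the kind the abstract describes: a network that classifies camera frames into heading commands. This is not the authors' published architecture; the layer sizes, the three-way label set (left / straight / right), and the input resolution are all illustrative assumptions, written in PyTorch.

import torch
import torch.nn as nn

class TrailFollowerCNN(nn.Module):
    # Hypothetical architecture: maps an RGB frame to one of three assumed
    # heading classes (turn-left, go-straight, turn-right).
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size features regardless of input resolution
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Example forward pass on dummy 120x160 frames standing in for camera data.
# Mixing virtual-world and real-world images, as the abstract suggests, would
# happen in the training data loader, not in the model itself.
model = TrailFollowerCNN()
frames = torch.randn(8, 3, 120, 160)
logits = model(frames)          # shape: (8, 3)
heading = logits.argmax(dim=1)  # predicted heading class per frame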
Year: 2018
Conference: 2018 IEEE International Conference on Robotics and Automation (ICRA)
Location: Brisbane, Australia
Conference year: 2018
Authors: Tzu-Kuan, Chuang; Ni-Ching, Lin; Jih-Shi, Chen; Chen-Hao, Hung; Yi-Wei, Huang; Chun-Chih, Teng; Haikun, Huang; Lap-Fai, Yu; Giarrè, Laura; Hsueh-Cheng, Wang
Citation: Deep Trail-Following Robotic Guide Dog in Pedestrian Environments for People who are Blind and Visually Impaired - Learning from Virtual and Real Worlds / Tzu-Kuan, Chuang; Ni-Ching, Lin; Jih-Shi, Chen; Chen-Hao, Hung; Yi-Wei, Huang; Chun-Chih, Teng; Haikun, Huang; Lap-Fai, Yu; Giarrè, Laura; Hsueh-Cheng, Wang. - (2018). (Paper presented at the 2018 IEEE International Conference on Robotics and Automation (ICRA), held in Brisbane, Australia, in 2018) [10.1109/ICRA.2018.8460994].
Files in this product:
There are no files associated with this product.

Creative Commons License
Metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Iris Support

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1167367
Citations
  • PMC: ND
  • Scopus: 47
  • Web of Science: 30