A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot / Antonelli, M.; Gibaldi, A.; Beuth, F.; Duran, A. J.; Canessa, A.; Chessa, M.; Solari, F.; Del Pobil, A. P.; Hamker, F.; Chinellato, E.; Sabatini, S. P.. - In: IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT. - ISSN 1943-0604. - 6:4(2014), pp. 259-273. [10.1109/TAMD.2014.2332875]
A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot
Antonelli M.; Gibaldi A.; Canessa A.;
2014
Abstract
Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities individually, in this work we integrate a number of computational models into a unified framework, and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal, and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can work separately or cooperate to support more structured and effective behaviors.
File: A_Hierarchical_System_for_a_Distributed_Representation_of_the_Peripersonal_Space_of_a_Humanoid_Robot.pdf (restricted access; publisher's published version; 2.14 MB; Adobe PDF)
The metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while the publication files are released under an Attribution 4.0 International license (CC BY 4.0), unless otherwise indicated.