Population coding for a reward-modulated Hebbian learning of vergence control / Gibaldi, A.; Canessa, A.; Chessa, M.; Solari, F.; Sabatini, S. P. - (2013), pp. 1-8. (Paper presented at the 2013 International Joint Conference on Neural Networks, IJCNN 2013, held in Dallas, TX, USA, August 4-9, 2013) [10.1109/IJCNN.2013.6706821].
Population coding for a reward-modulated Hebbian learning of vergence control
Gibaldi A.; Canessa A.; Chessa M.; Solari F.; Sabatini S. P.
2013
Abstract
We show how a cortical model of early disparity detectors can autonomously learn effective control signals to drive the vergence eye movements of a binocular active vision system. The proposed approach employs early binocular mechanisms of vision and basic learning processes such as synaptic plasticity and reward modulation. The computational substrate consists of a population of modeled V1 complex cells, which provides a distributed representation of binocular disparity information. The population response also provides a global signal that describes the state of the system and thus its deviation from the desired vergence position. By taking into account how its internal state changes as a consequence of the action performed, the proposed network evolves following a differential Hebbian rule. Furthermore, the weight update is driven by an intrinsic signal derived from the overall activity of the population. Exploiting this signal implies a maximization of the population activity itself, thus providing a highly effective reward for the development of a stable and accurate vergence behaviour. The efficacy of the proposed intrinsic reward signal is comparatively assessed against the ground-truth signal (the actual disparity); the two yield equivalent results, thus validating the approach. Experimental tests in a simulated environment demonstrate that the proposed network is able to cope with vergent geometry and thus to learn effective vergence movements for static and moving visual targets in realistic situations. © 2013 IEEE.
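The abstract describes a differential Hebbian rule whose weight update is gated by an intrinsic reward, namely the overall activity of the disparity-tuned population. The following minimal sketch (not the authors' code) illustrates the general idea with a simpler, standard reward-modulated Hebbian update driven by exploration noise; the Gaussian tuning curves, the linear readout, and all parameter values are assumptions introduced only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of disparity-tuned cells.
n_cells = 64
preferred = np.linspace(-2.0, 2.0, n_cells)      # preferred disparities (deg)
amplitude = np.exp(-0.5 * preferred ** 2)        # cells tuned near zero respond more strongly
tuning_sigma = 0.3

def population_response(disparity):
    """Gaussian tuning curves; the summed activity peaks at zero disparity."""
    return amplitude * np.exp(-0.5 * ((disparity - preferred) / tuning_sigma) ** 2)

# Linear readout from the population response to a vergence command.
W = np.zeros(n_cells)
eta = 0.05            # learning rate
noise_sigma = 0.1     # exploration noise on the motor command
baseline = 0.0        # running estimate of the expected reward

for trial in range(20000):
    d = rng.uniform(-1.5, 1.5)                 # disparity of the current target
    r = population_response(d)

    noise = rng.normal(scale=noise_sigma)
    action = W @ r + noise                     # vergence movement, with exploration

    # Intrinsic reward: overall population activity after the movement.
    reward = population_response(d - action).sum()

    # Reward-modulated Hebbian update: correlate the exploratory part of the
    # command with the presynaptic responses, gated by the reward prediction error.
    W += eta * (reward - baseline) * noise * r
    baseline += 0.05 * (reward - baseline)
```

Because the summed activity of this toy population is maximal at zero disparity, climbing the intrinsic reward pushes the readout toward commands that cancel the residual disparity, which is the intuition behind using the overall population activity as a reward in the scheme described above.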
File | Type | Size | Format
---|---|---|---
Population_coding_for_a_reward-modulated_Hebbian_learning_of_vergence_control.pdf (restricted access) | Publisher's published version | 473.72 kB | Adobe PDF