

Learning a compositional hierarchy of disparity descriptors for 3D orientation estimation in an active fixation setting

Gibaldi A.; Canessa A.
2017

Abstract

Interaction with everyday objects requires the active visual system to perform a fast and invariant reconstruction of their local shape layout, through a series of rapid binocular fixation movements that change the gaze direction on the 3-dimensional surface of the object. Active binocular viewing results in complex disparity fields that, although informative about orientation in depth (e.g., slant and tilt), depend strongly on the relative position of the eyes. By learning the statistical relationships between the differential properties of the disparity vector fields and the gaze directions, we expect to obtain more convenient, gaze-invariant visual descriptors. In this work, local approximations of the disparity vector field differentials are combined in a hierarchical neural network that is trained to represent slant and tilt from the disparity vector fields. Each gaze-related cell's activation in the intermediate representation is recurrently merged with the other cells' activations to achieve the desired gaze-invariant selectivity. Although the representation has been tested on a limited set of slant and tilt combinations, the resulting high classification rate validates the generalization capability of the approach.
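
The abstract outlines a two-stage architecture: local approximations of the disparity-field differentials feed an intermediate layer of gaze-related units, whose activations are then merged to yield a gaze-invariant readout of slant and tilt. The minimal Python/PyTorch sketch below illustrates that layered structure only; the layer sizes, the number of differential inputs, the number of slant/tilt classes, and the feed-forward mixing layer standing in for the paper's recurrent merging are all assumptions, not the authors' implementation.

```python
# Minimal illustrative sketch (not the authors' exact model): a small
# hierarchical classifier that maps local disparity-field differentials
# through a layer of gaze-related units, merges those activations, and
# predicts a discrete slant/tilt class. All dimensions are assumed.

import torch
import torch.nn as nn

class SlantTiltNet(nn.Module):
    def __init__(self, n_differentials=6, n_gaze_units=32, n_classes=16):
        super().__init__()
        # Stage 1: local disparity-field differentials drive gaze-related units.
        self.gaze_layer = nn.Sequential(
            nn.Linear(n_differentials, n_gaze_units),
            nn.ReLU(),
        )
        # Stage 2: each gaze-related activation is mixed with the others
        # (a simple fully connected layer here, in place of the recurrent
        # merging described in the abstract).
        self.merge_layer = nn.Sequential(
            nn.Linear(n_gaze_units, n_gaze_units),
            nn.ReLU(),
        )
        # Readout over discrete slant/tilt combinations.
        self.readout = nn.Linear(n_gaze_units, n_classes)

    def forward(self, x):
        h = self.gaze_layer(x)
        h = self.merge_layer(h)
        return self.readout(h)

if __name__ == "__main__":
    # Toy usage: a batch of 8 local differential descriptors.
    model = SlantTiltNet()
    descriptors = torch.randn(8, 6)
    logits = model(descriptors)
    print(logits.shape)  # torch.Size([8, 16])
```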
Year: 2017
Conference: 26th International Conference on Artificial Neural Networks, ICANN 2017
Location: Alghero, ITALY
Dates: SEP 11-14, 2017
Volume: 10614
Pages: 192-199
Kalou, K.; Gibaldi, A.; Canessa, A.; Sabatini, S. P.
Learning a compositional hierarchy of disparity descriptors for 3D orientation estimation in an active fixation setting / Kalou, K.; Gibaldi, A.; Canessa, A.; Sabatini, S. P. - 10614:(2017), pp. 192-199. (Paper presented at the 26th International Conference on Artificial Neural Networks, ICANN 2017, held in Alghero, ITALY, SEP 11-14, 2017) [10.1007/978-3-319-68612-7_22].
Files in this item:
There are no files associated with this item.

Use this identifier to cite or link to this item: https://hdl.handle.net/11380/1362483