
Emergence of associative learning in a neuromorphic inference network / Gandolfi, D.; Puglisi, F. M.; Boiani, G. M.; Pagnoni, G.; Friston, K. J.; D'Angelo, E.; Mapelli, J.. - In: JOURNAL OF NEURAL ENGINEERING. - ISSN 1741-2552. - 19:3(2022), pp. N/A-N/A. [10.1088/1741-2552/ac6ca7]

Emergence of associative learning in a neuromorphic inference network

Gandolfi, D.; Puglisi, F. M.; Boiani, G. M.; Pagnoni, G.; Friston, K. J.; D'Angelo, E.; Mapelli, J.
2022

Abstract

Objective. In the theoretical framework of predictive coding and active inference, the brain can be viewed as instantiating a rich generative model of the world that predicts incoming sensory data while continuously updating its parameters via minimization of prediction errors. While this theory has been successfully applied to cognitive processes (by modelling the activity of functional neural networks at a mesoscopic scale), the validity of the approach when modelling neurons as an ensemble of inferring agents, in a biologically plausible architecture, remained to be explored.

Approach. We modelled a simplified cerebellar circuit with individual neurons acting as Bayesian agents to simulate the classical delayed eyeblink conditioning protocol. Neurons and synapses adjusted their activity to minimize their prediction error, which was used as the network cost function. This cerebellar network was then implemented in hardware by replicating digital neuronal elements via a low-power microcontroller.

Main results. Persistent changes of synaptic strength, mirroring neurophysiological observations, emerged via local (neurocentric) prediction error minimization, leading to the expression of associative learning. The same paradigm was effectively emulated in low-power hardware, showing remarkably efficient performance compared to conventional neuromorphic architectures.

Significance. These findings show that: (a) an ensemble of free-energy-minimizing neurons, organized in a biologically plausible architecture, can recapitulate the functional self-organization observed in nature, such as associative plasticity, and (b) a neuromorphic network of inference units can learn unsupervised tasks without embedding predefined learning rules in the circuit, thus providing a potential avenue to a novel form of brain-inspired artificial intelligence.
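To illustrate the core idea of the Approach section (a neuron adjusting a synaptic weight by locally minimizing its own prediction error until a stimulus association is learned), here is a minimal toy sketch. This is not the authors' implementation: the single-weight delta rule, the paired CS/US inputs, and the names `cs`, `us`, and `eta` are illustrative assumptions standing in for the full Bayesian-agent network.

```python
# Toy sketch (NOT the paper's model): one "neuron" as a predictive agent.
# It predicts the unconditioned stimulus (US) from the conditioned
# stimulus (CS) via a single synaptic weight, and descends the gradient
# of the squared prediction error -- a minimal analogue of local
# (neurocentric) prediction-error minimization.

def train_association(trials=200, eta=0.1):
    w = 0.0  # synaptic strength of the CS pathway
    for _ in range(trials):
        cs, us = 1.0, 1.0        # stimuli presented together (paired trial)
        prediction = w * cs      # neuron's prediction of the US
        error = us - prediction  # local prediction error
        w += eta * error * cs    # delta-rule step minimizing error**2
    return w

print(round(train_association(), 3))  # weight converges toward 1.0
```

Under repeated CS–US pairing the prediction error shrinks and the weight stabilizes near 1.0, a cartoon of the persistent synaptic change (associative learning) that the paper reports emerging from prediction-error minimization.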
Volume: 19
Issue: 3
Pages: N/A-N/A
Files in this product:
Gandolfi_2022_J._Neural_Eng._19_036022 (1).pdf
Open access
Type: Publisher's version (published version)
Size: 2.87 MB
Format: Adobe PDF
Creative Commons license
Metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Supporto Iris.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1279491
Citations
  • PMC: 0
  • Scopus: 0
  • Web of Science (ISI): 0