Variational foundations of online backpropagation / Frandina, Salvatore; Gori, Marco; Lippi, Marco; Maggini, Marco; Melacci, Stefano. - 8131:(2013), pp. 82-89. (Paper presented at the 23rd International Conference on Artificial Neural Networks, ICANN 2013, held in Sofia, Bulgaria, in 2013) [10.1007/978-3-642-40728-4_11].

Variational foundations of online backpropagation

Lippi, Marco
2013

Abstract

On-line Backpropagation has become very popular, and it has been the subject of in-depth theoretical analyses and massive experimentation. Yet, almost three decades after its publication, it is still, surprisingly, the source of tough theoretical questions and of experimental results that remain somewhat shrouded in mystery. Although seriously plagued by local minima, the batch-mode version of the algorithm is clearly posed as an optimization problem, whereas the on-line version, despite its effectiveness in many real-world problems, has not yet been given a clean formulation. Using variational arguments, in this paper the on-line formulation is proposed as the minimization of a classic functional inspired by the principle of minimal action in analytic mechanics. The proposed approach clashes sharply with common interpretations of on-line learning as an approximation of batch mode, and it suggests that processing data all at once might be just an artificial formulation of learning that is hopeless in difficult real-world problems. © 2013 Springer-Verlag Berlin Heidelberg.
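For intuition only, here is a minimal LaTeX sketch of the kind of action functional the abstract alludes to; the specific kinetic-minus-potential form, the inertia parameter \mu, and the use of the instantaneous loss \ell as a potential are illustrative assumptions in the spirit of classical mechanics, not the functional actually derived in the paper. The weights w(t) are treated as a trajectory in time, driven by the examples (x(t), y(t)) as they stream in:

% Assumed Lagrangian (illustrative, not taken from the paper): the "kinetic"
% term penalizes abrupt weight changes, while the instantaneous loss on the
% current example plays the role of a potential energy.
\[
  \mathcal{A}[w] \;=\; \int_{0}^{T}
    \Bigl( \tfrac{\mu}{2}\,\lVert \dot{w}(t) \rVert^{2}
           \;-\; \ell\bigl(w(t),\, x(t),\, y(t)\bigr) \Bigr)\, dt .
\]
% Requiring stationarity of \mathcal{A} yields the Euler--Lagrange equation
%   \mu \, \ddot{w}(t) = -\nabla_{w}\, \ell(w(t), x(t), y(t)),
% a second-order update law whose time discretization resembles an on-line
% gradient step with a momentum-like term.

Under these assumed forms, the stationary trajectories process examples one at a time as they arrive, which is why a least-action reading fits on-line learning rather than batch-mode optimization.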
Year: 2013
Conference: 23rd International Conference on Artificial Neural Networks, ICANN 2013
Location: Sofia, Bulgaria
Volume: 8131
Pages: 82-89
Authors: Frandina, Salvatore; Gori, Marco; Lippi, Marco; Maggini, Marco; Melacci, Stefano
Files in this record:
llncs.pdf (restricted access)
Type: Author's peer-reviewed version, accepted for publication
Size: 150.77 kB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1122650
Citations
  • PubMed Central: n/a
  • Scopus: 8
  • Web of Science: 7