Neural network modeling for small datasets / Ingrassia, S; Morlini, Isabella. - In: TECHNOMETRICS. - ISSN 0040-1706. - STAMPA. - 47:(2005), pp. 297-311. [10.1198/004017005000000058]

Neural network modeling for small datasets

MORLINI, Isabella
2005

Abstract

Neural network modeling for small datasets can be justified from a theoretical point of view according to some of Bartlett's results, which show that the generalization performance of a multilayer perceptron (MLP) depends more on the L1 norm ‖c‖₁ of the weights between the hidden layer and the output layer than on the total number of weights. In this article we investigate some geometrical properties of MLPs and, drawing on linear projection theory, propose an equivalent number of degrees of freedom to be used in neural model selection criteria such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), and in the unbiased estimation of the error variance. This measure proves to be much smaller than the usually adopted total number of parameters of the network, and it does not depend on the number of input variables. Moreover, this concept is compatible with Bartlett's results and with similar ideas long associated with projection-based models and kernel models. Some numerical studies involving both real and simulated datasets are presented and discussed.
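The abstract's use of an equivalent number of degrees of freedom can be illustrated with a short sketch. The code below is not the paper's exact procedure: it simply shows, under a Gaussian-error assumption, how an effective degrees-of-freedom value `edf` would replace the raw weight count in the AIC/BIC formulas and in an unbiased-style estimate of the error variance, and how the L1 norm ‖c‖₁ of the hidden-to-output weights is computed. The function names are illustrative, not from the paper.

```python
import numpy as np

def aic_bic(residuals, edf):
    """AIC and BIC for a Gaussian-error model, with an equivalent number
    of degrees of freedom (edf) in place of the total parameter count."""
    n = len(residuals)
    rss = float(np.sum(np.square(residuals)))
    sigma2 = rss / n  # maximum-likelihood estimate of the error variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    aic = -2.0 * log_lik + 2.0 * edf
    bic = -2.0 * log_lik + np.log(n) * edf
    return aic, bic

def unbiased_error_variance(residuals, edf):
    """Unbiased-style variance estimate: RSS / (n - edf), where edf
    replaces the total number of network weights."""
    n = len(residuals)
    return float(np.sum(np.square(residuals))) / (n - edf)

def l1_output_norm(c):
    """L1 norm ||c||_1 of the hidden-to-output weight vector c."""
    return float(np.sum(np.abs(c)))
```

Because the edf proposed in the article is much smaller than the total weight count, both penalties (2·edf for AIC, log(n)·edf for BIC) shrink accordingly, which is what makes small-sample neural model selection viable in this framework.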
Year: 2005
Volume: 47
Pages: 297-311
Ingrassia, S; Morlini, Isabella
File in this product:
  • 2005 Technometrics.pdf — publisher's version, Adobe PDF, 1.39 MB (restricted access)

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/3439
Citations
  • Scopus: 58
  • Web of Science (ISI): 42