
On the degrees of freedom in richly parameterized models / Ingrassia, S.; Morlini, Isabella. - PRINT. - (2004), pp. 1237-1244. (Paper presented at the conference Computational Statistics: 16th Symposium Held in Prague, held in Prague, Czech Republic, 23-27 August 2004).

On the degrees of freedom in richly parameterized models

MORLINI, Isabella
2004

Abstract

Using richly parameterised models for small datasets can be justified from a theoretical point of view by results due to Bartlett, which show that the generalization performance of a multilayer perceptron (MLP) depends more on the L1 norm of the weights between the hidden and the output layer than on the number of parameters in the model. In this paper we investigate the problem of measuring the generalization performance and the complexity of richly parameterised procedures and, drawing on linear model theory, we propose a different notion of degrees of freedom for neural networks and other projection tools. This notion is compatible with similar ideas long associated with smoother-based models (like projection pursuit regression) and can be interpreted using the projection theory of linear models, exploiting some geometrical properties of neural networks. Results in this study lead to corrections in some goodness-of-fit statistics like AIC and BIC/SBC: the number of degrees of freedom in these indexes is set equal to the dimension p of the projection space intrinsically found by the mapping function. An empirical study is presented to illustrate the behavior of the values of some model selection criteria.
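As a rough illustration of the proposed correction (the explicit formulas below follow from our reading of the abstract together with the standard definitions of AIC and BIC, and are not equations quoted from the paper): let \hat{L} denote the maximised likelihood, n the sample size, and p the dimension of the projection space intrinsically found by the mapping function. The corrected criteria would then read

\mathrm{AIC} = -2\log\hat{L} + 2\,p, \qquad \mathrm{BIC/SBC} = -2\log\hat{L} + p\,\log n,

whereas a naive application would penalise with the total number of network weights W in place of p. Since a richly parameterised MLP typically has W much larger than p, taking the degrees of freedom equal to p markedly lightens the complexity penalty.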
Year: 2004
Conference: Computational Statistics: 16th Symposium Held in Prague
Location: Prague, Czech Republic
Dates: 23-27 August 2004
Pages: 1237-1244
Authors: Ingrassia, S.; Morlini, Isabella
Files in this product:

File: Proceedings-COMPSTAT-2004.pdf
Access: Open access
Type: Version published by the publisher
Size: 32.19 MB
Format: Adobe PDF

Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Iris Support.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/465827