
Detecting Morphing Attacks via Continual Incremental Training / Pellegrini, Lorenzo; Borghi, Guido; Franco, Annalisa; Maltoni, Davide. - (2023), pp. 1-9. (Paper presented at the 2023 IEEE International Joint Conference on Biometrics, IJCB 2023, held in Ljubljana, Slovenia, 25/09/2023) [10.1109/IJCB57857.2023.10449306].

Detecting Morphing Attacks via Continual Incremental Training

Pellegrini, Lorenzo; Borghi, Guido; Franco, Annalisa; Maltoni, Davide
2023

Abstract

Scenarios in which restrictions on data transfer and storage prevent the assembly of a single dataset, possibly drawn from multiple data sources, for batch-based training make the development of robust models particularly challenging. We hypothesize that the recent Continual Learning (CL) paradigm may represent an effective solution to enable incremental training, even across multiple sites. Indeed, a basic assumption of CL is that, once a model has been trained, old data can no longer be used in subsequent training iterations and can in principle be deleted. Therefore, in this paper, we investigate the performance of different Continual Learning methods in this scenario, simulating a learning model that is updated every time a new chunk of data, possibly of variable size, becomes available. Experimental results reveal that a particular CL method, namely Learning without Forgetting (LwF), is one of the best-performing algorithms. We then investigate its usage and parametrization in Morphing Attack Detection and Object Classification tasks, specifically with respect to the amount of new training data that becomes available.
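For readers unfamiliar with Learning without Forgetting, the core idea is to train on new data with a standard classification loss plus a knowledge-distillation term that keeps the updated model's outputs close to those of the previous model on the same inputs. A minimal NumPy sketch of such a combined loss follows; the temperature `T` and weight `lambda_old` are illustrative hyperparameters, not the settings used in the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with optional temperature T."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, old_logits, labels, T=2.0, lambda_old=1.0):
    """LwF-style objective: cross-entropy on the new labels, plus a
    distillation term pulling the updated model toward the previous
    model's (temperature-softened) predictions on the same inputs."""
    p_new = softmax(new_logits)
    n = len(labels)
    # standard cross-entropy on the newly available chunk of labeled data
    ce = -np.log(p_new[np.arange(n), labels] + 1e-12).mean()
    # distillation: soft cross-entropy against the old model's outputs
    q_old = softmax(old_logits, T)
    q_new = softmax(new_logits, T)
    distill = -(q_old * np.log(q_new + 1e-12)).sum(axis=-1).mean()
    return ce + lambda_old * distill
```

Because the distillation term only needs the previous model's logits on current inputs, no old training data has to be stored, which matches the CL assumption described above.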
Year: 2023
Conference: 2023 IEEE International Joint Conference on Biometrics, IJCB 2023
Location: Ljubljana, Slovenia
Date: 25/09/2023
Pages: 1-9
Authors: Pellegrini, Lorenzo; Borghi, Guido; Franco, Annalisa; Maltoni, Davide
Files in this item:
File: Detecting Morphing Attacks via Continual Incremental Training.pdf
Access: Open access
Type: Author's version, revised and accepted for publication
Size: 592.32 kB
Format: Adobe PDF

Creative Commons License
Metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Supporto Iris.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1339392
Citations:
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0