Automated Artifact Retouching in Morphed Images with Attention Maps

Borghi G.;
2021

Abstract

Morphing attacks represent an important security threat for automatic face recognition systems. High-quality morphed images, i.e., images without significant visual artifacts such as ghosting, noise, and blurring, have a higher chance of success, as they are able to fool both human examiners and commercial face verification algorithms. Therefore, the availability of large sets of high-quality morphs is fundamental for training and testing robust morphing attack detection algorithms. However, producing a high-quality morphed image is an expensive and time-consuming task, since manual post-processing is generally required to remove the typical artifacts generated by landmark-based morphing techniques. This work describes an approach, based on the Conditional Generative Adversarial Network paradigm, for automated retouching of morphing artifacts, with Attention Maps used to guide the generation process and limit the retouching to specific areas. To work with high-resolution images, the framework is applied to different facial crops, which, once processed and retouched, are accurately blended to reconstruct the whole morphed face. Specifically, we focus on four square face regions, i.e., the right eye, the left eye, the nose, and the mouth, which are frequently affected by artifacts. Several qualitative and quantitative experimental evaluations have been conducted to confirm the effectiveness of the proposal in terms of, among others, pixel-wise metrics, identity preservation, and human observer analysis. Results confirm the feasibility and the accuracy of the proposed framework.
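The abstract describes blending retouched facial crops back into the full morphed image, weighted by an attention map. The following is an illustrative sketch of that blending step only, assuming normalized images and a per-pixel attention map in [0, 1]; function and variable names are hypothetical and are not taken from the paper.

```python
# Illustrative sketch: blending a retouched square crop back into the full morph,
# using an attention map as a per-pixel weight (hypothetical names, not the paper's code).
import numpy as np

def blend_crop(morph: np.ndarray, retouched_crop: np.ndarray,
               attention_map: np.ndarray, top: int, left: int) -> np.ndarray:
    """Blend a retouched crop into `morph` (H x W x 3, float values in [0, 1]).

    `attention_map` (h x w, values in [0, 1]) weights the generator output:
    1.0 keeps the retouched pixel, 0.0 keeps the original morph pixel.
    """
    out = morph.copy()
    h, w = attention_map.shape
    region = out[top:top + h, left:left + w]
    alpha = attention_map[..., None]  # broadcast the map over the RGB channels
    out[top:top + h, left:left + w] = alpha * retouched_crop + (1.0 - alpha) * region
    return out

if __name__ == "__main__":
    # Random arrays stand in for the morphed image, the generator output, and its attention map.
    morph = np.random.rand(1024, 1024, 3)
    crop = np.random.rand(256, 256, 3)
    attn = np.random.rand(256, 256)
    result = blend_crop(morph, crop, attn, top=300, left=380)
    print(result.shape)
```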
Year: 2021
Volume: 9
Pages: 136561-136579
Automated Artifact Retouching in Morphed Images with Attention Maps / Borghi, G.; Franco, A.; Graffieti, G.; Maltoni, D.. - In: IEEE ACCESS. - ISSN 2169-3536. - 9:(2021), pp. 136561-136579. [10.1109/ACCESS.2021.3117718]
Authors: Borghi, G.; Franco, A.; Graffieti, G.; Maltoni, D.
Files in this record:
Automated_Artifact_Retouching_in_Morphed_Images_With_Attention_Maps.pdf (Open access; publisher's published version; 4.63 MB; Adobe PDF)

Creative Commons license
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1339405
Citations: Scopus 12; Web of Science 9