
Unveiling the Truth: Exploring Human Gaze Patterns in Fake Images / Cartella, Giuseppe; Cuculo, Vittorio; Cornia, Marcella; Cucchiara, Rita. - In: IEEE SIGNAL PROCESSING LETTERS. - ISSN 1070-9908. - 31:(2024), pp. 1-5. [10.1109/LSP.2024.3375288]

Unveiling the Truth: Exploring Human Gaze Patterns in Fake Images

Giuseppe Cartella;Vittorio Cuculo;Marcella Cornia;Rita Cucchiara
2024

Abstract

Creating high-quality, realistic images is now possible thanks to impressive advancements in image generation. A natural language description of the desired output is all that is needed to obtain breathtaking results. However, as the use of generative models grows, so do concerns about the propagation of malicious content and misinformation. Consequently, the research community is actively developing novel fake detection techniques, primarily focused on low-level features and the fingerprints left by generative models during the image generation process. In a different vein, our work leverages human semantic knowledge to investigate whether it can be incorporated into fake image detection frameworks. To this end, we collect a novel dataset of images partially manipulated with diffusion models and conduct an eye-tracking experiment to record the eye movements of different observers while viewing real and fake stimuli. A preliminary statistical analysis explores the distinctive patterns in how humans perceive genuine and altered images. The findings reveal that, when perceiving counterfeit samples, humans tend to focus on more confined regions of the image, in contrast to the more dispersed observational pattern exhibited when viewing genuine images. Our dataset is publicly available at: https://github.com/aimagelab/unveiling-the-truth.

Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International license (CC BY 4.0), unless otherwise indicated.
In case of copyright violation, contact Iris Support.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1334366
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 0