
Learn to See by Events: Color Frame Synthesis from Event and RGB Cameras

Stefano Pini; Guido Borghi; Roberto Vezzani
2020

Abstract

Event cameras are biologically inspired sensors that capture the temporal evolution of a scene: they sense pixel-wise brightness variations and output a corresponding stream of asynchronous events. Despite several advantages over traditional cameras, their adoption is hindered by the limited applicability of conventional data processing and vision algorithms. To this end, we present a framework that exploits the output stream of event cameras to synthesize RGB frames, relying on an initial or periodic set of color key-frames and the sequence of intermediate events. In contrast to existing work, we propose a deep learning-based frame synthesis method consisting of an adversarial architecture combined with a recurrent module. Qualitative results and quantitative per-pixel, perceptual, and semantic evaluations on four public datasets confirm the quality of the synthesized images.
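Although this record does not report implementation details, the abstract references two technical ingredients that short sketches can make concrete. First, a minimal example of turning an asynchronous event stream into a dense, frame-like tensor that a convolutional network can consume. The (x, y, t, polarity) tuple format, the two-channel polarity-histogram encoding, and the events_to_frame helper are illustrative assumptions, not the representation used in the paper.

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Hypothetical helper: accumulate asynchronous events into a
    2-channel frame. Each event is (x, y, t, polarity), with polarity
    in {-1, +1}. Channel 0 counts positive events, channel 1 counts
    negative ones, within the time window [t_start, t_end)."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, t, p in events:
        if t_start <= t < t_end:
            frame[0 if p > 0 else 1, int(y), int(x)] += 1.0
    return frame

# Toy usage: three synthetic events on a 4x4 sensor.
events = [(1, 2, 0.10, +1), (1, 2, 0.15, +1), (3, 0, 0.20, -1)]
frame = events_to_frame(events, height=4, width=4, t_start=0.0, t_end=0.3)
print(frame[0])  # positive-polarity counts
print(frame[1])  # negative-polarity counts
```

Second, a minimal sketch in the spirit of the described "adversarial architecture combined with a recurrent module": a convolutional GRU cell updates a hidden state with each slice of accumulated events, conditioned on the latest color key-frame. The adversarial discriminator and training loop are omitted, and every module name and layer size below is hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Minimal convolutional GRU cell (illustrative, not the paper's design)."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.conv_zr = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.conv_h = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h):
        if h is None:  # zero-initialize the hidden state on the first step
            h = torch.zeros(x.size(0), self.hid_ch, x.size(2), x.size(3),
                            device=x.device)
        z, r = torch.sigmoid(self.conv_zr(torch.cat([x, h], 1))).chunk(2, 1)
        h_new = torch.tanh(self.conv_h(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_new

class RecurrentGenerator(nn.Module):
    """Hypothetical generator: key-frame (3 ch) + event tensor (2 ch) -> RGB."""

    def __init__(self, hid=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3 + 2, hid, 3, padding=1), nn.ReLU())
        self.gru = ConvGRUCell(hid, hid)
        self.dec = nn.Conv2d(hid, 3, 3, padding=1)

    def forward(self, key_frame, event_frames):
        h, outputs = None, []
        for ev in event_frames:  # sequence of (B, 2, H, W) event tensors
            x = self.enc(torch.cat([key_frame, ev], 1))
            h = self.gru(x, h)
            outputs.append(torch.sigmoid(self.dec(h)))  # RGB in [0, 1]
        return outputs

# Toy usage: one key-frame followed by four event slices.
gen = RecurrentGenerator()
key = torch.rand(1, 3, 64, 64)
events = [torch.rand(1, 2, 64, 64) for _ in range(4)]
frames = gen(key, events)
print(len(frames), frames[0].shape)  # 4 torch.Size([1, 3, 64, 64])
```

In such a setup, a GAN discriminator would score each synthesized frame against real RGB frames, while the recurrent state carries appearance information forward between key-frames.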
Year: 2020
Publication date: February 2020
Conference: International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Venue: Valletta (Malta)
Conference dates: 27-29 February 2020
Volume: 4
Pages: 37-47
Authors: Pini, Stefano; Borghi, Guido; Vezzani, Roberto
Citation: Learn to See by Events: Color Frame Synthesis from Event and RGB Cameras / Pini, Stefano; Borghi, Guido; Vezzani, Roberto. - 4:(2020), pp. 37-47. (Paper presented at the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, held in Valletta (Malta), 27-29 February 2020) [DOI: 10.5220/0008934700370047].
Files in this product:
File: VISAPP20_.pdf
Access: Open access
Type: Author's original version submitted for publication
Size: 6.42 MB
Format: Adobe PDF

Creative Commons License
The metadata in IRIS UNIMORE is released under a Creative Commons CC0 1.0 Universal license, while the publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise stated.
In case of copyright infringement, contact IRIS Support.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1185831
Citations
  • PMC: ND
  • Scopus: 11
  • Web of Science (ISI): 6