
Face Landmark-based Speaker-Independent Audio-Visual Speech Enhancement in Multi-Talker Environments

Giovanni Morrone; Sonia Bergamaschi
2019

Abstract

In this paper, we address the problem of enhancing the speech of a speaker of interest in a cocktail party scenario when visual information of that speaker is available. Unlike most previous studies, we do not learn visual features on the typically small audio-visual datasets, but use an off-the-shelf face landmark detector (trained on a separate image dataset). The landmarks are used by LSTM-based models to generate time-frequency masks that are applied to the acoustic mixed-speech spectrogram. Results show that: (i) landmark motion features are very effective for this task; (ii) as in previous work, reconstruction of the target speaker's spectrogram mediated by masking is significantly more accurate than direct spectrogram reconstruction; and (iii) the best masks depend on both the landmark motion features and the input mixed-speech spectrogram. To the best of our knowledge, our proposed models are the first trained and evaluated on the limited-size GRID and TCD-TIMIT datasets that achieve speaker-independent speech enhancement in a multi-talker setting.
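The pipeline the abstract describes, landmark motion features driving a time-frequency mask that is applied elementwise to the mixed-speech spectrogram, can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's code: the LSTM is omitted (the mask is random here), and all function names, shapes, and the 68-landmark count are illustrative assumptions.

```python
import numpy as np

def landmark_motion_features(landmarks):
    # landmarks: (T, K, 2) array of K face-landmark (x, y) positions per video frame.
    # Motion features are frame-to-frame differences, flattened per frame.
    motion = np.diff(landmarks, axis=0)          # (T-1, K, 2)
    return motion.reshape(motion.shape[0], -1)   # (T-1, 2K)

def apply_tf_mask(mixed_spectrogram, mask):
    # Elementwise time-frequency masking: mask values in [0, 1] attenuate
    # bins dominated by interfering talkers, keeping the target speaker.
    return np.clip(mask, 0.0, 1.0) * mixed_spectrogram

# Toy example with made-up dimensions (68 landmarks, 257 frequency bins).
T, K, F = 5, 68, 257
rng = np.random.default_rng(0)
landmarks = rng.random((T, K, 2))
mixed = rng.random((T, F))       # magnitude spectrogram of the mixture
mask = rng.random((T, F))        # in the paper, produced by the LSTM models
enhanced = apply_tf_mask(mixed, mask)
print(landmark_motion_features(landmarks).shape)  # (4, 136)
print(enhanced.shape)                             # (5, 257)
```

Since the mask only scales each bin by a value in [0, 1], the enhanced spectrogram is never louder than the mixture in any time-frequency bin, which is what makes mask-mediated reconstruction better behaved than predicting the target spectrogram directly.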
2019
17 Apr 2019
44th IEEE International Conference on Acoustics, Speech and Signal Processing
Brighton, UK
12-17 May, 2019
Morrone, Giovanni; Pasa, Luca; Tikhanoff, Vadim; Bergamaschi, Sonia; Fadiga, Luciano; Badino, Leonardo
Face Landmark-based Speaker-Independent Audio-Visual Speech Enhancement in Multi-Talker Environments / Morrone, Giovanni; Pasa, Luca; Tikhanoff, Vadim; Bergamaschi, Sonia; Fadiga, Luciano; Badino, Leonardo. - (2019). (Paper presented at the 44th IEEE International Conference on Acoustics, Speech and Signal Processing, held in Brighton, UK, 12-17 May 2019) [10.1109/ICASSP.2019.8682061].
Files in this record:

final_paper_icassp2019.pdf
Access: Open access
Description: Main article
Type: Author's original version submitted for publication
Size: 284.17 kB
Format: Adobe PDF

final_paper_reviewed_icassp2019_v2.pdf
Access: Restricted access
Description: Main article
Type: Version published by the publisher
Size: 332.66 kB
Format: Adobe PDF

Morrone_ICASSP_Poster.pdf
Access: Open access
Description: Poster
Type: Other
Size: 1.12 MB
Format: Adobe PDF
Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright violation, contact Iris Support.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1170465
Citations
  • PMC: N/A
  • Scopus: 45
  • Web of Science: 34