Publication Details

Learning Speaker Representation for Neural Network Based Multichannel Speaker Extraction

ŽMOLÍKOVÁ Kateřina, DELCROIX Marc, KINOSHITA Keisuke, HIGUCHI Takuya, OGAWA Atsunori and NAKATANI Tomohiro. Learning Speaker Representation for Neural Network Based Multichannel Speaker Extraction. In: Proceedings of ASRU 2017. Okinawa: IEEE Signal Processing Society, 2017, pp. 8-15. ISBN 978-1-5090-4788-8.
Czech title
Učení reprezentací řečníků pro vícekanálovou extrakci jednoho řečníka založenou na neuronových sítích
Type
conference paper
Language
English
Authors
Žmolíková Kateřina, Ing., Ph.D. (DCGM FIT BUT)
Delcroix Marc (NTT)
Kinoshita Keisuke (NTT)
Higuchi Takuya (NTT)
Ogawa Atsunori (NTT)
Nakatani Tomohiro (NTT)
URL
https://www.fit.vut.cz/research/publication/11596
Keywords

speaker extraction, speaker adaptive neural network, multi-speaker speech recognition, speaker representation learning, beamforming

Abstract

Recently, schemes employing deep neural networks (DNNs) for extracting speech from noisy observations have demonstrated great potential for noise-robust automatic speech recognition. However, these schemes are not well suited when the interfering noise is another speaker. To enable extraction of a target speaker from a mixture of speakers, we recently proposed informing the neural network with speaker information extracted from an adaptation utterance of the same speaker. In our previous work, we explored ways to inform the network about the speaker and found a speaker adaptive layer approach to be suitable for this task. In those experiments, we used speaker features designed for speaker recognition tasks as the additional speaker information, which may not be optimal for the speaker extraction task. In this paper, we propose using a sequence summarizing scheme that enables learning the speaker representation jointly with the network. Furthermore, we extend the previous experiments to demonstrate the potential of the proposed method as a front-end for speech recognition and explore the effect of additional noise on its performance.
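
The following is a minimal sketch (an illustration, not the authors' released code) of the scheme the abstract describes: an auxiliary sequence-summarizing network condenses the adaptation utterance into a single speaker embedding, which re-weights a speaker-adaptive layer of the mask-estimation network, and both parts are trained jointly. All layer sizes, the element-wise re-weighting, and the use of PyTorch are assumptions made for illustration.

import torch
import torch.nn as nn

class SequenceSummarizingExtractor(nn.Module):
    """Mask estimator conditioned on a jointly learned speaker embedding."""

    def __init__(self, n_freq=257, hidden=512):
        super().__init__()
        # Auxiliary network: frame-wise transform of the adaptation
        # utterance; its outputs are averaged over time in forward().
        self.summary = nn.Sequential(
            nn.Linear(n_freq, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Main network with one speaker-adaptive layer in the middle.
        self.pre = nn.Sequential(nn.Linear(n_freq, hidden), nn.ReLU())
        self.adapt = nn.Linear(hidden, hidden)  # speaker-adaptive layer
        self.post = nn.Sequential(
            nn.ReLU(), nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, mixture, adaptation):
        # mixture, adaptation: (frames, n_freq) magnitude spectrograms.
        # Sequence summary: averaging over frames yields one embedding
        # per speaker, learned jointly with the extraction network.
        spk = self.summary(adaptation).mean(dim=0)   # (hidden,)
        h = self.adapt(self.pre(mixture)) * spk      # element-wise re-weighting
        return self.post(h)                          # time-frequency mask in [0, 1]

# The estimated mask would then drive spatial covariance estimation for a
# beamformer in the multichannel front-end, as described in the paper.
mask = SequenceSummarizingExtractor()(torch.rand(100, 257), torch.rand(80, 257))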

Published
2017
Pages
8-15
Proceedings
Proceedings of ASRU 2017
Conference
2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Okinawa, JP
ISBN
978-1-5090-4788-8
Publisher
IEEE Signal Processing Society
Place
Okinawa, JP
DOI
10.1109/ASRU.2017.8268910
UT WoS
000426066100002
EID Scopus
BibTeX
@INPROCEEDINGS{FITPUB11596,
   author = "Kate\v{r}ina \v{Z}mol\'{i}kov\'{a} and Marc Delcroix and Keisuke Kinoshita and Takuya Higuchi and Atsunori Ogawa and Tomohiro Nakatani",
   title = "Learning Speaker Representation for Neural Network Based Multichannel Speaker Extraction",
   pages = "8--15",
   booktitle = "Proceedings of ASRU 2017",
   year = 2017,
   location = "Okinawa, JP",
   publisher = "IEEE Signal Processing Society",
   ISBN = "978-1-5090-4788-8",
   doi = "10.1109/ASRU.2017.8268910",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/11596"
}