Conference paper

ZEINALI Hossein, BURGET Lukáš, ROHDIN Johan A., STAFYLAKIS Themos and ČERNOCKÝ Jan. How To Improve Your Speaker Embeddings Extractor in Generic Toolkits. In: Proceedings of ICASSP 2019. Brighton: IEEE Signal Processing Society, 2019, pp. 6141-6145. ISBN 978-1-5386-4658-8. Available from:
Publication language:english
Original title:How To Improve Your Speaker Embeddings Extractor in Generic Toolkits
Title (cs):Jak zlepšit Váš extraktor embeddingů mluvčích v běžných toolkitech
Proceedings:Proceedings of ICASSP 2019
Conference:International Conference on Acoustics, Speech, and Signal Processing
Place:Brighton, GB
Publisher:IEEE Signal Processing Society
Keywords:Deep neural network, speaker embedding, x-vector, TensorFlow, Kaldi.
Abstract:Recently, speaker embeddings extracted with deep neural networks have become the state-of-the-art method for speaker verification. In this paper we aim to facilitate their implementation in a more generic toolkit than Kaldi, which we anticipate will enable further improvements to the method. We examine several training tricks, such as the effects of normalizing input features and pooled statistics, different methods for preventing overfitting, as well as alternative nonlinearities that can be used instead of Rectified Linear Units. In addition, we investigate the difference in performance between TDNN and CNN, and between two types of attention mechanism. Experimental results on the Speakers in the Wild, SRE 2016 and SRE 2018 datasets demonstrate the effectiveness of the proposed implementation.
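One of the techniques the abstract mentions, attention-based pooling of frame-level statistics, can be illustrated with a minimal NumPy sketch. This is an illustrative assumption of one common form (a scalar attention score per frame followed by a weighted mean and standard deviation), not the paper's actual implementation; all function names, shapes, and the final length normalization are hypothetical.

```python
import numpy as np

def attentive_stats_pooling(frames, w, b, v, eps=1e-6):
    """Pool frame-level features (T x D) into one utterance-level
    vector by concatenating an attention-weighted mean and standard
    deviation (2*D-dimensional output). Illustrative sketch only."""
    # Scalar attention score per frame: v . tanh(W h_t + b)
    scores = np.tanh(frames @ w + b) @ v      # shape (T,)
    alphas = np.exp(scores - scores.max())
    alphas /= alphas.sum()                    # softmax over frames
    mean = alphas @ frames                    # weighted mean, shape (D,)
    var = alphas @ (frames - mean) ** 2       # weighted variance
    std = np.sqrt(np.maximum(var, eps))       # floor avoids sqrt of ~0
    return np.concatenate([mean, std])

rng = np.random.default_rng(0)
T, D, H = 50, 8, 4                            # frames, feat dim, attn dim
frames = rng.standard_normal((T, D))
w = rng.standard_normal((D, H))
b = rng.standard_normal(H)
v = rng.standard_normal(H)

pooled = attentive_stats_pooling(frames, w, b, v)
# Normalizing the pooled statistics before the embedding layers is one
# of the normalization tricks the abstract alludes to; length
# normalization shown here is just one possible choice.
normalized = pooled / np.linalg.norm(pooled)
```

With uniform attention weights this reduces to ordinary statistics pooling (plain mean and standard deviation), which is the baseline the attention mechanisms are compared against.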
@inproceedings{zeinali2019speaker,
   author = {Hossein Zeinali and Luk{\'{a}}{\v{s}} Burget and
	Johan A. Rohdin and Themos Stafylakis and
	Jan {\v{C}}ernock{\'{y}}},
   title = {How To Improve Your Speaker Embeddings Extractor
	in Generic Toolkits},
   pages = {6141--6145},
   booktitle = {Proceedings of ICASSP 2019},
   year = 2019,
   location = {Brighton, GB},
   publisher = {IEEE Signal Processing Society},
   ISBN = {978-1-5386-4658-8},
   language = {english},
   url = {}
}
