Conference paper

PEŠÁN Jan, BURGET Lukáš and ČERNOCKÝ Jan. Sequence Summarizing Neural Networks for Spoken Language Recognition. In: Proceedings of Interspeech 2016. San Francisco: International Speech Communication Association, 2016, pp. 3285-3289. ISBN 978-1-5108-3313-5. Available from: https://www.researchgate.net/publication/307889421_Sequence_Summarizing_Neural_Networks_for_Spoken_Language_Recognition
Publication language: English
Original title: Sequence Summarizing Neural Networks for Spoken Language Recognition
Title (cs): Sekvenční sumarizační neuronové sítě pro rozpoznávání mluveného jazyka
Pages: 3285-3289
Proceedings: Proceedings of Interspeech 2016
Conference: Interspeech 2016
Place: San Francisco, US
Year: 2016
URL: https://www.researchgate.net/publication/307889421_Sequence_Summarizing_Neural_Networks_for_Spoken_Language_Recognition
ISBN: 978-1-5108-3313-5
Publisher: International Speech Communication Association
URL: http://www.fit.vutbr.cz/research/groups/speech/publi/2016/pesan_interspeech2016_IS160764.pdf [PDF]
Keywords
Sequence Summarizing Neural Network, DNN, i-vectors
Annotation
This paper explores the use of Sequence Summarizing Neural Networks (SSNNs) as a variant of deep neural networks (DNNs) for classifying sequences. In this work, it is applied to the task of spoken language recognition. Unlike other classification tasks in speech processing, where the DNN needs to produce a per-frame output, language is considered constant during an utterance. We introduce a summarization component into the DNN structure that produces one set of language posteriors per utterance. The DNN is trained with an appropriately modified gradient-descent algorithm. In our initial experiments, the SSNN results are compared to a single state-of-the-art i-vector based baseline system of similar complexity (i.e. no system fusion, etc.). For some conditions, the SSNN provides performance comparable to the baseline system. A relative improvement of up to 30% is obtained with score-level fusion of the baseline and SSNN systems.
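The core idea in the annotation — a per-frame DNN followed by a summarization component, so the network emits one posterior vector per utterance — can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's architecture: the hidden-layer size, activation, and the choice of mean-pooling as the summarization operation are assumptions for the sake of the example.

```python
import numpy as np

def ssnn_forward(frames, W1, b1, W2, b2):
    """Sketch of a sequence-summarizing forward pass.

    frames: (T, D) array of per-frame features for one utterance.
    A frame-level hidden layer is followed by a summarization
    (here: mean-pooling over time), so the softmax produces ONE
    posterior vector per utterance instead of one per frame.
    """
    h = np.tanh(frames @ W1 + b1)   # per-frame hidden activations, shape (T, H)
    summary = h.mean(axis=0)        # summarization component: one vector, shape (H,)
    logits = summary @ W2 + b2      # utterance-level language scores, shape (L,)
    e = np.exp(logits - logits.max())
    return e / e.sum()              # language posteriors, sum to 1

# Toy usage: 50 frames of 20-dim features, 3 hypothetical languages.
rng = np.random.default_rng(0)
T, D, H, L = 50, 20, 8, 3
post = ssnn_forward(rng.normal(size=(T, D)),
                    rng.normal(size=(D, H)) * 0.1, np.zeros(H),
                    rng.normal(size=(H, L)) * 0.1, np.zeros(L))
print(post.shape)  # (3,) — a single posterior vector for the whole utterance
```

Because the mean-pooling step is differentiable, gradients from the utterance-level loss distribute back to every frame, which is what the "appropriately modified gradient-descent algorithm" mentioned above has to account for.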
BibTeX:
@INPROCEEDINGS{pesan_interspeech2016,
   author = {Jan Pe{\v{s}}{\'{a}}n and Luk{\'{a}}{\v{s}} Burget and Jan
	{\v{C}}ernock{\'{y}}},
   title = {Sequence Summarizing Neural Networks for Spoken Language
	Recognition},
   pages = {3285--3289},
   booktitle = {Proceedings of Interspeech 2016},
   year = {2016},
   location = {San Francisco, US},
   publisher = {International Speech Communication Association},
   ISBN = {978-1-5108-3313-5},
   language = {english},
   url = {http://www.fit.vutbr.cz/research/view_pub.php.en.iso-8859-2?id=11273}
}
