Conference paper

BURGET Lukáš, SCHWARZ Petr, AGARWAL Mohit, AKYAZI Pinar, FENG Kai, GHOSHAL Arnab, GLEMBEK Ondřej, GOEL Nagendra K., KARAFIÁT Martin, POVEY Daniel, RASTROW Ariya, ROSE Richard and THOMAS Samuel. Multilingual acoustic modeling for speech recognition based on Subspace Gaussian Mixture Models. In: Proc. International Conference on Acoustics, Speech, and Signal Processing. Dallas: IEEE Signal Processing Society, 2010, pp. 4334-4337. ISBN 978-1-4244-4296-6. ISSN 1520-6149.
Publication language:English
Original title:Multilingual acoustic modeling for speech recognition based on Subspace Gaussian Mixture Models
Title (cs):Multilingvální akustické modelování pro rozpoznávání řeči založené na sub-space Gaussovských modelech
Pages:4334-4337
Proceedings:Proc. International Conference on Acoustics, Speech, and Signal Processing
Conference:International Conference on Acoustics, Speech, and Signal Processing 2010
Place:Dallas, US
Year:2010
ISBN:978-1-4244-4296-6
Journal:Proc. International Conference on Acoustics, Speech, and Signal Processing, Vol. 2010, No. 3, Piscataway, US
ISSN:1520-6149
Publisher:IEEE Signal Processing Society
URL:http://www.fit.vutbr.cz/research/groups/speech/publi/2010/burget_icassp2010_4334.pdf [PDF]
Keywords
Large vocabulary speech recognition, Subspace Gaussian mixture model, Multilingual acoustic modeling
Annotation
This paper is on a different approach to multilingual speech recognition, in which the phone sets are entirely distinct but the model has parameters not tied to specific states that are shared across languages.
Abstract
Although research has previously been done on multilingual speech recognition, it has been found to be very difficult to improve over separately trained systems. The usual approach has been to use some kind of "universal phone set" that covers multiple languages. We report experiments on a different approach to multilingual speech recognition, in which the phone sets are entirely distinct but the model has parameters not tied to specific states that are shared across languages. We use a model called a "Subspace Gaussian Mixture Model" where states' distributions are Gaussian Mixture Models with a common structure, constrained to lie in a subspace of the total parameter space. The parameters that define this subspace can be shared across languages. We obtain substantial WER improvements with this approach, especially with very small amounts of in-language training data.
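The subspace described in the abstract can be made concrete with the basic (single-substate) SGMM equations. The following is a sketch of the standard formulation rather than notation quoted from this paper; the symbols v_j, M_i, w_i and Sigma_i are assumed names:

\[
p(\mathbf{x}\mid j) \;=\; \sum_{i=1}^{I} w_{ji}\,\mathcal{N}\!\left(\mathbf{x};\,\boldsymbol{\mu}_{ji},\,\boldsymbol{\Sigma}_i\right),
\qquad
\boldsymbol{\mu}_{ji} = \mathbf{M}_i\,\mathbf{v}_j,
\qquad
w_{ji} = \frac{\exp\!\left(\mathbf{w}_i^{\top}\mathbf{v}_j\right)}{\sum_{i'=1}^{I}\exp\!\left(\mathbf{w}_{i'}^{\top}\mathbf{v}_j\right)}.
\]

In this sketch, only the low-dimensional state vectors v_j (and the phone set that indexes the states j) are language-specific; the shared projections M_i, weight vectors w_i and covariances Sigma_i define the subspace and are the parameters that can be estimated on data pooled across languages.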
BibTeX:
@INPROCEEDINGS{burget_icassp2010_4334,
   author = {Luk{\'{a}}{\v{s}} Burget and Petr Schwarz and Mohit Agarwal
	and Pinar Akyazi and Kai Feng and Arnab Ghoshal and
	Ond{\v{r}}ej Glembek and Nagendra K. Goel and Martin
	Karafi{\'{a}}t and Daniel Povey and Ariya Rastrow and
	Richard Rose and Samuel Thomas},
   title = {Multilingual acoustic modeling for speech recognition based
	on Subspace Gaussian Mixture Models},
   pages = {4334--4337},
   booktitle = {Proc. International Conference on Acoustics, Speech, and
	Signal Processing},
   journal = {Proc. International Conference on Acoustics, Speech, and
	Signal Processing},
   volume = {2010},
   number = {3},
   year = {2010},
   location = {Dallas, US},
   publisher = {IEEE Signal Processing Society},
   ISBN = {978-1-4244-4296-6},
   ISSN = {1520-6149},
   language = {english},
   url = {http://www.fit.vutbr.cz/research/view_pub.php?id=9307}
}
