Conference paper

DAS Amit, HASEGAWA-JOHNSON Mark and VESELÝ Karel. Deep Auto-encoder Based Multi-task Learning Using Probabilistic Transcriptions. In: Proceedings of Interspeech 2017. Stockholm: International Speech Communication Association, 2017, pp. 2073-2077. ISSN 1990-9772. Available from: http://www.isca-speech.org/archive/Interspeech_2017/pdfs/0582.PDF
Publication language: English
Original title: Deep Auto-encoder Based Multi-task Learning Using Probabilistic Transcriptions
Title (cs): Multi-task training with probabilistic transcriptions based on a deep auto-encoder
Pages: 2073-2077
Proceedings: Proceedings of Interspeech 2017
Conference: Interspeech 2017
Place: Stockholm, SE
Year: 2017
URL: http://www.isca-speech.org/archive/Interspeech_2017/pdfs/0582.PDF
Journal: Proceedings of Interspeech, Vol. 2017, No. 08, FR
ISSN: 1990-9772
DOI: 10.21437/Interspeech.2017-582
Publisher: International Speech Communication Association
URL: http://www.fit.vutbr.cz/research/groups/speech/publi/2017/das_interspeech2017_IS170582.pdf [PDF]
Keywords
cross-lingual speech recognition, probabilistic transcription, deep neural networks, multi-task learning
Annotation
This article introduces a deep auto-encoder based multi-task learning method for training DNN acoustic models from probabilistic transcriptions produced by crowd workers who are unfamiliar with the target language. The auxiliary auto-encoder task additionally exploits untranscribed target-language data, and the method yields consistent phone error rate improvements for Swahili, Amharic, Dinka, and Mandarin.
Abstract
We examine a scenario in which we have no access to native transcribers in the target language, as is typical of under-resourced language communities. However, turkers (crowd workers) available in online marketplaces can serve as a valuable alternative resource for providing transcripts in the target language. We assume that the turkers neither speak nor have any familiarity with the target language. Thus, they are unable to distinguish all phone pairs in the target language; their transcripts therefore specify, at best, a probability distribution called a probabilistic transcript (PT). Standard deep neural network (DNN) training using PTs does not necessarily improve error rates. Previously reported results have demonstrated some success by adopting the multi-task learning (MTL) approach. In this study, we report further improvements by introducing a deep auto-encoder based MTL. This method leverages large amounts of untranscribed data in the target language in addition to the PTs obtained from turkers. Furthermore, to encourage transfer learning in the feature space, we also examine the effect of using monophones from transcripts in well-resourced languages. We report consistent improvements in phone error rate (PER) for Swahili, Amharic, Dinka, and Mandarin.
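Illustrative sketch
As a minimal sketch (not the authors' implementation) of the two-task objective described in the abstract, the Python/NumPy fragment below computes the combined loss once: the primary task scores predicted phone posteriors against the soft targets of a probabilistic transcript via cross-entropy, and the auxiliary auto-encoder task reconstructs the input features from a shared hidden layer. All dimensions, the random weights, and the task weight alpha are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

T, D, H, P = 4, 40, 32, 10     # frames, feature dim, hidden dim, phone-set size

# Shared encoder plus two heads: a phone classifier and a feature decoder.
W_enc = 0.1 * rng.standard_normal((D, H))
W_cls = 0.1 * rng.standard_normal((H, P))
W_dec = 0.1 * rng.standard_normal((H, D))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

x = rng.standard_normal((T, D))        # acoustic feature frames

# Probabilistic transcript: each frame carries a distribution over phones
# rather than a single hard label, reflecting the turkers' uncertainty.
pt = softmax(rng.standard_normal((T, P)))

h = np.tanh(x @ W_enc)                 # shared hidden representation
y = softmax(h @ W_cls)                 # predicted phone posteriors
x_hat = h @ W_dec                      # auto-encoder reconstruction

# Primary task: cross-entropy against the soft PT targets.
ce = -np.mean(np.sum(pt * np.log(y + 1e-12), axis=-1))

# Auxiliary task: reconstruction error; because it needs no labels, this
# term can also be computed on untranscribed target-language data.
mse = np.mean((x_hat - x) ** 2)

alpha = 0.5                            # hypothetical task-weighting factor
loss = ce + alpha * mse
print(f"CE={ce:.3f}  MSE={mse:.3f}  MTL loss={loss:.3f}")

Because both heads share the encoder, gradients from the reconstruction term shape the same hidden representation used by the PT-trained classifier, which is how the unlabeled data can regularize the primary task.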
BibTeX:
@INPROCEEDINGS{das2017deep,
   author = {Amit Das and Mark Hasegawa-Johnson and Karel Vesel{\'{y}}},
   title = {Deep Auto-encoder Based Multi-task Learning Using
	Probabilistic Transcriptions},
   pages = {2073--2077},
   booktitle = {Proceedings of Interspeech 2017},
   journal = {Proceedings of Interspeech},
   volume = {2017},
   number = {08},
   year = {2017},
   location = {Stockholm, SE},
   publisher = {International Speech Communication Association},
   ISSN = {1990-9772},
   doi = {10.21437/Interspeech.2017-582},
   language = {english},
   url = {http://www.fit.vutbr.cz/research/view_pub.php?id=11585}
}
