Article in conference proceedings

BENEŠ Karel, BASKAR Murali K. and BURGET Lukáš. Residual Memory Networks in Language Modeling: Improving the Reputation of Feed-Forward Networks. In: Proceedings of Interspeech 2017. Stockholm: International Speech Communication Association, 2017, pp. 284-288. ISSN 1990-9772. Available at: http://www.isca-speech.org/archive/Interspeech_2017/pdfs/1442.PDF
Publication language: English
Publication title: Residual Memory Networks in Language Modeling: Improving the Reputation of Feed-Forward Networks
Title (cs): Sítě s reziduální pamětí pro jazykové modelování: zlepšení reputace dopředných sítí
Pages: 284-288
Proceedings: Proceedings of Interspeech 2017
Conference: Interspeech 2017
Place of publication: Stockholm, SE
Year: 2017
URL: http://www.isca-speech.org/archive/Interspeech_2017/pdfs/1442.PDF
Journal: Proceedings of Interspeech, vol. 2017, no. 08, FR
ISSN: 1990-9772
DOI: 10.21437/Interspeech.2017-1442
Publisher: International Speech Communication Association
URL: http://www.fit.vutbr.cz/research/groups/speech/publi/2017/benes_interspeech2017_IS171442.pdf [PDF]
Keywords
residual memory networks, feed-forward networks, language modeling
Annotation
The paper discusses residual memory networks for language modeling and how they improve the reputation of feed-forward networks.
Abstract
We introduce the Residual Memory Network (RMN) architecture to language modeling. RMN is a feed-forward neural network architecture that incorporates residual connections and time-delay connections, allowing us to naturally incorporate information from a substantial time context. As this is the first time RMNs are applied to language modeling, we thoroughly investigate their behaviour on the well-studied Penn Treebank corpus. We change the model slightly for the needs of language modeling, reducing both its time and memory consumption. Our results show that RMN is a suitable choice for small-sized neural language models: with test perplexity 112.7 and as few as 2.3M parameters, it outperforms both a much larger vanilla RNN (PPL 124, 8M parameters) and a similarly sized LSTM (PPL 115, 2.08M parameters), while being less than 3 perplexity points worse than a twice as large LSTM.
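Illustrative sketch (Python): the abstract describes the architecture only at a high level, so the code below is a rough illustration of the general idea (a feed-forward block with a residual connection and a time-delay connection), not the authors' exact model; the layer sizes, the delay value, and all identifiers are assumptions made for this example.

import torch
import torch.nn as nn

class ResidualMemoryBlock(nn.Module):
    """Feed-forward layer with a time-delay input and a residual connection (sketch only)."""
    def __init__(self, hidden_size: int, delay: int):
        super().__init__()
        self.delay = delay                              # how far back the time-delay connection reaches (assumed value)
        self.ff = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, h):                               # h: (batch, time, hidden_size)
        # Time-delay connection: pair each frame with the hidden state `delay` steps earlier,
        # zero-padded at the start of the sequence.
        delayed = torch.zeros_like(h)
        delayed[:, self.delay:, :] = h[:, :-self.delay, :]
        out = torch.relu(self.ff(torch.cat([h, delayed], dim=-1)))
        return h + out                                  # residual connection

# Stacking several such blocks widens the effective time context without recurrence.
x = torch.randn(4, 20, 128)                             # (batch, time, features)
y = ResidualMemoryBlock(hidden_size=128, delay=2)(x)
print(y.shape)                                          # torch.Size([4, 20, 128])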
BibTeX:
@INPROCEEDINGS{benes_interspeech2017_IS171442,
   author = {Karel Bene{\v{s}} and Murali K. Baskar and
	Luk{\'{a}}{\v{s}} Burget},
   title = {Residual Memory Networks in Language Modeling:
	Improving the Reputation of Feed-Forward Networks},
   pages = {284--288},
   booktitle = {Proceedings of Interspeech 2017},
   journal = {Proceedings of Interspeech},
   volume = {2017},
   number = {08},
   year = {2017},
   location = {Stockholm, SE},
   publisher = {International Speech Communication Association},
   ISSN = {1990-9772},
   doi = {10.21437/Interspeech.2017-1442},
   language = {english},
   url = {http://www.fit.vutbr.cz/research/view_pub.php.cs?id=11578}
}
