GRÉZL František, KARAFIÁT Martin and ČERNOCKÝ Jan. Neural network topologies and bottle neck features in speech recognition. Brno, 2007.
Publication language: English
Original title: Neural network topologies and bottle neck features in speech recognition
Title (cs): Topologie neuronových sítí a bottle-neckové parametry v rozpoznávání řeči
Conference: Machine Learning for Multimodal Interaction
Place: Brno, CZ
Keywords: neural networks, topologies, speech recognition, bottle-neck features
Different neural network topologies for estimating features for speech recognition are presented. We introduce the bottle-neck structure into the previously proposed Split Context architecture, mainly to reduce the size of the resulting neural network, which serves as the feature estimator. When the bottle-neck outputs are used as the final outputs of the neural network instead of probability estimates, a reduction in word error rate is also achieved.
This poster overviews the newly proposed bottle-neck features and then examines the possibility of using a neural net structure with a bottle-neck in a hierarchical neural net classifier such as the Split Context classifier.

First, the neural net with a bottle-neck is used in place of the merger, to see whether the advantage observed for a single neural net also holds for the hierarchical classifier. Then we use bottle-neck neural nets in place of the context classifiers, feeding their bottle-neck outputs as input to a merger classifier. Finally, bottle-neck neural nets are used in both stages of the Split Context classifier. This improved Split Context structure has several advantages: the bottle-neck reduces the size of the resulting classifier, post-processing of the classifier output is cheaper than for probabilistic features, and a reduction in WER is achieved as well.
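A minimal sketch of this kind of structure is given below. It is not the authors' code: the layer sizes, the use of NumPy with random weights, and the choice of a linear bottle-neck layer are illustrative assumptions. It only shows the idea that two context nets with bottle-neck layers feed a merger net, and that the merger's bottle-neck activations are taken as the features instead of probability estimates.

```python
# Illustrative sketch only (not the paper's implementation): a hierarchical
# Split Context-style feature extractor where both the context nets and the
# merger contain a bottle-neck layer, and the merger's bottle-neck
# activations are used as features. All dimensions are assumed values.
import numpy as np

rng = np.random.default_rng(0)

def mlp_with_bottleneck(in_dim, hidden, bottleneck, out_dim):
    """Random weights for a net: in -> hidden -> bottleneck -> hidden -> out."""
    dims = [in_dim, hidden, bottleneck, hidden, out_dim]
    return [(rng.standard_normal((d_in, d_out)) * 0.1, np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def forward(net, x, stop_at_bottleneck=False):
    """Run the net; optionally return the bottle-neck activations as features."""
    h = x
    for i, (W, b) in enumerate(net):
        h = h @ W + b
        if i == 1 and stop_at_bottleneck:   # layer index 1 is the bottle-neck
            return h
        if i < len(net) - 1:
            h = np.tanh(h)                  # hidden non-linearity
    return h                                # posteriors would need a softmax here

# Two context nets (e.g. left/right temporal context) and a merger net.
left_ctx  = mlp_with_bottleneck(in_dim=115, hidden=500, bottleneck=30, out_dim=45)
right_ctx = mlp_with_bottleneck(in_dim=115, hidden=500, bottleneck=30, out_dim=45)
merger    = mlp_with_bottleneck(in_dim=60,  hidden=500, bottleneck=30, out_dim=45)

frame_left  = rng.standard_normal((1, 115))  # placeholder input features
frame_right = rng.standard_normal((1, 115))

# Stage 1: bottle-neck outputs of the context nets replace their posteriors.
bn_left  = forward(left_ctx,  frame_left,  stop_at_bottleneck=True)
bn_right = forward(right_ctx, frame_right, stop_at_bottleneck=True)

# Stage 2: the merger also stops at its bottle-neck; these values are the
# features passed on to the recognizer.
features = forward(merger, np.concatenate([bn_left, bn_right], axis=1),
                   stop_at_bottleneck=True)
print(features.shape)  # (1, 30)
```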
