Neural Representations in multi-modal and multi-lingual modeling

Czech title: Neuronové reprezentace v multimodálním a mnohojazyčném modelování
Research leader: Burget Lukáš
Team leaders: Karafiát Martin, Veselý Karel
Team members: Baskar Murali K., Beneš Karel
Agency: Czech Science Foundation
Code: GX19-26934X
Start: 2019-01-01
End: 2023-12-31
Keywords: deep learning; machine learning; neural networks; continuous representations; natural language processing; speech and text processing; machine translation; multi-modality; multi-linguality
Annotation:
The NEUREM3 project encompasses basic research in speech processing (SP) and natural language processing (NLP), with an emphasis on multi-linguality and multi-modality (speech and text processing supported by visual information). Current deep machine learning methods are based on continuous vector representations that neural networks (NNs) create themselves during training. Although the empirical results of such NNs are often excellent, our knowledge and understanding of these representations remain insufficient. NEUREM3 aims to fill this gap by studying neural representations for speech and text units of different scopes (from phonemes and letters to whole spoken and written documents), as well as representations acquired both in isolated tasks and in multi-task setups. NEUREM3 will also improve NN architectures and training techniques so that they can be trained on incomplete or incoherent data.
Project description:
Goals of the project:
Systematic study of neural structures for speech and text modeling in multi-modal and multi-lingual settings.
Addressing the hierarchy of neural representations, their human interpretability, and training under realistic conditions of non-ideal and incoherent data.