Publication Details

Jointly Trained Transformers Models for Spoken Language Translation

VYDANA Hari K., KARAFIÁT Martin, ŽMOLÍKOVÁ Kateřina, BURGET Lukáš and ČERNOCKÝ Jan. Jointly Trained Transformers Models for Spoken Language Translation. In: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Ontario: IEEE Signal Processing Society, 2021, pp. 7513-7517. ISBN 978-1-7281-7605-5.
Czech title
Společně trénované modely založené na Transformerech pro automatický překlad mluvené řeči
Type
conference paper
Language
english
Authors
VYDANA Hari K., KARAFIÁT Martin, ŽMOLÍKOVÁ Kateřina, BURGET Lukáš and ČERNOCKÝ Jan
URL
Keywords

Spoken Language Translation, Transformers, Joint training, How2 dataset, Auxiliary loss, ASR objective, Coupled decoding, End-to-End differentiable pipeline.

Abstract

End-to-End and cascade (ASR-MT) spoken language translation (SLT) systems are reaching comparable performance; however, a large degradation is observed when translating ASR hypotheses compared to using oracle input text. In this work, we reduce this degradation by creating an End-to-End differentiable pipeline between the ASR and MT systems: the SLT system is trained with the ASR objective as an auxiliary loss, and the two networks are connected through their neural hidden representations. This training provides an End-to-End differentiable path with respect to the final objective function and exploits the ASR objective for better optimization. The proposed architecture improves the BLEU score from 41.21 to 44.69. Ensembling it with independently trained ASR and MT systems further improves the BLEU score from 44.69 to 46.9. All experiments are reported on the English-Portuguese speech translation task using the How2 corpus. The final BLEU score is on par with the best speech translation system on the How2 dataset, without any additional training data or language model and with fewer parameters.
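To make the coupling concrete, below is a minimal PyTorch-style sketch of the joint training described in the abstract: two transformers connected through the ASR decoder's hidden representations and trained with the MT cross-entropy plus a weighted ASR auxiliary loss. The module sizes, layer counts, padding index and the loss weight asr_weight are illustrative assumptions rather than the paper's hyper-parameters; causal masks and positional encodings are omitted for brevity.

import torch.nn as nn

class JointSLT(nn.Module):
    """ASR and MT transformers coupled through neural hidden representations."""

    def __init__(self, d_model=256, n_heads=4, asr_vocab=5000, mt_vocab=5000):
        super().__init__()
        self.speech_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=6)
        self.asr_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), num_layers=4)
        self.mt_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=4)
        self.mt_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), num_layers=4)
        self.asr_emb = nn.Embedding(asr_vocab, d_model)
        self.mt_emb = nn.Embedding(mt_vocab, d_model)
        self.asr_proj = nn.Linear(d_model, asr_vocab)
        self.mt_proj = nn.Linear(d_model, mt_vocab)

    def forward(self, speech_feats, asr_in, mt_in):
        # speech_feats: (batch, frames, d_model); asr_in / mt_in: (batch, tokens)
        enc = self.speech_encoder(speech_feats)                   # acoustic encoding
        asr_hidden = self.asr_decoder(self.asr_emb(asr_in), enc)  # ASR decoder states
        asr_logits = self.asr_proj(asr_hidden)                    # auxiliary ASR head
        # The MT encoder consumes the ASR decoder's continuous hidden states
        # rather than a discrete hypothesis, so the whole path stays differentiable.
        mt_memory = self.mt_encoder(asr_hidden)
        mt_logits = self.mt_proj(self.mt_decoder(self.mt_emb(mt_in), mt_memory))
        return asr_logits, mt_logits


def joint_loss(asr_logits, asr_tgt, mt_logits, mt_tgt, asr_weight=0.3):
    # Final objective: MT cross-entropy plus the ASR objective as an auxiliary loss.
    ce = nn.CrossEntropyLoss(ignore_index=0)            # 0 assumed to be the pad id
    asr_loss = ce(asr_logits.transpose(1, 2), asr_tgt)
    mt_loss = ce(mt_logits.transpose(1, 2), mt_tgt)
    return mt_loss + asr_weight * asr_loss

Because the MT encoder reads continuous decoder states instead of a discretized transcript, gradients from the translation loss flow back through the ASR branch, which is what makes the pipeline End-to-End differentiable with respect to the final objective.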

Published
2021
Pages
7513-7517
Proceedings
ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference
2021 IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto, CA
ISBN
978-1-7281-7605-5
Publisher
IEEE Signal Processing Society
Place
Toronto, Ontario, CA
DOI
10.1109/ICASSP39728.2021.9414159
UT WoS
000704288407158
EID Scopus
BibTeX
@INPROCEEDINGS{FITPUB12522,
   author = "K. Hari Vydana and Martin Karafi\'{a}t and Kate\v{r}ina \v{Z}mol\'{i}kov\'{a} and Luk\'{a}\v{s} Burget and Jan \v{C}ernock\'{y}",
   title = "Jointly Trained Transformers Models for Spoken Language Translation",
   pages = "7513--7517",
   booktitle = "ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
   year = 2021,
   location = "Toronto, Ontario, CA",
   publisher = "IEEE Signal Processing Society",
   ISBN = "978-1-7281-7605-5",
   doi = "10.1109/ICASSP39728.2021.9414159",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12522"
}