no code implementations • IWSLT (EMNLP) 2018 • Evgeny Matusov, Patrick Wilken, Parnia Bahar, Julian Schamper, Pavel Golik, Albert Zeyer, Joan Albert Silvestre-Cerda, Adrià Martínez-Villaronga, Hendrik Pesch, Jan-Thorsten Peter
This work describes AppTek’s speech translation pipeline that includes strong state-of-the-art automatic speech recognition (ASR) and neural machine translation (NMT) components.
Automatic Speech Recognition (ASR) +4
no code implementations • EAMT 2022 • Mattia Di Gangi, Nick Rossenbach, Alejandro Pérez, Parnia Bahar, Eugen Beck, Patrick Wilken, Evgeny Matusov
Revoicing usually comes with a changed script, most often in a different language, and should reproduce the original emotions, remain coherent with the body language, and be lip-synchronized.
no code implementations • ACL (IWSLT) 2021 • Parnia Bahar, Patrick Wilken, Mattia A. Di Gangi, Evgeny Matusov
This paper describes the offline and simultaneous speech translation systems developed at AppTek for IWSLT 2021.
Automatic Speech Recognition (ASR) +4
no code implementations • IWSLT 2017 • Parnia Bahar, Jan Rosendahl, Nick Rossenbach, Hermann Ney
This work describes the Neural Machine Translation (NMT) system of RWTH Aachen University developed for the English→German tracks of the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT) 2017.
1 code implementation • 6 Jun 2023 • Parnia Bahar, Mattia Di Gangi, Nick Rossenbach, Mohammad Zeineldeen
Automatic Arabic diacritization is useful in many applications, ranging from reading support for language learners to accurate pronunciation prediction for downstream tasks like speech synthesis.
no code implementations • 24 Nov 2020 • Parnia Bahar, Christopher Brix, Hermann Ney
Neural translation models have proven to be effective in capturing sufficient information from a source sentence and generating a high-quality target sentence.
no code implementations • 24 Nov 2020 • Parnia Bahar, Tobias Bieschke, Ralf Schlüter, Hermann Ney
Direct speech translation is an alternative method to avoid error propagation; however, its performance often lags behind that of cascade systems.
no code implementations • WS 2020 • Parnia Bahar, Patrick Wilken, Tamer Alkhouli, Andreas Guta, Pavel Golik, Evgeny Matusov, Christian Herold
AppTek and RWTH Aachen University team together to participate in the offline and simultaneous speech translation tracks of IWSLT 2020.
Automatic Speech Recognition (ASR) +5
no code implementations • ACL 2020 • Christopher Brix, Parnia Bahar, Hermann Ney
Sparse models require less memory for storage and enable faster inference by reducing the necessary number of FLOPs.
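The FLOP reduction from sparsity typically comes from zeroing out low-magnitude weights. As a minimal sketch of generic magnitude pruning (the paper's actual pruning criterion and schedule may differ; `magnitude_prune` and its parameters are illustrative, not from the paper):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    A generic magnitude-pruning sketch: the `sparsity` fraction of
    entries with the smallest absolute values is set to zero.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask
```

Zeroed entries can then be stored in a sparse format and skipped during matrix multiplication, which is where the memory and FLOP savings come from.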
no code implementations • 20 Nov 2019 • Parnia Bahar, Tobias Bieschke, Hermann Ney
Recent advances in deep learning show that end-to-end speech-to-text translation models are a promising approach to direct speech translation.
no code implementations • EMNLP (IWSLT) 2019 • Parnia Bahar, Albert Zeyer, Ralf Schlüter, Hermann Ney
This work investigates a simple data augmentation technique, SpecAugment, for end-to-end speech translation.
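SpecAugment masks random frequency bands and time spans of the input spectrogram during training. As a minimal sketch of this masking idea (mask counts and widths below are illustrative defaults, not the values used in the paper):

```python
import numpy as np

def spec_augment(spectrogram, num_freq_masks=2, freq_mask_width=27,
                 num_time_masks=2, time_mask_width=100, rng=None):
    """Apply SpecAugment-style masking to a (time, freq) spectrogram.

    Randomly zeroes out a few frequency bands and time spans; the
    model must then learn to be robust to the missing information.
    """
    if rng is None:
        rng = np.random.default_rng()
    spec = spectrogram.copy()
    num_frames, num_bins = spec.shape
    for _ in range(num_freq_masks):
        f = rng.integers(0, freq_mask_width + 1)   # mask width
        f0 = rng.integers(0, max(1, num_bins - f + 1))
        spec[:, f0:f0 + f] = 0.0  # zero out a band of frequency bins
    for _ in range(num_time_masks):
        t = rng.integers(0, time_mask_width + 1)
        t0 = rng.integers(0, max(1, num_frames - t + 1))
        spec[t0:t0 + t, :] = 0.0  # zero out a span of time frames
    return spec
```

Because the masking is applied on the fly, each epoch sees differently corrupted versions of the same utterance, which acts as data augmentation without any extra audio.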
no code implementations • 20 Nov 2019 • Parnia Bahar, Albert Zeyer, Ralf Schlüter, Hermann Ney
Attention-based sequence-to-sequence models have shown promising results in automatic speech recognition.
Automatic Speech Recognition (ASR) +1
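At the core of such attention-based models, the decoder forms each output as a learned weighted sum over encoder states. A minimal sketch of generic scaled dot-product attention (a common building block; the paper's actual attention variant may differ):

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Generic scaled dot-product attention.

    queries: (T_q, d_k), keys: (T_k, d_k), values: (T_k, d_v).
    Returns a (T_q, d_v) weighted sum of values, where the weights are
    a softmax over query-key similarities.
    """
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)           # (T_q, T_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over key positions
    return weights @ values                            # weighted sum of values
```

In ASR, the queries come from the decoder state and the keys and values from the encoded audio frames, letting the model focus on the frames relevant to the next output token.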
no code implementations • WS 2019 • Jan Rosendahl, Christian Herold, Yunsu Kim, Miguel Graça, Weiyue Wang, Parnia Bahar, Yingbo Gao, Hermann Ney
For the De-En task, none of the tested methods yields a significant improvement over last year's winning system, and we end up with the same performance of 39.6% BLEU on newstest2019.
1 code implementation • EMNLP 2018 • Parnia Bahar, Christopher Brix, Hermann Ney
This work investigates an alternative model for neural machine translation (NMT) and proposes a novel architecture, where we employ a multi-dimensional long short-term memory (MDLSTM) for translation modeling.
1 code implementation • WS 2018 • Julian Schamper, Jan Rosendahl, Parnia Bahar, Yunsu Kim, Arne Nix, Hermann Ney
In total we improve by 6.8% BLEU over our last year's submission and by 4.8% BLEU over the winning system of the 2017 German→English task.