no code implementations • EAMT 2022 • Ricardo Rei, Ana C Farinha, José G.C. de Souza, Pedro G. Ramos, André F.T. Martins, Luisa Coheur, Alon Lavie
In recent years, several fine-tuned neural machine translation evaluation metrics, such as COMET and BLEURT, have been proposed.
no code implementations • 16 Oct 2023 • Nuno M. Guerreiro, Ricardo Rei, Daan van Stigt, Luisa Coheur, Pierre Colombo, André F. T. Martins
Widely used learned metrics for machine translation evaluation, such as COMET and BLEURT, estimate the quality of a translation hypothesis by providing a single sentence-level score.
no code implementations • 21 Sep 2023 • Ricardo Rei, Nuno M. Guerreiro, José Pombal, Daan van Stigt, Marcos Treviso, Luisa Coheur, José G. C. de Souza, André F. T. Martins
Our team participated in all tasks: sentence- and word-level quality prediction (task 1) and fine-grained error span detection (task 2).
no code implementations • 8 Sep 2023 • Patrícia Pereira, Rui Ribeiro, Helena Moniz, Luisa Coheur, Joao Paulo Carvalho
Fuzzy Fingerprints have been successfully used as an interpretable text classification technique, but, like most other techniques, have been largely surpassed in performance by Large Pre-trained Language Models, such as BERT or RoBERTa.
no code implementations • 28 Jul 2023 • Rita Costa, Bruno Martins, Sérgio Viana, Luisa Coheur
State-of-the-art models for intent induction require annotated datasets.
no code implementations • 12 Jul 2023 • Inês Lacerda, Hugo Nicolau, Luisa Coheur
Current signing avatars are often described as unnatural, as they cannot accurately reproduce all the subtleties of the synchronized body behaviors of a human signer.
1 code implementation • 19 May 2023 • Ricardo Rei, Nuno M. Guerreiro, Marcos Treviso, Luisa Coheur, Alon Lavie, André F. T. Martins
Neural metrics for machine translation evaluation, such as COMET, exhibit significant improvements in their correlation with human judgments, as compared to traditional metrics based on lexical overlap, such as BLEU.
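The correlation with human judgments mentioned above is typically measured with a rank correlation such as Kendall's tau. A minimal pure-Python sketch of the idea (the toy scores below are illustrative, not data from the paper):

```python
def kendall_tau(metric_scores, human_scores):
    """Kendall's tau-a: fraction of concordant minus discordant pairs.

    Ties are ignored for simplicity; real evaluations usually use a
    tie-corrected variant (tau-b) from a statistics library.
    """
    n = len(metric_scores)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (metric_scores[i] - metric_scores[j]) * (human_scores[i] - human_scores[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Toy example: four translation hypotheses scored by a metric and by humans.
metric = [0.3, 0.4, 0.8, 0.5]
human = [0.2, 0.5, 0.9, 0.4]
print(kendall_tau(metric, human))
```

A metric that ranks hypotheses exactly as humans do gets tau = 1.0; a reversed ranking gets -1.0.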
no code implementations • 26 Apr 2023 • Hugo Rodrigues, Eric Nyberg, Luisa Coheur
Each generated question, after being corrected by the user, is used as a new seed in the next iteration, so more patterns are created each time.
1 code implementation • 13 Sep 2022 • Ricardo Rei, Marcos Treviso, Nuno M. Guerreiro, Chrysoula Zerva, Ana C. Farinha, Christine Maroti, José G. C. de Souza, Taisiya Glushkova, Duarte M. Alves, Alon Lavie, Luisa Coheur, André F. T. Martins
We present the joint contribution of IST and Unbabel to the WMT 2022 Shared Task on Quality Estimation (QE).
no code implementations • 24 Jul 2022 • Isabel Dias, Ricardo Rei, Patrícia Pereira, Luisa Coheur
In this paper, we propose an end-to-end sentiment-aware conversational agent based on two models: a reply sentiment prediction model, which leverages the context of the dialogue to predict an appropriate sentiment for the agent to express in its reply; and a text generation model, which is conditioned on the predicted sentiment and the context of the dialogue, to produce a reply that is both context and sentiment appropriate.
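The two-stage pipeline described above can be sketched with toy stand-ins for both models (the rule-based functions below are hypothetical placeholders, not the paper's trained models):

```python
def predict_reply_sentiment(context):
    """Stage 1 stand-in: predict a sentiment for the agent's reply from the
    dialogue context. A real system would use a trained classifier."""
    last_turn = context[-1].lower()
    if any(w in last_turn for w in ("great", "love", "thanks")):
        return "positive"
    if any(w in last_turn for w in ("awful", "hate", "sad")):
        return "negative"
    return "neutral"

def generate_reply(context, sentiment):
    """Stage 2 stand-in: generate a reply conditioned on both the context
    and the predicted sentiment. A real system would use a conditioned
    text generation model."""
    openers = {
        "positive": "Glad to hear it!",
        "negative": "I'm sorry about that.",
        "neutral": "I see.",
    }
    return openers[sentiment] + " Tell me more."

def reply(context):
    # Sentiment prediction feeds the generator, so the reply is both
    # context- and sentiment-appropriate.
    return generate_reply(context, predict_reply_sentiment(context))
```

The design point is the decoupling: the sentiment model decides *what to express*, the generator decides *how to say it*.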
1 code implementation • 9 Mar 2022 • Vânia Mendonça, Ricardo Rei, Luisa Coheur, Alberto Sardinha
Moreover, since we do not know in advance which query strategy will be the most adequate for a given language pair and set of Machine Translation models, we propose to dynamically combine multiple strategies using prediction with expert advice.
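Prediction with expert advice can be illustrated with the classic exponentially weighted (Hedge) forecaster: each query strategy is an "expert", and strategies that incur low loss gain weight multiplicatively. A minimal sketch under toy assumptions (the loss values below are illustrative, not the paper's setup):

```python
import math

def hedge_weights(loss_rounds, eta=0.5):
    """Exponentially weighted forecaster over experts (here, query strategies).

    loss_rounds[t][k] is the loss of expert k at round t, assumed in [0, 1].
    Returns the final normalized weight of each expert.
    """
    n_experts = len(loss_rounds[0])
    weights = [1.0] * n_experts
    for losses in loss_rounds:
        # Multiplicative update: low-loss experts keep more of their weight.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(weights)
    return [w / total for w in weights]

# Two hypothetical strategies over 5 rounds: strategy 0 is consistently better.
losses = [[0.0, 1.0]] * 5
print(hedge_weights(losses))
```

After a few rounds, nearly all weight concentrates on the consistently better strategy, which is the behavior that lets the combined learner track the best expert without knowing it in advance.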
no code implementations • ACL 2021 • Ricardo Rei, Ana C Farinha, Craig Stewart, Luisa Coheur, Alon Lavie
We present MT-Telescope, a visualization platform designed to facilitate comparative analysis of the output quality of two Machine Translation (MT) systems.
1 code implementation • ACL 2021 • Vânia Mendonça, Ricardo Rei, Luisa Coheur, Alberto Sardinha, Ana Lúcia Santos
In Machine Translation, assessing the quality of a large amount of automatic translations can be challenging.
no code implementations • LREC 2020 • Hugo Gonçalo Oliveira, João Ferreira, José Santos, Pedro Fialho, Ricardo Rodrigues, Luisa Coheur, Ana Alves
Matching variations with their original questions proved non-trivial for a set of unsupervised baselines, especially for manually created variations.
no code implementations • 13 Jun 2016 • Pedro Mota, Maxine Eskenazi, Luisa Coheur
In this context, we study how different weighting mechanisms influence the discovery of word communities that relate to the different topics found in the documents.
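One common weighting mechanism for word co-occurrence graphs is pointwise mutual information (PMI), which downweights edges that merely reflect high word frequency. A toy pure-Python sketch (the corpus and the choice of PMI are illustrative; the paper's actual weighting schemes may differ):

```python
import math
from collections import Counter
from itertools import combinations

def pmi_edges(sentences):
    """Weight word-pair edges by PMI instead of raw co-occurrence counts.

    Probabilities are estimated crudely from sentence-level co-occurrence;
    keys are alphabetically sorted word pairs.
    """
    word_counts = Counter()
    pair_counts = Counter()
    for sent in sentences:
        words = set(sent.split())
        word_counts.update(words)
        pair_counts.update(combinations(sorted(words), 2))
    total = sum(word_counts.values())
    edges = {}
    for (a, b), count in pair_counts.items():
        p_ab = count / total
        p_a = word_counts[a] / total
        p_b = word_counts[b] / total
        # PMI > 0: the pair co-occurs more than chance predicts.
        edges[(a, b)] = math.log(p_ab / (p_a * p_b))
    return edges

corpus = ["cats purr", "cats purr", "dogs bark", "cats bark"]
print(pmi_edges(corpus))
```

Under PMI weighting, the strongly associated pair ("cats", "purr") receives a heavier edge than the incidental pair ("bark", "cats"), even though "cats" is the most frequent word, and it is such differences that shape which word communities emerge.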
no code implementations • 16 Jan 2014 • João V. Graça, Kuzman Ganchev, Luisa Coheur, Fernando Pereira, Ben Taskar
We consider the problem of fully unsupervised learning of grammatical (part-of-speech) categories from unlabeled text.