no code implementations • JEP/TALN/RECITAL 2021 • Gaël Guibon, Matthieu Labeau, Hélène Flamein, Luce Lefeuvre, Chloé Clavel
In this paper, we reproduce a learning scenario in which the target data are not accessible and only related data are available.
1 code implementation • ACL (MetaNLP) 2021 • Gaël Guibon, Matthieu Labeau, Hélène Flamein, Luce Lefeuvre, Chloé Clavel
In this paper, we place ourselves in a classification scenario in which the target classes and data type are not accessible during training.
no code implementations • JEP/TALN/RECITAL 2022 • Vanessa Gaudray Bouju, Margot Guettier, Gwennola Lerus, Gaël Guibon, Matthieu Labeau, Luce Lefeuvre
This paper presents the approach of the TGV team in its participation in the main task of DEFT 2022, whose goal was to automatically predict the grades obtained by students based on their answers to questionnaires.
no code implementations • EMNLP 2021 • Pierre Colombo, Emile Chapuis, Matthieu Labeau, Chloé Clavel
Spoken dialogue systems need to be able to handle both multiple languages and multilinguality inside a conversation (e.g. in case of code-switching).
1 code implementation • COLING 2022 • Aina Garí Soler, Matthieu Labeau, Chloé Clavel
The way we use words is influenced by our opinion.
no code implementations • LREC 2022 • Gaël Guibon, Luce Lefeuvre, Matthieu Labeau, Chloé Clavel
We also present our first usage of EZCAT, along with the annotation schema we used to annotate confidential customer service conversations.
1 code implementation • LREC 2022 • Aina Garí Soler, Matthieu Labeau, Chloé Clavel
Our discourses are full of potential lexical ambiguities, due in part to the pervasive use of words having multiple senses.
no code implementations • IWSLT 2016 • Franck Burlot, Matthieu Labeau, Elena Knyazeva, Thomas Lavergne, Alexandre Allauzen, François Yvon
This paper describes LIMSI’s submission to the MT track of IWSLT 2016.
1 code implementation • 22 Feb 2024 • Aina Garí Soler, Matthieu Labeau, Chloé Clavel
When deriving contextualized word representations from language models, a decision needs to be made on how to obtain one for out-of-vocabulary (OOV) words that are segmented into subwords.
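For illustration, a minimal sketch of one common choice for this decision, mean-pooling the hidden states of the word's subwords, assuming a HuggingFace-style tokenizer and model (the model name and pooling strategy below are illustrative assumptions, not necessarily the paper's):

import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative model choice; any model with a fast tokenizer works the same way.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def oov_word_vector(sentence: str, word_index: int) -> torch.Tensor:
    """Mean-pool the subword states belonging to the word at position
    `word_index` (as counted by the tokenizer's pre-tokenization)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]            # (seq_len, dim)
    positions = [i for i, w in enumerate(enc.word_ids(0)) if w == word_index]
    return hidden[positions].mean(dim=0)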
no code implementations • 19 Feb 2024 • Paul Krzakala, Junjie Yang, Rémi Flamary, Florence d'Alché-Buc, Charlotte Laclau, Matthieu Labeau
We present a novel end-to-end deep learning-based approach for Supervised Graph Prediction (SGP).
no code implementations • 28 Sep 2023 • Junjie Yang, Matthieu Labeau, Florence d'Alché-Buc
Pairwise comparison of graphs is key to many applications in machine learning, ranging from clustering and kernel-based classification/regression to, more recently, supervised graph prediction.
no code implementations • 31 Mar 2022 • Chloé Clavel, Matthieu Labeau, Justine Cassell
In this paper we survey these neural architectures and what they have been applied to.
1 code implementation • EMNLP 2021 • Gaël Guibon, Matthieu Labeau, Hélène Flamein, Luce Lefeuvre, Chloé Clavel
We test this method on two datasets with different languages: daily conversations in English and customer service chat conversations in French.
Emotion Classification • Emotion Recognition in Conversation • +1
no code implementations • EMNLP 2021 • Pierre Colombo, Emile Chapuis, Matthieu Labeau, Chloe Clavel
We demonstrate that our new penalties lead to a consistent improvement (up to $4.3$ on accuracy) across a large variety of state-of-the-art models on two well-known sentiment analysis datasets: CMU-MOSI and CMU-MOSEI.
no code implementations • 27 Aug 2021 • Emile Chapuis, Pierre Colombo, Matthieu Labeau, Chloe Clavel
Spoken dialog systems need to be able to handle both multiple languages and multilinguality inside a conversation (e.g. in case of code-switching).
no code implementations • EMNLP 2020 • Tanvi Dinkar, Pierre Colombo, Matthieu Labeau, Chloé Clavel
While being an essential component of spoken language, fillers (e.g. "um" or "uh") often remain overlooked in Spoken Language Understanding (SLU) tasks.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Emile Chapuis, Pierre Colombo, Matteo Manica, Matthieu Labeau, Chloe Clavel
We obtain our representations with a hierarchical encoder based on transformer architectures, for which we extend two well-known pre-training objectives.
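As a rough illustration of such a hierarchical encoder (a word-level transformer per utterance, then an utterance-level transformer over the resulting utterance vectors; all dimensions and layer counts below are illustrative assumptions, not the paper's configuration):

import torch
import torch.nn as nn

class HierarchicalDialogEncoder(nn.Module):
    """Two-level encoder: words -> utterance vectors -> contextualized utterances."""
    def __init__(self, vocab_size, dim=256, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.word_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, n_heads, batch_first=True), n_layers)
        self.utt_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, n_heads, batch_first=True), n_layers)

    def forward(self, dialog):                                 # (n_utts, n_words)
        words = self.word_encoder(self.embed(dialog))          # (n_utts, n_words, dim)
        utt_vectors = words.mean(dim=1).unsqueeze(0)           # (1, n_utts, dim)
        return self.utt_encoder(utt_vectors).squeeze(0)        # (n_utts, dim)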
Ranked #1 on Text Classification on SILICONE Benchmark
Dialogue Act Classification • Emotion Recognition in Conversation • +1
1 code implementation • ICLR 2020 • Yi Ren, Shangmin Guo, Matthieu Labeau, Shay B. Cohen, Simon Kirby
The principle of compositionality, which enables natural language to represent complex concepts via a structured combination of simpler ones, allows us to convey an open-ended set of messages using a limited vocabulary.
no code implementations • IJCNLP 2019 • Matthieu Labeau, Shay B. Cohen
In this paper, we experiment with several families (alpha, beta and gamma) of power divergences, generalized from the KL divergence, for learning language models with an objective different than standard MLE.
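For reference, one standard parametrization of the alpha-divergence (normalization conventions vary, and the exact form used in the paper may differ) recovers the KL divergence in the limit:

$$D_{\alpha}(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)}\Big(1 - \sum_{x} p(x)^{\alpha}\, q(x)^{1-\alpha}\Big),
\qquad \lim_{\alpha \to 1} D_{\alpha}(p \,\|\, q) = \mathrm{KL}(p \,\|\, q).$$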
no code implementations • COLING 2018 • Matthieu Labeau, Alexandre Allauzen
Noise-Contrastive Estimation (NCE) is a learning criterion that is regularly used to train neural language models in place of Maximum Likelihood Estimation, since it avoids the computational bottleneck caused by the output softmax.
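A minimal sketch of the per-example NCE loss with k noise samples, assuming unnormalized model scores s(w, h) and a known noise distribution q (the function names and the self-normalization assumption are illustrative, not the paper's exact setup):

import math
import torch
import torch.nn.functional as F

def nce_loss(target_score, noise_scores, target_noise_logprob, noise_logprobs, k):
    """target_score: unnormalized score s(w, h) of the observed word;
    noise_scores: (k,) scores of the sampled noise words;
    *_logprob(s): log q(.) of those words under the noise distribution."""
    # P(data | w, h) = sigmoid(s(w, h) - log(k * q(w))), treating exp(s) as self-normalized,
    # so no softmax over the full vocabulary is ever computed.
    pos = F.logsigmoid(target_score - (math.log(k) + target_noise_logprob))
    neg = F.logsigmoid(-(noise_scores - (math.log(k) + noise_logprobs))).sum()
    return -(pos + neg)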
no code implementations • JEPTALNRECITAL 2018 • Matthieu Labeau, Alexandre Allauzen
Noise-Contrastive Estimation (NCE) and Importance Sampling (IS) are sampling-based training procedures that are usually used in place of Maximum Likelihood Estimation (MLE) to avoid computing the softmax when training neural language models.
no code implementations • WS 2017 • Matthieu Labeau, Alexandre Allauzen
Most neural language models use different kinds of embeddings for word prediction.
no code implementations • JEPTALNRECITAL 2017 • Éléonor Bartenlian, Margot Lacour, Matthieu Labeau, Alexandre Allauzen, Guillaume Wisniewski, François Yvon
This work seeks to understand why the performance of a part-of-speech tagger drops sharply when it is used on out-of-domain data.
no code implementations • JEPTALNRECITAL 2017 • Matthieu Labeau, Alexandre Allauzen
Continuous word representations are computed on the fly from their component characters, using a convolutional layer followed by a pooling layer.
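A minimal sketch of such a character-level encoder (character embeddings, a 1-D convolution, then max-pooling over character positions; all dimensions below are illustrative assumptions):

import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    """Builds one word vector from its characters: embed -> convolve -> max-pool."""
    def __init__(self, n_chars, char_dim=32, word_dim=128, kernel_size=3):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, word_dim, kernel_size, padding=1)

    def forward(self, chars):                       # chars: (n_words, max_word_len)
        x = self.char_embed(chars).transpose(1, 2)  # (n_words, char_dim, len)
        x = torch.relu(self.conv(x))                # (n_words, word_dim, len)
        return x.max(dim=2).values                  # max-pool over character positions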
no code implementations • EACL 2017 • Matthieu Labeau, Alexandre Allauzen
Noise Contrastive Estimation (NCE) is a learning procedure that is regularly used to train neural language models, since it avoids the computational bottleneck caused by the output softmax.