no code implementations • NoDaLiDa 2021 • Antonia Karamolegkou, Sara Stymne
However, it has been suggested that languages such as Ancient Greek could be helpful when parsing Latin.
1 code implementation • EMNLP (ArgMining) 2021 • Aris Fergadis, Dimitris Pappas, Antonia Karamolegkou, Haris Papageorgiou
We also present a set of strong, BERT-based neural baselines achieving an F1-score of 70.0 for Claim and 62.4 for Evidence identification, evaluated with 10-fold cross-validation.
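For readers unfamiliar with the evaluation protocol, the sketch below shows a generic 10-fold cross-validated F1 evaluation in scikit-learn. The random features and logistic-regression classifier are placeholders, not the authors' BERT-based pipeline.

```python
# Hypothetical sketch of 10-fold cross-validated F1 evaluation.
# Features and labels are synthetic stand-ins for real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

X = np.random.rand(200, 768)            # stand-in for BERT sentence embeddings
y = np.random.randint(0, 2, size=200)   # 1 = Claim (or Evidence), 0 = other

f1_scores = []
for train_idx, test_idx in StratifiedKFold(
        n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    f1_scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))

# Report the mean F1 across folds on a 0-100 scale, as in the paper
print(f"mean F1 over 10 folds: {np.mean(f1_scores) * 100:.1f}")
```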
no code implementations • 8 Feb 2024 • Yong Cao, Wenyan Li, Jiaang Li, Yifei Yuan, Antonia Karamolegkou, Daniel Hershcovich
Pretrained large Vision-Language models have drawn considerable interest in recent years due to their remarkable performance.
no code implementations • 26 Oct 2023 • Yong Cao, Yova Kementchedjhieva, Ruixiang Cui, Antonia Karamolegkou, Li Zhou, Megan Dare, Lucia Donatelli, Daniel Hershcovich
We introduce a new task involving the translation and cultural adaptation of recipes between Chinese and English-speaking cuisines.
1 code implementation • 20 Oct 2023 • Antonia Karamolegkou, Jiaang Li, Li Zhou, Anders Søgaard
Language models may memorize more than just facts, including entire chunks of texts seen during training.
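As an illustration of this kind of memorization probing (a minimal sketch, not the paper's exact method), one can prompt a causal language model with the opening of a famous passage and count how many tokens of the true continuation it reproduces verbatim. The model, passage, and overlap metric below are illustrative choices.

```python
# Minimal memorization probe: prompt with a known prefix, then measure
# how much of the reference continuation the model reproduces verbatim.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM; larger models memorize more
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prefix = "It was the best of times, it was the worst of times,"
reference = " it was the age of wisdom, it was the age of foolishness"

ids = tok(prefix, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=20, do_sample=False)
continuation = tok.decode(out[0][ids.shape[1]:])

# Crude overlap metric: length of the longest common token prefix
ref_ids = tok(reference).input_ids
gen_ids = out[0][ids.shape[1]:].tolist()
overlap = next((i for i, (a, b) in enumerate(zip(gen_ids, ref_ids)) if a != b),
               min(len(gen_ids), len(ref_ids)))
print(f"generated: {continuation!r}")
print(f"tokens reproduced verbatim: {overlap}")
```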
1 code implementation • 10 Oct 2023 • Li Zhou, Antonia Karamolegkou, Wenyu Chen, Daniel Hershcovich
The increasing ubiquity of language technology necessitates a shift towards considering cultural diversity in the machine learning realm, particularly for subjective tasks that rely heavily on cultural nuances, such as Offensive Language Detection (OLD).
no code implementations • 8 Jun 2023 • Antonia Karamolegkou, Mostafa Abdou, Anders Søgaard
Over the years, many researchers have seemingly made the same observation: Brain and language model activations exhibit some structural similarities, enabling linear partial mappings between features extracted from neural recordings and computational language models.
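A hedged sketch of what such a linear partial mapping looks like in practice: ridge regression from language-model features to per-voxel neural responses, scored by correlation on held-out items. The arrays here are synthetic placeholders standing in for real fMRI/MEG recordings and model hidden states.

```python
# Illustrative linear mapping from LM features to neural responses.
# All data are random placeholders, not real recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_words, lm_dim, n_voxels = 500, 768, 1000
lm_features = np.random.randn(n_words, lm_dim)   # e.g., hidden states per word
brain = np.random.randn(n_words, n_voxels)       # e.g., fMRI responses per word

X_tr, X_te, y_tr, y_te = train_test_split(lm_features, brain, random_state=0)
mapping = Ridge(alpha=1.0).fit(X_tr, y_tr)

# Per-voxel Pearson correlation between predicted and observed responses
pred = mapping.predict(X_te)
r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(r):.3f}")
```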
1 code implementation • 2 Jun 2023 • Jiaang Li, Antonia Karamolegkou, Yova Kementchedjhieva, Mostafa Abdou, Sune Lehmann, Anders Søgaard
Human language processing, like that of language models, is opaque, but neural response measurements can provide (noisy) recordings of activation during listening or reading, from which we can extract similar representations of words and phrases.
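One standard way to compare representations extracted from the two systems is representational similarity analysis; the sketch below is illustrative under that assumption, not necessarily the method used in the paper, and all data are synthetic.

```python
# Representational similarity analysis (RSA): correlate the pairwise
# word distances in brain-derived vectors with those in LM embeddings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_words, brain_dim, lm_dim = 100, 300, 768
brain_vecs = np.random.randn(n_words, brain_dim)  # stand-in for neural-derived vectors
lm_vecs = np.random.randn(n_words, lm_dim)        # stand-in for LM word embeddings

# Condensed pairwise cosine-distance vectors (the two "RDMs")
rdm_brain = pdist(brain_vecs, metric="cosine")
rdm_lm = pdist(lm_vecs, metric="cosine")

rho, p = spearmanr(rdm_brain, rdm_lm)
print(f"RSA Spearman correlation: {rho:.3f} (p={p:.3g})")
```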