no code implementations • 13 Feb 2024 • Freddy Heppell, Mehmet E. Bakir, Kalina Bontcheva
As Large Language Models (LLMs) become more proficient, their misuse in large-scale viral disinformation campaigns is a growing concern.
1 code implementation • 21 Oct 2023 • Freddy Heppell, Kalina Bontcheva, Carolina Scarton
This paper analyses two hitherto unstudied sites sharing state-backed disinformation, Reliable Recent News (rrn.world) and WarOnFakes (waronfakes.com), which publish content in Arabic, Chinese, English, French, German, and Spanish.
1 code implementation • 14 Aug 2023 • Olesya Razuvayevskaya, Ben Wu, Joao A. Leite, Freddy Heppell, Ivan Srba, Carolina Scarton, Kalina Bontcheva, Xingyi Song
Adapters and Low-Rank Adaptation (LoRA) are parameter-efficient fine-tuning techniques designed to make the training of language models more efficient.
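The core idea behind LoRA is easiest to see in code: the pretrained weight matrix stays frozen, and only a pair of small low-rank matrices is trained. Below is a minimal PyTorch sketch of such a layer; the class name `LoRALinear` and the hyperparameter values `r` and `alpha` are illustrative defaults, not taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update (LoRA sketch).

    The pretrained weight W is frozen; only the rank-r factors A and B
    are trained, so the output is W x + (alpha / r) * B A x.
    """
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze pretrained weight
        self.base.bias.requires_grad_(False)
        # A: small random init; B: zeros, so the update is a no-op at start
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # frozen base projection plus the scaled low-rank correction
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)
```

Because only `lora_a` and `lora_b` receive gradients, the number of trainable parameters drops from `in_features * out_features` to `r * (in_features + out_features)`, which is what makes the approach parameter-efficient.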
no code implementations • 10 Apr 2023 • Yida Mu, Ye Jiang, Freddy Heppell, Iknoor Singh, Carolina Scarton, Kalina Bontcheva, Xingyi Song
This motivated us to carry out a comparative study of the characteristics of COVID-19 misinformation versus those of accurate COVID-19 information through a large-scale computational analysis of over 242 million tweets.
1 code implementation • 16 Mar 2023 • Ben Wu, Olesya Razuvayevskaya, Freddy Heppell, João A. Leite, Carolina Scarton, Kalina Bontcheva, Xingyi Song
For Subtask 2 (Framing), we achieved first place in 3 languages, and the best average rank across all the languages, by using two separate ensembles: a monolingual RoBERTa-MUPPET-large and an ensemble of XLM-RoBERTa-large with adapters and task adaptive pretraining.
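As an illustration of the ensembling step, the sketch below averages per-label sigmoid probabilities from two fine-tuned backbones, as is typical for a multi-label framing task. The checkpoint names and the `num_labels` value are hypothetical stand-ins; the paper's actual ensemble members additionally use adapters and task-adaptive pretraining.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical fine-tuned checkpoints standing in for the ensemble members.
CHECKPOINTS = ["roberta-large", "xlm-roberta-large"]

def ensemble_probs(text: str, num_labels: int = 14) -> torch.Tensor:
    """Average the per-label probabilities of each ensemble member."""
    member_probs = []
    for name in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(
            name, num_labels=num_labels
        )
        model.eval()
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        # sigmoid (not softmax): each frame label is predicted independently
        member_probs.append(torch.sigmoid(logits))
    return torch.stack(member_probs).mean(dim=0)
```

Averaging probabilities rather than hard votes lets a confident member outweigh an uncertain one, which tends to help when the members are trained on different backbones or languages.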