1 code implementation • EACL (WANLP) 2021 • Haitham Seelawi, Ibraheem Tuffaha, Mahmoud Gzawi, Wael Farhan, Bashar Talafha, Riham Badawi, Zyad Sober, Oday Al-Dweik, Abed Alhakim Freihat, Hussein Al-Natsheh
The emergence of Multi-task Learning (MTL) models in recent years has helped push the state of the art in Natural Language Understanding (NLU).
1 code implementation • COLING (WANLP) 2020 • Bashar Talafha, Mohammad Ali, Muhy Eddin Za'ter, Haitham Seelawi, Ibraheem Tuffaha, Mostafa Samir, Wael Farhan, Hussein T. Al-Natsheh
Our winning solution itself came in the form of an ensemble of different training iterations of our pre-trained BERT model, which achieved a micro-averaged F1-score of 26.78% on the subtask at hand.
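As a side note on the reported metric: micro-averaged F1 pools true positives, false positives, and false negatives across all classes before computing precision and recall. The sketch below is a generic illustration of that computation, not code from the paper; the label values in the usage comment are hypothetical.

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over a multi-class labeling.

    Pools per-class TP/FP/FN counts across all classes, then computes
    a single precision/recall/F1 from the pooled counts.
    """
    tp = fp = fn = 0
    for c in set(y_true) | set(y_pred):
        tp += sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp += sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn += sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative labels only: 3 of 4 predictions correct.
score = micro_f1([0, 1, 1, 2], [0, 1, 2, 2])  # 0.75
```

Note that for single-label multi-class classification, micro-averaged F1 coincides with plain accuracy; the distinction matters mainly in multi-label settings.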
1 code implementation • NSURL 2019 • Ali Fadel, Ibraheem Tuffaha, Mahmoud Al-Ayyoub
In this paper, we describe our team's effort on the semantic text question similarity task of NSURL 2019.
Ranked #2 on Question Similarity on Q2Q Arabic Benchmark
2 code implementations • WS 2019 • Ali Fadel, Ibraheem Tuffaha, Bara' Al-Jawarneh, Mahmoud Al-Ayyoub
In this work, we present several deep learning models for the automatic diacritization of Arabic text.
Ranked #2 on Arabic Text Diacritization on Tashkeela (using extra training data)
no code implementations • WS 2019 • Ali Fadel, Ibraheem Tuffaha, Mahmoud Al-Ayyoub
In this paper, we describe our team's effort on the fine-grained propaganda detection on sentence level classification (SLC) task of the NLP4IF 2019 workshop, co-located with the EMNLP-IJCNLP 2019 conference.
2 code implementations • 25 Apr 2019 • Ali Fadel, Ibraheem Tuffaha, Bara' Al-Jawarneh, Mahmoud Al-Ayyoub
After constructing the dataset, existing tools and systems are tested on it.
Ranked #6 on Arabic Text Diacritization on Tashkeela