no code implementations • 27 Mar 2024 • Ali Mahboub, Muhy Eddin Za'ter, Bashar Alfrou, Yazan Estaitia, Adnan Jaljuli, Asma Hakouz
Recent advances in machine learning and deep learning have brought semantic similarity to the fore; it has proven immensely beneficial in many applications and has largely replaced keyword search.
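A minimal sketch of the idea (not this paper's system): rank documents by the cosine similarity of their sentence embeddings, which can retrieve a relevant result even when the query shares no keywords with it. The model name and toy corpus are illustrative assumptions, using the sentence-transformers library.

```python
# Embedding-based semantic search vs. keyword matching (illustrative sketch).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose encoder

corpus = [
    "How do I reset my account password?",
    "Store opening hours during holidays",
    "Shipping costs for international orders",
]
query = "recovering a forgotten passcode"

# Keyword search finds nothing: the query shares no terms with any document.
query_terms = set(query.lower().split())
keyword_hits = [d for d in corpus if query_terms & set(d.lower().split())]
print(keyword_hits)  # []

# Semantic search ranks documents by cosine similarity of their embeddings.
doc_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_emb)[0]
print(corpus[int(scores.argmax())])  # likely the password-reset document
```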
no code implementations • 6 Nov 2022 • Muhy Eddin Za'ter
This approach trains a neural network on the measured inputs and outputs (currents and voltages) of a buck converter.
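As a hedged illustration of this kind of data-driven modeling (not the paper's algorithm), the sketch below fits a small neural network to synthetic buck-converter measurements generated from the ideal steady-state relation V_out = D * V_in; the signal ranges, noise level, and network size are all assumptions.

```python
# Fit a neural network to synthetic buck-converter input/output data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
duty = rng.uniform(0.1, 0.9, size=2000)          # duty cycle D
v_in = rng.uniform(10.0, 24.0, size=2000)        # input voltage (V)
v_out = duty * v_in + rng.normal(0, 0.05, 2000)  # noisy "measured" output (V)

X = np.column_stack([duty, v_in])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, v_out)

print(net.predict([[0.5, 12.0]]))  # ~6 V for D=0.5, V_in=12 V
```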
no code implementations • 7 Apr 2022 • Hala Al Masri, Muhy Eddin Za'ter
The purpose of this work is to shed light on how ground-truth utterances may influence the evolution of speech systems in terms of naturalness, intelligibility, and understanding.
1 code implementation • 1 Mar 2022 • Muhy Eddin Za'ter, Sandy Yacoub Miguel, Majd Ghazi Batarseh
The increasing demand for electricity and the need to meet it cost-effectively with clean, renewable energy resources impose new challenges on researchers and developers to maximize the output of these resources at all times.
no code implementations • 11 Feb 2022 • Muhy Eddin Za'ter, Bashar Talafha
The results showed that multi-task learning and pre-trained word embeddings noticeably enhanced the quality of image captioning; however, they also show that Arabic captioning still lags behind English.
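A schematic of the multi-task setup described (an illustrative sketch, not the paper's architecture): a shared image encoder feeds both a caption decoder warm-started with pre-trained word embeddings and an auxiliary classification head. All dimensions, the auxiliary task, and the embedding source are assumptions.

```python
# Multi-task captioning skeleton: shared encoder, two task heads.
import torch
import torch.nn as nn

class MultiTaskCaptioner(nn.Module):
    def __init__(self, pretrained_emb, feat_dim=512, hidden=256, n_classes=10):
        super().__init__()
        vocab, emb_dim = pretrained_emb.shape
        # Pre-trained word embeddings (e.g., Arabic fastText) warm-start the decoder.
        self.embed = nn.Embedding.from_pretrained(pretrained_emb, freeze=False)
        self.init_h = nn.Linear(feat_dim, hidden)       # image features -> LSTM state
        self.decoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.vocab_head = nn.Linear(hidden, vocab)      # captioning task
        self.cls_head = nn.Linear(feat_dim, n_classes)  # auxiliary task (assumed)

    def forward(self, img_feats, captions):
        h0 = torch.tanh(self.init_h(img_feats)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        out, _ = self.decoder(self.embed(captions), (h0, c0))
        return self.vocab_head(out), self.cls_head(img_feats)

emb = torch.randn(5000, 300)  # stand-in for real pre-trained vectors
model = MultiTaskCaptioner(emb)
logits, aux = model(torch.randn(4, 512), torch.randint(0, 5000, (4, 12)))
```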
no code implementations • 3 Aug 2021 • Bashar Talafha, Muhy Eddin Za'ter, Samer Suleiman, Mahmoud Al-Ayyoub, Mohammed N. Al-Kabi
The task of predicting whether a piece of text is sarcastic is known as automatic sarcasm detection.
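For orientation, a minimal sarcasm-detection baseline (illustrative only, not the models studied in this paper): TF-IDF n-gram features with a linear classifier over toy English examples.

```python
# Toy sarcasm-detection baseline: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Oh great, another Monday. Just what I needed.",  # sarcastic
    "The bus arrives at 8am every weekday.",          # literal
    "Wow, I love waiting in line for three hours.",   # sarcastic
    "The meeting was rescheduled to Thursday.",       # literal
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Fantastic, my flight got delayed again."]))
```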
no code implementations • 13 Dec 2020 • Wael Farhan, Muhy Eddin Za'ter, Qusai Abu Obaidah, Hisham al Bataineh, Zyad Sober, Hussein T. Al-Natsheh
LSTM and CNN networks were implemented on raw acoustic features (MFCC and Mel spectrograms), while an FCNN was explored on the pre-trained vectors; the hyper-parameters of these networks were varied to obtain the best results for each dataset and task.
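A sketch of that feature-plus-network recipe under assumed shapes and hyper-parameters (not the paper's tuned values): MFCC frames extracted with librosa feed an LSTM whose final hidden state drives a classifier.

```python
# MFCC extraction followed by an LSTM classifier (illustrative sizes).
import librosa
import torch
import torch.nn as nn

y, sr = librosa.load(librosa.ex("trumpet"))          # any mono waveform
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, n_frames)
x = torch.tensor(mfcc.T, dtype=torch.float32).unsqueeze(0)  # (1, T, 13)

class SpeechLSTM(nn.Module):
    def __init__(self, n_feats=13, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, (h, _) = self.lstm(x)  # final hidden state summarizes the clip
        return self.head(h[-1])

print(SpeechLSTM()(x).shape)  # (1, 4) class logits
```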
1 code implementation • COLING (WANLP) 2020 • Bashar Talafha, Mohammad Ali, Muhy Eddin Za'ter, Haitham Seelawi, Ibraheem Tuffaha, Mostafa Samir, Wael Farhan, Hussein T. Al-Natsheh
Our winning solution was an ensemble of different training iterations of our pre-trained BERT model, which achieved a micro-averaged F1-score of 26.78% on the subtask at hand.
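A hedged sketch of checkpoint ensembling in this spirit, assuming the Hugging Face transformers API: the checkpoint paths, label count, and the choice to average softmax probabilities (rather than, say, majority voting) are assumptions, not the team's exact recipe.

```python
# Average class probabilities across several fine-tuning checkpoints.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpts = ["ckpt_epoch2", "ckpt_epoch3", "ckpt_epoch4"]  # hypothetical paths
tok = AutoTokenizer.from_pretrained(ckpts[0])
inputs = tok("نص بلهجة عربية", return_tensors="pt")  # "text in an Arabic dialect"

probs = []
for path in ckpts:
    model = AutoModelForSequenceClassification.from_pretrained(path)
    model.eval()
    with torch.no_grad():
        probs.append(model(**inputs).logits.softmax(dim=-1))

ensemble = torch.stack(probs).mean(dim=0)  # averaged class distribution
print(ensemble.argmax(dim=-1))             # predicted dialect label
```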