no code implementations • 12 Apr 2022 • Tarik Arici, Kushal Kumar, Hayreddin Çeker, Anoop S V K K Saladi, Ismail Tutar
Our model architecture consists of two subnetworks, one per subtask: a classifier that predicts the UoM type (i.e., the question) and an extractor that retrieves the relevant quantities.
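The excerpt gives no architectural details beyond the two heads. As a rough illustration only, here is a minimal PyTorch sketch of one plausible reading: a shared encoder feeding a UoM-type classifier and a token-level quantity extractor. The BiLSTM encoder, the two-label tagging head, and all names and sizes are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class UoMModel(nn.Module):
    """Shared encoder with two heads: UoM-type classification (the
    'question') and token-level extraction of quantity spans.
    Illustrative sketch; the paper's actual architecture is unspecified here."""
    def __init__(self, vocab_size=30000, hidden=256, num_uom_types=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        # Head 1: classify the UoM type from a pooled sequence representation.
        self.classifier = nn.Linear(2 * hidden, num_uom_types)
        # Head 2: tag each token as quantity vs. other (assumed tagging scheme).
        self.extractor = nn.Linear(2 * hidden, 2)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))   # (B, T, 2*hidden)
        uom_logits = self.classifier(h.mean(dim=1))  # (B, num_uom_types)
        tag_logits = self.extractor(h)               # (B, T, 2)
        return uom_logits, tag_logits
```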
no code implementations • 24 Sep 2021 • Tarik Arici, Mehmet Saygin Seyfioglu, Tal Neiman, Yi Xu, Son Tran, Trishul Chilimbi, Belinda Zeng, Ismail Tutar
Vision-and-Language Pre-training (VLP) improves model performance for downstream tasks that require image and text inputs.
no code implementations • 1 Jan 2021 • Tarik Arici, Hayreddin Ceker, Ismail Baha Tutar
Question-answering (QA) models aim to find an answer given a question and context.
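For context on the extractive-QA formulation this sentence describes, here is a minimal sketch of the standard span-prediction setup: encode the question and context jointly, then predict start and end token positions of the answer. The encoder choice and all names are illustrative assumptions, not this paper's model.

```python
import torch
import torch.nn as nn

class ExtractiveQA(nn.Module):
    """Standard extractive-QA head: encode [question; context] as one
    token sequence, then score each token as a span start or end.
    Illustrative sketch under assumed sizes, not the paper's architecture."""
    def __init__(self, vocab_size=30000, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        self.span_head = nn.Linear(2 * hidden, 2)  # start/end logits per token

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))  # (B, T, 2*hidden)
        logits = self.span_head(h)                  # (B, T, 2)
        start_logits, end_logits = logits.unbind(dim=-1)
        return start_logits, end_logits             # each (B, T)
```

At inference, the answer span is typically read off as the highest-scoring (start, end) pair with start ≤ end.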
no code implementations • 18 Nov 2016 • Tarik Arici, Asli Celikyilmaz
In this work, we use Restricted Boltzmann Machines (RBMs) as a higher-level associative memory and learn the probability distribution of the high-level features generated by the discriminator D. The associative memory draws samples from this learned distribution, and the generator G learns to map these samples back to data space.
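As a rough illustration of the sampling step described above, here is a minimal Bernoulli RBM with block Gibbs sampling over feature vectors. The actual coupling of the RBM to D's features and to G's training in the paper is more involved; the unit counts and initialization below are assumptions.

```python
import torch

class RBM:
    """Bernoulli RBM over binary feature vectors (the 'associative memory').
    Minimal sketch: sampling only, training (e.g., contrastive divergence) omitted."""
    def __init__(self, n_visible, n_hidden):
        self.W = torch.randn(n_visible, n_hidden) * 0.01
        self.b_v = torch.zeros(n_visible)  # visible bias
        self.b_h = torch.zeros(n_hidden)   # hidden bias

    def sample_h(self, v):
        """Sample hidden units given visible units."""
        p = torch.sigmoid(v @ self.W + self.b_h)
        return torch.bernoulli(p)

    def sample_v(self, h):
        """Sample visible units given hidden units."""
        p = torch.sigmoid(h @ self.W.t() + self.b_v)
        return torch.bernoulli(p)

    def gibbs_sample(self, v0, k=10):
        """Run k steps of block Gibbs sampling starting from features v0."""
        v = v0
        for _ in range(k):
            h = self.sample_h(v)
            v = self.sample_v(h)
        return v

# The sampled feature vectors would then be handed to the generator G,
# which maps them to data space: x_fake = G(rbm.gibbs_sample(v0)).
rbm = RBM(n_visible=128, n_hidden=64)
v0 = torch.bernoulli(torch.full((8, 128), 0.5))  # random binary init
features = rbm.gibbs_sample(v0)
```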