no code implementations • ACL (RepL4NLP) 2021 • Pritish Sahu, Michael Cogswell, Ajay Divakaran, Sara Rutherford-Quach
Current pre-trained language models possess substantial knowledge, but a more limited ability to use that knowledge.
no code implementations • 29 Sep 2022 • Pritish Sahu, Michael Cogswell, Yunye Gong, Ajay Divakaran
The success of Large Language Models (LLMs) indicates they are increasingly able to answer such queries accurately, but that ability does not necessarily imply a general understanding of the concepts relevant to the anchor query.
no code implementations • 18 Jun 2022 • Pritish Sahu, Kalliopi Basioti, Vladimir Pavlovic
We present a novel computational model, "SAViR-T", for the family of visual reasoning problems embodied in the Raven's Progressive Matrices (RPM).
no code implementations • 22 Oct 2021 • Pritish Sahu, Karan Sikka, Ajay Divakaran
We also observe a drop in performance across all the models when testing on RecipeQA and the proposed Meta-RecipeQA (e.g., 83.6% versus 67.1% for HTRN), which shows that the proposed dataset is less biased.
no code implementations • 27 Sep 2021 • Pritish Sahu, Kalliopi Basioti, Vladimir Pavlovic
Computational learning approaches to solving visual reasoning tests, such as Raven's Progressive Matrices (RPM), critically depend on the ability to identify the visual concepts used in the test (i.e., the representation) as well as the latent rules based on those concepts (i.e., the reasoning).
no code implementations • 8 Jun 2021 • Pritish Sahu, Michael Cogswell, Sara Rutherford-Quach, Ajay Divakaran
Current pre-trained language models possess substantial knowledge, but a more limited ability to use that knowledge.
no code implementations • 20 Apr 2021 • Pritish Sahu, Karan Sikka, Ajay Divakaran
We then evaluate M3C using a textual cloze-style question-answering task and highlight an inherent bias in the question-answer generation method from [35] that enables a naive baseline to cheat by learning from the answer choices alone.
no code implementations • 21 Nov 2020 • Karan Sikka, Jihua Huang, Andrew Silberfarb, Prateeth Nayak, Luke Rohrer, Pritish Sahu, John Byrnes, Ajay Divakaran, Richard Rohwer
We improve zero-shot learning (ZSL) by incorporating common-sense knowledge in DNNs.
no code implementations • 26 Sep 2019 • Behnam Gholami, Pritish Sahu, Minyoung Kim, Vladimir Pavlovic
In this paper, we improve the performance of DA by introducing a discriminative discrepancy measure which takes advantage of auxiliary information available in the source and the target domains to better align the source and target distributions.
1 code implementation • ICCV 2019 • Minyoung Kim, Yuting Wang, Pritish Sahu, Vladimir Pavlovic
We propose a family of novel hierarchical Bayesian deep auto-encoder models capable of identifying disentangled factors of variability in data.
1 code implementation • CVPR 2019 • Minyoung Kim, Pritish Sahu, Behnam Gholami, Vladimir Pavlovic
The latter can be achieved by minimizing the maximum discrepancy of predictors (classifiers).
Ranked #3 on Synthetic-to-Real Translation on Syn2Real-C
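The discrepancy referred to here is, in maximum-classifier-discrepancy approaches to domain adaptation, commonly measured as the L1 distance between the class-probability outputs of two classifiers sharing one feature extractor. A minimal NumPy sketch of that measure (function names are illustrative, not from the paper's code):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def classifier_discrepancy(logits1, logits2):
    # Mean L1 distance between the two classifiers' predicted
    # class distributions on a batch of (target-domain) samples.
    p1, p2 = softmax(logits1), softmax(logits2)
    return np.abs(p1 - p2).mean()
```

In such methods the two classifiers are trained to maximize this quantity on target samples while the feature extractor is trained to minimize it, driving the learned features toward domain alignment.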
1 code implementation • 5 Feb 2019 • Minyoung Kim, Yuting Wang, Pritish Sahu, Vladimir Pavlovic
We propose a novel VAE-based deep auto-encoder model that can learn disentangled latent representations in a fully unsupervised manner, endowed with the ability to identify all meaningful sources of variation and their cardinality.
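One common heuristic for deciding which latent dimensions a trained VAE actually uses (an illustrative sketch, not necessarily this paper's procedure) is to compute the per-dimension KL divergence of the diagonal Gaussian posterior against the standard normal prior: dimensions whose KL stays near zero collapse to the prior and carry no information, while the remainder are candidate sources of variation.

```python
import numpy as np

def per_dim_kl(mu, logvar):
    # KL( N(mu, exp(logvar)) || N(0, 1) ) per latent dimension,
    # averaged over the batch (rows = samples, cols = latent dims).
    return 0.5 * (mu**2 + np.exp(logvar) - logvar - 1).mean(axis=0)

def active_dims(mu, logvar, thresh=0.01):
    # Dimensions with KL above a small threshold are "active";
    # their count estimates the cardinality of meaningful factors.
    return np.where(per_dim_kl(mu, logvar) > thresh)[0]
```

The threshold is a tuning choice; in practice active and collapsed dimensions are usually separated by orders of magnitude in KL.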
no code implementations • ICLR 2019 • Behnam Gholami, Pritish Sahu, Ognjen Rudovic, Konstantinos Bousmalis, Vladimir Pavlovic
Unsupervised domain adaptation (uDA) models focus on pairwise adaptation settings where there is a single labeled source domain and a single target domain.