no code implementations • 26 Apr 2024 • Robin Schmucker, Meng Xia, Amos Azaria, Tom Mitchell
Even though Ruffle&Riley users require more time to complete the activity, we did not find significant differences in short-term learning gains over the reading activity.
1 code implementation • 17 Apr 2024 • Yue Wu, Yewen Fan, So Yeon Min, Shrimai Prabhumoye, Stephen Mcaleer, Yonatan Bisk, Ruslan Salakhutdinov, Yuanzhi Li, Tom Mitchell
The chains of nodes can be designed to explicitly enforce a naturally structured "thought process".
no code implementations • 2 Feb 2024 • Pouya Pezeshkpour, Eser Kandogan, Nikita Bhutani, Sajjadur Rahman, Tom Mitchell, Estevam Hruschka
We present a formal definition of reasoning capacity and illustrate its utility in identifying limitations within each component of the system.
no code implementations • 26 Sep 2023 • Robin Schmucker, Meng Xia, Amos Azaria, Tom Mitchell
Conversational tutoring systems (CTSs) offer learning experiences driven by natural language interaction.
1 code implementation • 24 May 2023 • Yue Wu, Shrimai Prabhumoye, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Tom Mitchell, Yuanzhi Li
Finally, we show the potential of games as a test bed for LLMs.
no code implementations • 3 May 2023 • Yue Wu, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Yuanzhi Li, Tom Mitchell, Shrimai Prabhumoye
We propose the Plan, Eliminate, and Track (PET) framework.
1 code implementation • 26 Apr 2023 • Amos Azaria, Tom Mitchell
While Large Language Models (LLMs) have shown exceptional performance in various tasks, one of their most prominent drawbacks is generating inaccurate or false information with a confident tone.
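One line of remedy this work explores is training a lightweight classifier ("probe") on the LLM's internal hidden-state activations to predict whether a statement is true. The sketch below illustrates the idea on synthetic stand-in "hidden states" (no real LLM involved); the dimensions, data distribution, and training loop are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # hypothetical hidden-state width

# Synthetic stand-ins for LLM hidden states: "true" statements cluster along
# one direction, "false" statements along the opposite one (an illustrative
# assumption so the probe has something to find).
direction = rng.normal(size=d)
X_true = rng.normal(size=(100, d)) + direction
X_false = rng.normal(size=(100, d)) - direction
X = np.vstack([X_true, X_false])
y = np.concatenate([np.ones(100), np.zeros(100)])

# A logistic-regression probe trained by gradient descent on the hidden states.
w, b = np.zeros(d), 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)
    pred = 1 / (1 + np.exp(-z))
    w -= 0.1 * (X.T @ (pred - y)) / len(y)
    b -= 0.1 * np.mean(pred - y)

acc = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"probe accuracy: {acc:.2f}")
```

With cleanly separated synthetic activations the probe is near-perfect; the interesting empirical question the paper addresses is whether real LLM activations carry a comparable signal.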
1 code implementation • 21 Dec 2022 • Bosung Kim, Hayate Iso, Nikita Bhutani, Estevam Hruschka, Ndapa Nakashole, Tom Mitchell
We propose a novel framework, ZETT (ZEro-shot Triplet extraction by Template infilling), that aligns the task objective to the pre-training objective of generative transformers to generalize to unseen relations.
Ranked #1 on Zero-shot Relation Triplet Extraction on FewRel
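The core idea of template infilling can be sketched without a neural model: describe each relation, including unseen ones, by a natural-language template with head/tail slots, then score candidate fillings against the input sentence. In ZETT the scoring is done by a fine-tuned generative transformer; below, a plain string-similarity score stands in for the model's likelihood, and the templates, entity proposer, and threshold are all illustrative assumptions.

```python
import difflib
import itertools

# Toy relation templates with <head> and <tail> slots (illustrative texts).
TEMPLATES = {
    "capital_of": "<head> is the capital of <tail>",
    "founded_by": "<head> was founded by <tail>",
}

def candidate_spans(sentence, max_len=3):
    """Propose all short token spans as candidate entities (toy proposer)."""
    toks = sentence.split()
    return [" ".join(toks[i:j])
            for i in range(len(toks))
            for j in range(i + 1, min(i + 1 + max_len, len(toks) + 1))]

def fill_score(sentence, filled):
    """Stand-in for the generative model's likelihood of the filled template:
    plain string similarity between the sentence and the filled template."""
    return difflib.SequenceMatcher(None, sentence.lower(), filled.lower()).ratio()

def extract(sentence, relation, threshold=0.9):
    """Zero-shot triplet extraction: try every (head, tail) pair and keep the
    filling whose completed template best matches the sentence."""
    template = TEMPLATES[relation]
    best, best_score = None, 0.0
    for head, tail in itertools.permutations(candidate_spans(sentence), 2):
        filled = template.replace("<head>", head).replace("<tail>", tail)
        score = fill_score(sentence, filled)
        if score > best_score:
            best, best_score = (head, relation, tail), score
    return best if best_score >= threshold else None

print(extract("Paris is the capital of France", "capital_of"))
# → ('Paris', 'capital_of', 'France')
```

Because the relation is specified only by its template text, nothing in `extract` depends on having seen the relation during training, which is what makes the formulation zero-shot.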
no code implementations • EMNLP 2021 • Forough Arabshahi, Jennifer Lee, Antoine Bosselut, Yejin Choi, Tom Mitchell
Our reasoner uses a state-of-the-art transformer-based generative commonsense knowledge base (KB) as its source of background knowledge for reasoning.
1 code implementation • ACL 2020 • Toby Jia-Jun Li, Tom Mitchell, Brad Myers
We show SUGILITE, an intelligent task automation agent that can learn new tasks and relevant associated concepts interactively from the user's natural language instructions and demonstrations, using the graphical user interfaces (GUIs) of third-party mobile apps.
1 code implementation • 17 Jun 2020 • Forough Arabshahi, Jennifer Lee, Mikayla Gawarecki, Kathryn Mazaitis, Amos Azaria, Tom Mitchell
More precisely, we consider the problem of identifying the unstated presumptions of the speaker that allow the requested action to achieve the desired goal from the given state (perhaps elaborated by making the implicit presumptions explicit).
no code implementations • 7 Apr 2020 • Emmanouil Antonios Platanios, Maruan Al-Shedivat, Eric Xing, Tom Mitchell
Many machine learning systems today are trained on large amounts of human-annotated data.
3 code implementations • ICLR 2020 • Emmanouil Antonios Platanios, Abulhair Saparov, Tom Mitchell
Never-ending learning is a machine learning paradigm that aims to bridge this gap, with the goal of encouraging researchers to design machine learning systems that can learn to perform a wider variety of inter-related tasks in more complex environments.
no code implementations • NeurIPS 2019 • Fan Yang, Liu Leqi, Yifan Wu, Zachary C. Lipton, Pradeep Ravikumar, William W. Cohen, Tom Mitchell
The ability to infer latent psychological traits from human behavior is key to developing personalized machine learning systems that interact with humans.
no code implementations • IJCNLP 2019 • Shashank Srivastava, Igor Labutov, Tom Mitchell
Natural language has recently been explored as a new medium of supervision for training machine learning models.
2 code implementations • NeurIPS 2019 • Zhiting Hu, Bowen Tan, Ruslan Salakhutdinov, Tom Mitchell, Eric P. Xing
In this work, we propose a new method that supports learning different manipulation schemes with the same gradient-based algorithm.
1 code implementation • IJCNLP 2019 • Zhichu Lu, Forough Arabshahi, Igor Labutov, Tom Mitchell
In this paper, we propose a semantic parser that generalizes to out-of-domain examples by learning a general strategy for parsing an unseen utterance through adapting the logical forms of seen utterances, instead of learning to generate a logical form from scratch.
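The adapt-rather-than-generate strategy can be sketched with a retrieve-and-edit toy: find the most similar seen utterance, reuse its logical form, and splice in the spans where the new utterance differs. The utterances, logical forms, and token-level matching heuristic below are all illustrative assumptions, not the paper's learned model.

```python
import difflib

# A toy "memory" of seen utterances paired with their logical forms.
SEEN = [
    ("remind me to buy milk", "create_reminder(task='buy milk')"),
    ("what is the weather in boston", "get_weather(city='boston')"),
]

def parse_by_adaptation(utterance):
    """Instead of generating a logical form from scratch, retrieve the most
    similar seen utterance and adapt its logical form by substituting the
    token spans where the two utterances differ."""
    toks = utterance.split()
    _, seen, lf = max(
        (difflib.SequenceMatcher(None, u.split(), toks).ratio(), u, f)
        for u, f in SEEN)
    sm = difflib.SequenceMatcher(None, seen.split(), toks)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "replace":  # splice the differing span into the logical form
            lf = lf.replace(" ".join(seen.split()[i1:i2]),
                            " ".join(toks[j1:j2]))
    return lf

print(parse_by_adaptation("what is the weather in denver"))
# → get_weather(city='denver')
```

The payoff of this framing is generalization: an unseen utterance only needs a structurally similar neighbor among seen utterances, not full coverage by the training logical forms.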
1 code implementation • ACL 2019 • Sharmistha Jat, Hao Tang, Partha Talukdar, Tom Mitchell
To the best of our knowledge, this is the first work to show that MEG brain recordings made while reading a word in a sentence can be used to distinguish earlier words in that sentence.
no code implementations • NAACL 2019 • Dan Schwartz, Tom Mitchell
This new approach to analysis shows for the first time that all of the ERPs are predictable from embeddings of a stream of language.
no code implementations • ACL 2017 • Bishan Yang, Tom Mitchell
This paper focuses on how to take advantage of external knowledge bases (KBs) to improve recurrent neural networks for machine reading.
no code implementations • EMNLP 2018 • Igor Labutov, Bishan Yang, Tom Mitchell
As humans, we often rely on language to learn language.
no code implementations • EMNLP 2018 • Igor Labutov, Shashank Srivastava, Tom Mitchell
We present LIA, an intelligent personal assistant that can be programmed using natural language.
1 code implementation • EMNLP 2018 • Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, Tom Mitchell
We propose a simple modification to existing neural machine translation (NMT) models that enables a single universal model to translate between multiple languages while allowing for language-specific parameterization, and that can also be used for domain adaptation.
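One way to realize language-specific parameterization inside a single universal model is a parameter generator: each language gets a small embedding, and a shared generator maps that embedding to the weights of a layer. The numpy sketch below illustrates this idea under assumed toy sizes; the names, dimensions, and single-layer setup are illustrative, not the paper's full NMT architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_lang = 8, 4  # illustrative sizes

# One small trainable embedding per language; everything else is shared.
lang_emb = {"en": rng.normal(size=d_lang), "fr": rng.normal(size=d_lang)}

# The parameter generator maps a language embedding to a layer's weights,
# so one universal model yields language-specific parameterizations.
G = rng.normal(size=(d_model * d_model, d_lang)) / np.sqrt(d_lang)

def generated_layer(x, lang):
    W = (G @ lang_emb[lang]).reshape(d_model, d_model)  # generated weights
    return np.tanh(W @ x)                               # shared computation

x = rng.normal(size=d_model)
h_en = generated_layer(x, "en")
h_fr = generated_layer(x, "fr")
print(np.allclose(h_en, h_fr))  # different generated weights -> False
```

Because only `lang_emb` is per-language, adding a language (or a domain, for domain adaptation) costs `d_lang` parameters rather than a full copy of the model.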
no code implementations • ACL 2018 • Shashank Srivastava, Igor Labutov, Tom Mitchell
Humans can efficiently learn new concepts using language.
no code implementations • EMNLP 2017 • Shashank Srivastava, Igor Labutov, Tom Mitchell
Natural language constitutes a predominant medium for much of human learning and pedagogy.
no code implementations • EMNLP 2017 • Bishan Yang, Tom Mitchell
We introduce a new method for frame-semantic parsing that significantly improves the prior state of the art.
1 code implementation • NAACL 2016 • Bishan Yang, Tom Mitchell
Events and entities are closely related; entities are often actors or participants in events and events without entities are uncommon.
no code implementations • 1 Dec 2015 • Shashank Srivastava, Snigdha Chaturvedi, Tom Mitchell
In this work, we address the problem of inferring the polarity of relationships between people in narrative summaries.
no code implementations • CVPR 2015 • Xinlei Chen, Alan Ritter, Abhinav Gupta, Tom Mitchell
We present a co-clustering framework that can be used to discover multiple semantic and visual senses of a given Noun Phrase (NP).
no code implementations • 12 Apr 2014 • William Yang Wang, Kathryn Mazaitis, Ni Lao, Tom Mitchell, William W. Cohen
We show that the problem of constructing proofs for this logic is related to the computation of personalized PageRank (PPR) on a linearized version of the proof space, and, using this connection, we develop a provably correct approximate grounding scheme based on the PageRank-Nibble algorithm.
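The PageRank-Nibble algorithm referenced above computes an approximate personalized PageRank vector by local "push" operations, touching only nodes near the seed. A minimal sketch on a toy undirected graph (the graph, `alpha`, and `eps` values are illustrative):

```python
from collections import deque

def approx_ppr(graph, seed, alpha=0.15, eps=1e-4):
    """Approximate personalized PageRank via the PageRank-Nibble push
    procedure: keep an estimate p and a residual r, and repeatedly push
    mass from any node whose residual is large relative to its degree."""
    p = {u: 0.0 for u in graph}
    r = {u: 0.0 for u in graph}
    r[seed] = 1.0
    work = deque([seed])
    while work:
        u = work.popleft()
        d = len(graph[u])
        if r[u] < eps * d:
            continue
        p[u] += alpha * r[u]
        push = (1 - alpha) * r[u] / (2 * d)
        r[u] = (1 - alpha) * r[u] / 2   # lazy walk: half the mass stays put
        for v in graph[u]:
            r[v] += push
            if r[v] >= eps * len(graph[v]):
                work.append(v)
        if r[u] >= eps * d:
            work.append(u)
    return p

# Small undirected graph as adjacency lists; node 0 is the seed.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
ppr = approx_ppr(g, seed=0)
```

Each push deposits at least `alpha * eps * deg(u)` mass into `p`, so the total number of pushes is bounded by `1 / (alpha * eps)` regardless of graph size, which is what makes the grounding scheme scale: proof-space nodes far from the seed query are simply never touched.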