no code implementations • EMNLP (NLP-COVID19) 2020 • Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, Sameer Singh
The ongoing pandemic has heightened the need for tools that flag COVID-19-related misinformation on the internet, specifically on social media platforms such as Twitter.
1 code implementation • 20 Dec 2022 • Liang Ma, Shuyang Cao, Robert L. Logan IV, Di Lu, Shihao Ran, Ke Zhang, Joel Tetreault, Alejandro Jaimes
The proliferation of automatic faithfulness metrics for summarization has produced a need for benchmarks to evaluate them.
1 code implementation • 19 Oct 2022 • Zhaofeng Wu, Robert L. Logan IV, Pete Walsh, Akshita Bhagia, Dirk Groeneveld, Sameer Singh, Iz Beltagy
We demonstrate that a simple recipe, continued pretraining that incorporates a trainable prompt during multi-task learning, leads to improved promptability in both zero- and few-shot settings compared to existing methods, up to 31% relative.
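The core idea above — keeping the backbone frozen while a prepended prompt is the only trainable component — can be illustrated with a toy sketch. This is a hypothetical, minimal setup (fixed weights, hand-written gradient step), not the paper's code or model:

```python
# Toy illustration of training a soft prompt against a frozen model
# (hypothetical setup, not the paper's code): the "model" scores the mean
# of its input embeddings with frozen weights; only the prepended prompt
# vector is updated, mirroring prompt tuning.

FROZEN_W = [0.5, -0.2, 0.8]          # frozen model weights (never updated)
token_emb = [[1.0, 0.0, 1.0],        # fixed embeddings of the "input tokens"
             [0.0, 1.0, 0.0]]
prompt = [0.0, 0.0, 0.0]             # trainable soft prompt (the only free params)
target = 2.0
lr = 0.1

def forward(prompt):
    seq = [prompt] + token_emb
    mean = [sum(dim) / len(seq) for dim in zip(*seq)]
    return sum(w * m for w, m in zip(FROZEN_W, mean))

losses = []
for _ in range(200):
    y = forward(prompt)
    err = y - target
    losses.append(err * err)
    # d(loss)/d(prompt_j) = 2 * err * FROZEN_W[j] / seq_len; update prompt only
    prompt = [p - lr * 2 * err * w / 3 for p, w in zip(prompt, FROZEN_W)]

print(f"initial loss {losses[0]:.3f}, final loss {losses[-1]:.6f}")
```

Because the backbone is never touched, the same frozen model could serve many tasks, each with its own small learned prompt — the property that makes promptability worth optimizing for during pretraining.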
no code implementations • 15 Feb 2022 • Yasaman Razeghi, Robert L. Logan IV, Matt Gardner, Sameer Singh
Pretrained Language Models (LMs) have demonstrated the ability to perform numerical reasoning by extrapolating from a few examples in few-shot settings.
no code implementations • NAACL 2022 • Robert L. Logan IV, Alexandre Passos, Sameer Singh, Ming-Wei Chang
Textual knowledge bases such as Wikipedia require considerable effort to keep up to date and consistent.
2 code implementations • Findings (ACL) 2022 • Robert L. Logan IV, Ivana Balažević, Eric Wallace, Fabio Petroni, Sameer Singh, Sebastian Riedel
Prompting language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning.
3 code implementations • EMNLP 2020 • Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, Sameer Singh
The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining.
no code implementations • EMNLP 2020 • Qiang Ning, Hao Wu, Pradeep Dasigi, Dheeru Dua, Matt Gardner, Robert L. Logan IV, Ana Marasović, Zhen Nie
High-quality and large-scale data are key to success for AI systems.
no code implementations • ACL 2020 • Robert L. Logan IV, Matt Gardner, Sameer Singh
In addition, we elucidate subtle differences in how importance sampling is applied in these works that can have substantial effects on the final estimates, as well as provide theoretical results which reinforce the validity of this technique.
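For readers unfamiliar with the technique under discussion, a generic importance-sampling estimator looks like the following. This is a minimal illustrative sketch (estimating E_p[x²] for p = N(0, 1) from samples drawn under a shifted proposal q = N(1, 1)), not the specific estimator analyzed in the paper:

```python
# Minimal importance-sampling sketch: estimate an expectation under a
# target distribution p using samples from a proposal q, reweighting
# each sample by the density ratio p(x)/q(x).
import math
import random

random.seed(0)

def p_over_q(x):
    # For p = N(0, 1) and q = N(1, 1), the ratio simplifies to exp(0.5 - x).
    return math.exp(0.5 - x)

n = 200_000
samples = [random.gauss(1.0, 1.0) for _ in range(n)]   # draw from q
estimate = sum(p_over_q(x) * x * x for x in samples) / n

print(f"importance-sampling estimate of E_p[x^2]: {estimate:.3f} (exact: 1.0)")
```

How the weights p/q are computed — and whether they are normalized — is exactly the kind of subtle choice that, as the abstract notes, can have substantial effects on the final estimates.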
1 code implementation • 16 Feb 2020 • Disi Ji, Robert L. Logan IV, Padhraic Smyth, Mark Steyvers
Recent advances in machine learning have led to increased deployment of black-box classifiers across a wide variety of applications.
1 code implementation • IJCNLP 2019 • Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith
Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities.
Ranked #9 on Relation Classification on TACRED
1 code implementation • 17 Jun 2019 • Robert L. Logan IV, Nelson F. Liu, Matthew E. Peters, Matt Gardner, Sameer Singh
Modeling human language requires the ability to not only generate fluent text but also encode factual knowledge.
no code implementations • NAACL 2019 • Jun Seok Kang, Robert L. Logan IV, Zewei Chu, Yang Chen, Dheeru Dua, Kevin Gimpel, Sameer Singh, Niranjan Balasubramanian
Given a sentence about a target entity, the task is to automatically generate a post-modifier phrase that provides contextually relevant information about the entity.
1 code implementation • 29 Nov 2017 • Robert L. Logan IV, Samuel Humeau, Sameer Singh
The broad goal of information extraction is to derive structured information from unstructured data.