RTE (Recognizing Textual Entailment)
27 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Finetuned Language Models Are Zero-Shot Learners
We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks.
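Instruction tuning works on data formatted as natural-language task descriptions. As an illustration only (the exact prompt template is an assumption, not the paper's), an RTE example can be cast as an instruction like this:

```python
# Hypothetical sketch: formatting an RTE pair as a natural-language
# instruction, the kind of input instruction tuning trains on.
def to_instruction(premise: str, hypothesis: str) -> str:
    """Render a premise/hypothesis pair as a zero-shot instruction prompt."""
    return (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Does the premise entail the hypothesis? Answer yes or no."
    )

prompt = to_instruction(
    "A dog is running through a field.",
    "An animal is outdoors.",
)
print(prompt)
```

A model finetuned on many tasks phrased this way can then be queried zero-shot on unseen tasks using the same instruction format.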
Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets
Hence, we also contribute a new, large Swedish bias-labelled dataset (of 2 million samples), translated from the English version, and train the SotA mT5 model on it.
Representing Meaning with a Combination of Logical and Distributional Models
In this paper, we focus on the three components of a practical system integrating logical and distributional models: 1) Parsing and task representation is the logic-based part where input problems are represented in probabilistic logic.
Reset-free Trial-and-Error Learning for Robot Damage Recovery
However, the best RL algorithms for robotics require the robot and the environment to be reset to an initial state after each episode, that is, the robot is not learning autonomously.
Acquisition of Phrase Correspondences using Natural Deduction Proofs
How to identify, extract, and use phrasal knowledge is a crucial problem for the task of Recognizing Textual Entailment (RTE).
End-Task Oriented Textual Entailment via Deep Explorations of Inter-Sentence Interactions
This work deals with SciTail, a natural entailment challenge derived from a multi-choice question answering problem.
Combining Axiom Injection and Knowledge Base Completion for Efficient Natural Language Inference
In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data.
Adaptive Prior Selection for Repertoire-based Online Adaptation in Robotics
Repertoire-based learning is a data-efficient adaptation approach based on a two-step process in which (1) a large and diverse set of policies is learned in simulation, and (2) a planning or learning algorithm chooses the most appropriate policies according to the current situation (e.g., a damaged robot, a new object, etc.).
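Step (2) of this process can be sketched as a nearest-neighbour lookup over the repertoire's predicted behaviours. The policy names, behaviour descriptors, and selection rule below are illustrative assumptions, not the paper's actual algorithm:

```python
# Minimal sketch of repertoire-based selection: pick the policy whose
# predicted behaviour descriptor is closest to the current goal.
import math

# Policy id -> predicted (forward speed, turn rate) from simulation.
repertoire = {
    "gait_a": (0.30, 0.00),
    "gait_b": (0.25, 0.10),
    "gait_c": (0.10, 0.30),
}

def closest_policy(goal, repertoire):
    """Return the policy whose descriptor has the smallest Euclidean
    distance to the goal descriptor."""
    return min(repertoire, key=lambda pid: math.dist(repertoire[pid], goal))

print(closest_policy((0.28, 0.05), repertoire))  # prints "gait_a"
```

In practice the selection step also updates its predictions online (e.g., via Bayesian optimisation) as the robot observes how each chosen policy actually performs on the damaged hardware.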
Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference
We observe that people usually use some discourse markers such as "so" or "but" to represent the logical relationship between two sentences.
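The observation above suggests that sentence pairs joined by such markers carry weak labels for their logical relation. A minimal sketch of harvesting those pairs, with an assumed (not the paper's) marker-to-relation mapping:

```python
# Hypothetical sketch: splitting sentences on discourse markers to get
# weakly labelled premise/hypothesis pairs for NLI-style training.
MARKERS = {
    "so": "entailment-like",      # assumed mapping, for illustration
    "but": "contradiction-like",  # assumed mapping, for illustration
}

def split_on_marker(sentence):
    """Return (left clause, marker, right clause, relation) if the
    sentence contains a known discourse marker, else None."""
    for marker, relation in MARKERS.items():
        sep = f", {marker} "
        if sep in sentence:
            left, right = sentence.split(sep, 1)
            return left.strip(), marker, right.strip(), relation
    return None

print(split_on_marker("It was raining, so the game was cancelled."))
```

Pairs extracted this way could then serve as auxiliary supervision alongside a labelled NLI corpus.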
Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking
We show on an entity linking benchmark that (i) this model improves the entity representations over plain BERT, (ii) that it outperforms entity linking architectures that optimize the tasks separately and (iii) that it only comes second to the current state-of-the-art that does mention detection and entity disambiguation jointly.