no code implementations • 23 Jun 2023 • Ohad Rubin, Jonathan Berant
We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM.
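The scoring idea above can be sketched in a few lines: a candidate chunk is scored by how much it raises a reference LM's log-probability of the gold next chunk. The sketch below is illustrative only; `toy_logprob` is a hypothetical stand-in for a real reference LM, not the paper's implementation.

```python
import math

def score_chunk(candidate_chunk, next_chunk, ref_lm_logprob):
    """Score a candidate retrieved chunk by how much it increases the
    reference LM's log-probability of the gold next chunk."""
    with_chunk = ref_lm_logprob(next_chunk, context=candidate_chunk)
    without_chunk = ref_lm_logprob(next_chunk, context="")
    return with_chunk - without_chunk

# Hypothetical stand-in for a reference LM: crudely rewards contexts
# that share words with the target text.
def toy_logprob(text, context):
    overlap = len(set(text.split()) & set(context.split()))
    return math.log(1 + overlap) - len(text.split())

candidates = ["the cat sat on the mat", "stock prices fell sharply"]
next_chunk = "the cat then chased a mouse"
best = max(candidates, key=lambda c: score_chunk(c, next_chunk, toy_logprob))
print(best)  # the topically related chunk wins
```

In training, scores like these would serve as supervision signals for the retriever, so that it learns to prefer chunks the reference LM finds helpful.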
no code implementations • 25 May 2022 • Samuel Joseph Amouyal, Tomer Wolfson, Ohad Rubin, Ori Yoran, Jonathan Herzig, Jonathan Berant
Our results highlight the need for developing ODQA models that handle a broad range of question types, including single and multi-answer questions.
2 code implementations • NAACL 2022 • Ohad Rubin, Jonathan Herzig, Jonathan Berant
In-context learning is a recent paradigm in natural language understanding, where a large pre-trained language model (LM) observes a test instance and a few training examples as its input, and directly decodes the output without any update to its parameters.
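The in-context learning setup described above can be sketched as simple prompt construction: a few demonstrations are concatenated with the test instance, and a frozen LM decodes the continuation. The format below (the "Input:/Output:" template and function name) is a hypothetical illustration, not the paper's exact prompt.

```python
def build_icl_prompt(train_examples, test_input):
    """Concatenate a few (input, output) demonstrations with the test
    instance so a frozen LM can decode the answer with no weight updates."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in train_examples]
    blocks.append(f"Input: {test_input}\nOutput:")
    return "\n\n".join(blocks)

demos = [("great movie!", "positive"), ("terrible plot", "negative")]
prompt = build_icl_prompt(demos, "loved every minute")
print(prompt)
```

The retrieval question the paper studies is which training examples to place in `train_examples` for a given test instance, since demonstration choice strongly affects the decoded output.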
1 code implementation • ACL (spnlp) 2021 • Ohad Rubin, Jonathan Berant
We apply SmBoP on Spider, a challenging zero-shot semantic parsing benchmark, and show that SmBoP leads to a 2.2x speed-up in decoding time and a ~5x speed-up in training time, compared to a semantic parser that uses autoregressive decoding.