1 code implementation • 14 Mar 2024 • Akhil Kedia, Mohd Abbas Zaidi, Sushil Khyalia, Jungho Jung, Harshith Goka, Haejun Lee
In spite of their huge success, transformer models remain difficult to scale in depth.
no code implementations • 15 Jun 2023 • Björn Bebensee, Haejun Lee
We demonstrate the effectiveness of our model on the Schema-Guided Dialogue (SGD) and MultiWOZ datasets.
no code implementations • 18 Nov 2022 • Akhil Kedia, Mohd Abbas Zaidi, Haejun Lee
Using our proposed method, we outperform the current state-of-the-art method by $2.5$ Exact Match score on the Natural Questions dataset while using only $25\%$ of the parameters and $35\%$ of the latency during inference, and by $4.4$ Exact Match on the WebQuestions dataset.
Ranked #1 on Question Answering on WebQuestions (using extra training data)
no code implementations • 14 Dec 2021 • Haejun Lee, Akhil Kedia, Jongwon Lee, Ashwin Paranjape, Christopher D. Manning, Kyoung-Gu Woo
Recent approaches to Open-domain Question Answering refer to an external knowledge base using a retriever model, optionally rerank passages with a separate reranker model, and generate an answer using another reader model.
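A minimal sketch of such a retrieve, rerank, and read pipeline; the lexical-overlap scorers and the reader below are toy placeholders for illustration, not the models used in the paper.

```python
# Toy illustration of a retrieve -> rerank -> read pipeline for open-domain QA.
# Simple word-overlap scorers stand in for learned retriever/reranker/reader models.

def retrieve(question, knowledge_base, k=3):
    """Return the k passages sharing the most words with the question."""
    q = set(question.lower().split())
    ranked = sorted(knowledge_base, key=lambda p: -len(q & set(p.lower().split())))
    return ranked[:k]

def rerank(question, passages):
    """Reorder retrieved passages with a (toy) finer-grained relevance score."""
    q = question.lower().split()
    return sorted(passages, key=lambda p: -sum(p.lower().count(t) for t in q))

def read(question, passages):
    """Placeholder reader: answer with the top-ranked passage."""
    return passages[0] if passages else ""

kb = [
    "Paris is the capital of France.",
    "Seoul is the capital of South Korea.",
    "Transformers are sequence-to-sequence models.",
]
question = "What is the capital of France?"
print(read(question, rerank(question, retrieve(question, kb))))
```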
no code implementations • 1 Jan 2021 • Seohyun Back, Akhil Kedia, Sai Chetan Chinthakindi, Haejun Lee, Jaegul Choo
We evaluate our method against existing ones in terms of the quality of the generated questions, as well as the accuracy of an MRC model fine-tuned on the data synthetically generated by our method.
Ranked #3 on Question Generation on SQuAD1.1 (using extra training data)
no code implementations • EMNLP 2020 • Haejun Lee, Drew A. Hudson, Kangwook Lee, Christopher D. Manning
We introduce Sentence-level Language Modeling, a new pre-training objective for learning a discourse language representation in a fully self-supervised manner.
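As a toy illustration of how a sentence-level, fully self-supervised objective can be constructed, the snippet below builds training examples in which shuffled sentences must be restored to their original order; this formulation is assumed for illustration and may differ from the paper's exact objective.

```python
import random

def make_sentence_order_example(document_sentences, seed=0):
    """Shuffle a document's sentences and record the permutation as the target.

    Assumed, simplified formulation of a sentence-level self-supervised objective:
    the model sees the shuffled sentences and must recover their original positions.
    """
    rng = random.Random(seed)
    order = list(range(len(document_sentences)))
    rng.shuffle(order)
    shuffled = [document_sentences[i] for i in order]
    # Target: for each position in the shuffled input, its index in the original document.
    return {"input_sentences": shuffled, "original_positions": order}

doc = ["She opened the door.", "The room was dark.", "A cat darted past her feet."]
print(make_sentence_order_example(doc))
```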
1 code implementation • EMNLP 2021 • Peng Qi, Haejun Lee, Oghenetegiri "TG" Sido, Christopher D. Manning
We develop a unified system to answer open-domain questions directly from text, even when they require a varying number of retrieval steps.
Ranked #9 on Question Answering on HotpotQA
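A rough sketch of answering with a varying number of retrieval steps: keep retrieving and accumulating evidence until a stopping criterion decides enough has been gathered. The retriever and stopping check below are illustrative placeholders, not the system from the paper.

```python
# Illustrative loop for open-domain QA with a varying number of retrieval hops.
# retrieve/can_answer are toy stand-ins for learned components.

def retrieve(query, corpus):
    """Toy retriever: return the passage with the most word overlap with the query."""
    q = set(query.lower().split())
    return max(corpus, key=lambda p: len(q & set(p.lower().split())))

def can_answer(question, evidence):
    """Toy stopping criterion: stop once enough question words appear in the evidence."""
    q = set(question.lower().split())
    e = set(" ".join(evidence).lower().split())
    return len(q & e) >= len(q) // 2

def answer(question, corpus, max_hops=4):
    evidence, query = [], question
    for _ in range(max_hops):
        passage = retrieve(query, corpus)
        if passage not in evidence:
            evidence.append(passage)
        if can_answer(question, evidence):
            break
        # Expand the query with the newly found evidence for the next hop.
        query = question + " " + passage
    return evidence  # a real reader would extract or generate the answer from this

corpus = [
    "Ernest Hemingway wrote The Old Man and the Sea.",
    "The Old Man and the Sea won the Pulitzer Prize in 1953.",
]
print(answer("Which prize did the author of The Old Man and the Sea win?", corpus))
```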
no code implementations • ICLR 2020 • Seohyun Back, Sai Chetan Chinthakindi, Akhil Kedia, Haejun Lee, Jaegul Choo
Real-world question answering systems often retrieve documents potentially relevant to a given question through a keyword search, followed by a machine reading comprehension (MRC) step to find the exact answer within them.
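A small sketch of that two-stage setup, keyword-based document retrieval followed by a reading step; the inverted index and the answer extractor below are simplistic placeholders, not the paper's models.

```python
from collections import defaultdict

def build_index(documents):
    """Toy keyword search: an inverted index from words to the documents containing them."""
    index = defaultdict(set)
    for doc_id, text in enumerate(documents):
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def keyword_search(question, index, documents, k=2):
    """Rank documents by how many distinct question words they contain."""
    hits = defaultdict(int)
    for word in set(question.lower().split()):
        for doc_id in index.get(word, ()):
            hits[doc_id] += 1
    ranked = sorted(hits, key=hits.get, reverse=True)[:k]
    return [documents[i] for i in ranked]

def extract_answer(question, documents):
    """Placeholder MRC step: return the first sentence of the best-matching document."""
    return documents[0].split(".")[0] + "." if documents else ""

docs = ["Mount Everest is the highest mountain on Earth. It lies in the Himalayas.",
        "The Nile is the longest river in Africa."]
index = build_index(docs)
question = "What is the highest mountain?"
print(extract_answer(question, keyword_search(question, index, docs)))
```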
no code implementations • 25 Sep 2019 • Akhil Kedia, Sai Chetan Chinthakindi, Seohyun Back, Haejun Lee, Jaegul Choo
We evaluate the question generation capability of our method by comparing its BLEU score with existing methods, and we further test it by fine-tuning the MRC model on the downstream MRC data after training on the synthetic data.
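For reference, a minimal example of scoring a generated question against a reference with BLEU using NLTK; this only illustrates the metric, not the paper's evaluation code.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "what year did the war end".split()
generated = "in what year did the war end".split()

# BLEU compares n-gram overlap between the generated question and the reference.
score = sentence_bleu([reference], generated,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```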
1 code implementation • COLING 2018 • Seunghak Yu, Nilesh Kulkarni, Haejun Lee, Jihie Kim
Recent developments in deep learning applied to language modeling have led to success in text-processing tasks such as summarization and machine translation.
no code implementations • WS 2018 • Seunghak Yu, Sathish Reddy Indurthi, Seohyun Back, Haejun Lee
Reading Comprehension (RC) of text is one of the fundamental tasks in natural language processing.
Ranked #69 on Question Answering on SQuAD1.1
no code implementations • WS 2017 • Seunghak Yu, Nilesh Kulkarni, Haejun Lee, Jihie Kim
Language models for agglutinative languages have historically been hindered by the myriad word forms that can be produced from any given root through various affixes.
1 code implementation • 6 Jul 2017 • Seunghak Yu, Nilesh Kulkarni, Haejun Lee, Jihie Kim
Recent developments in deep learning applied to language modeling have led to success in text-processing tasks such as summarization and machine translation.