no code implementations • 16 May 2024 • Yuwei Wan, Aswathy Ajith, Yixuan Liu, Ke Lu, Clara Grazian, Bram Hoex, Wenjie Zhang, Chunyu Kit, Tong Xie, Ian Foster
The use of question-answer (QA) pairs for training and evaluating large language models (LLMs) has attracted considerable attention.

1 code implementation • 25 Oct 2023 • Mansi Sakarvadia, Arham Khan, Aswathy Ajith, Daniel Grzenda, Nathaniel Hudson, André Bauer, Kyle Chard, Ian Foster
Transformer-based Large Language Models (LLMs) are the state of the art for natural language tasks.

1 code implementation • 11 Sep 2023 • Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Daniel Grzenda, Nathaniel Hudson, André Bauer, Kyle Chard, Ian Foster
Answering multi-hop reasoning questions requires retrieving and synthesizing information from diverse sources.

no code implementations • 23 May 2022 • Zhi Hong, Aswathy Ajith, Gregory Pauloski, Eamon Duede, Kyle Chard, Ian Foster
Transformer-based masked language models such as BERT, trained on general corpora, have shown impressive performance on downstream tasks.