no code implementations • 1 Apr 2024 • Griffin Adams
We improve both coverage and faithfulness by performing sentence-level entity planning based on a set of pre-computed salient entities from the source text, extending our work on entity-guided news summarization [ACL 2023; EMNLP 2023].
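A minimal sketch of what sentence-level entity planning could look like; the plan format, the function names, and the prompt wording are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of sentence-level entity planning (not the paper's code).
# Assumption: salient entities are pre-computed upstream and passed in here.

def build_sentence_plans(salient_entities, sentences_per_summary=4, entities_per_sentence=2):
    """Partition the salient-entity set into per-sentence content plans."""
    entities = sorted(salient_entities)
    plans = [
        entities[i:i + entities_per_sentence]
        for i in range(0, len(entities), entities_per_sentence)
    ][:sentences_per_summary]
    return plans

def plan_guided_prompt(source_text, plans):
    """Render the plans as lightweight control codes prepended to the source."""
    plan_str = " ".join(
        f"<sent{i}> {', '.join(p)}" for i, p in enumerate(plans, start=1)
    )
    return f"Entity plan: {plan_str}\n\nSource: {source_text}\n\nSummary:"

plans = build_sentence_plans({"aspirin", "CHF", "metoprolol", "echocardiogram"})
print(plan_guided_prompt("Patient admitted with acute CHF ...", plans))
```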
no code implementations • 4 Jan 2024 • Griffin Adams, Jason Zucker, Noémie Elhadad
To increase entity coverage, we train a smaller, encoder-only model to predict salient entities, which are treated as content-plans to guide the LLM.
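One plausible reading of the salience predictor is a binary classifier that scores each candidate entity against the source; the model choice and pairwise input format below are assumptions, not the released model.

```python
# Sketch: binary salience classifier over candidate entities (assumed design).
# Each (source, entity) pair is scored independently by an encoder-only model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-uncased"  # stand-in for the smaller encoder-only model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def salient_entities(source, candidates, threshold=0.5):
    """Keep candidates whose predicted salience probability exceeds threshold."""
    inputs = tokenizer(
        [source] * len(candidates), candidates,
        truncation=True, padding=True, return_tensors="pt",
    )
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[:, 1]
    return [e for e, p in zip(candidates, probs.tolist()) if p > threshold]

# The surviving entities then serve as the content plan handed to the LLM.
plan = salient_entities("Patient admitted with CHF ...", ["CHF", "aspirin", "x-ray"])
```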
no code implementations • 8 Sep 2023 • Griffin Adams, Alexander Fabbri, Faisal Ladhak, Eric Lehman, Noémie Elhadad
We conduct a human preference study on 100 CNN DailyMail articles and find that humans prefer GPT-4 summaries that are more dense than those generated by a vanilla prompt and almost as dense as human-written summaries.
1 code implementation • 28 May 2023 • Griffin Adams, Alexander R. Fabbri, Faisal Ladhak, Kathleen McKeown, Noémie Elhadad
Similarly, on 1k samples from CNN/DM, we show that prompting GPT-3 to follow EDU plans outperforms sampling-based methods by 1.05 ROUGE-2 F1 points.
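A sketch of how a plan-following prompt might be assembled from extracted elementary discourse units (EDUs); the template wording is an assumption, and `complete` stands in for any LLM completion call rather than a specific API.

```python
# Sketch of plan-guided prompting. The prompt template is an assumption, not
# the paper's exact wording; `complete` is any text-in, text-out LLM call.

def edu_plan_prompt(article, edus):
    """Ask the model to cover a fixed, ordered list of EDUs."""
    plan = "\n".join(f"{i}. {edu}" for i, edu in enumerate(edus, start=1))
    return (
        f"Article:\n{article}\n\n"
        f"Write a summary that covers each content unit below, in order:\n"
        f"{plan}\n\nSummary:"
    )

def summarize_with_plan(article, edus, complete):
    return complete(edu_plan_prompt(article, edus))
```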
1 code implementation • 12 May 2023 • Griffin Adams, Bichlien H Nguyen, Jake Smith, Yingce Xia, Shufang Xie, Anna Ostropolets, Budhaditya Deb, Yuan-Jyue Chen, Tristan Naumann, Noémie Elhadad
Summarization models often generate text that is poorly calibrated to quality metrics because they are trained to maximize the likelihood of a single reference (MLE).
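Calibration-style objectives typically rank several candidate summaries by an external quality metric and push the model's likelihoods to agree with that ranking; the pairwise margin form below is a sketch in that spirit, not the paper's exact objective.

```python
# Minimal sketch of a rank-based calibration loss: candidates are ordered by a
# quality metric and the model is nudged to score better candidates higher.
import torch

def calibration_loss(candidate_logprobs, metric_scores, margin=0.01):
    """candidate_logprobs: (n,) model log-probs for n candidate summaries.
    metric_scores: (n,) external quality scores (e.g. ROUGE) for the same candidates."""
    order = torch.argsort(metric_scores, descending=True)
    lp = candidate_logprobs[order]
    loss = torch.tensor(0.0)
    for i in range(len(lp)):
        for j in range(i + 1, len(lp)):
            # candidate i (better by the metric) should out-score candidate j
            # by a margin that grows with their rank distance
            loss = loss + torch.clamp(margin * (j - i) - (lp[i] - lp[j]), min=0.0)
    return loss

loss = calibration_loss(torch.tensor([-1.2, -0.8, -2.0]),
                        torch.tensor([0.45, 0.30, 0.25]))
```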
no code implementations • 7 Mar 2023 • Griffin Adams, Jason Zucker, Noémie Elhadad
To better understand the limitations of abstractive systems, as well as the suitability of existing evaluation metrics, we benchmark faithfulness metrics against fine-grained human annotations for model-generated summaries of a patient's Brief Hospital Course.
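Benchmarking metrics against human judgments usually comes down to correlating metric scores with the annotations; a small illustration with made-up numbers, not data from the paper.

```python
# Sketch of metric meta-evaluation: correlate automatic faithfulness scores
# with human annotations. The values below are illustrative only.
from scipy.stats import pearsonr, spearmanr

human = [1.0, 0.5, 0.0, 1.0, 0.5]        # fine-grained faithfulness labels
metric = [0.91, 0.40, 0.22, 0.87, 0.55]  # scores from one automatic metric

print("Pearson:", pearsonr(human, metric)[0])
print("Spearman:", spearmanr(human, metric)[0])
```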
1 code implementation • 13 Apr 2022 • Griffin Adams, Han-Chin Shing, Qing Sun, Christopher Winestock, Kathleen McKeown, Noémie Elhadad
In real-world scenarios with naturally occurring datasets, reference summaries are noisy and may contain information that cannot be inferred from the source text.
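One common proxy for detecting such noise is to flag reference sentences that the source does not entail, using an off-the-shelf NLI model; this is a hedged sketch of that detection step only (the model choice and threshold are assumptions), whereas the paper itself goes further and learns to revise references.

```python
# Sketch: flag reference sentences the source does not entail, as a proxy for
# unsupported reference content. NLI model and threshold are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NLI = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(NLI)
nli = AutoModelForSequenceClassification.from_pretrained(NLI)

def unsupported_sentences(source, reference_sentences, threshold=0.5):
    flagged = []
    for sent in reference_sentences:
        inputs = tok(source, sent, truncation=True, return_tensors="pt")
        with torch.no_grad():
            probs = nli(**inputs).logits.softmax(dim=-1)[0]
        entail = probs[nli.config.label2id["ENTAILMENT"]].item()
        if entail < threshold:
            flagged.append(sent)
    return flagged
```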
no code implementations • NAACL 2021 • Griffin Adams, Emily Alsentzer, Mert Ketenci, Jason Zucker, Noémie Elhadad
Summarization of clinical narratives is a long-standing research problem.
1 code implementation • 8 Dec 2020 • Griffin Adams, Sarguna Janani Padmanabhan, Shivang Shekhar
We address two major challenges of implicit coordination in multi-agent deep reinforcement learning, non-stationarity and the exponential growth of the state-action space, by combining Deep Q-Networks for policy learning with Nash equilibria for action selection.
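For a two-agent stage game, equilibrium action selection can be computed from the joint Q-values; the zero-sum assumption and the linear-programming maximin formulation below are mine, and the paper may use a different solver.

```python
# Sketch: Nash action selection for a two-agent, zero-sum stage game built
# from joint Q-values. Zero-sum and the LP formulation are assumptions.
import numpy as np
from scipy.optimize import linprog

def nash_policy(Q):
    """Maximin mixed strategy for the row player of zero-sum payoff matrix Q (m x n).
    LP: maximize v  s.t.  (Q^T x)_j >= v for all j,  sum(x) = 1,  x >= 0."""
    m, n = Q.shape
    c = np.zeros(m + 1); c[-1] = -1.0                # variables [x_1..x_m, v]; minimize -v
    A_ub = np.hstack([-Q.T, np.ones((n, 1))])        # v - (Q^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m]

# Joint Q-values from the learned DQN for the current state (illustrative numbers).
Q = np.array([[1.0, -1.0], [-0.5, 0.5]])
pi = np.clip(nash_policy(Q), 0.0, None)
action = np.random.choice(len(pi), p=pi / pi.sum())
```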
1 code implementation • 29 Sep 2020 • Griffin Adams, Mert Ketenci, Shreyas Bhave, Adler Perotte, Noémie Elhadad
We introduce Latent Meaning Cells, a deep latent variable model which learns contextualized representations of words by combining local lexical context and metadata.
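A caricature of the core idea, a word's latent meaning as a distribution whose parameters fuse a word embedding with a metadata embedding; the dimensions, fusion layers, and Gaussian form below are assumptions, not the released model.

```python
# Caricature of the LMC idea: a word's latent meaning is a Gaussian whose
# parameters combine a word embedding with a metadata (e.g. note-section)
# embedding. Architecture details are assumptions, not the released code.
import torch
import torch.nn as nn

class LatentMeaningEncoder(nn.Module):
    def __init__(self, vocab_size, num_metadata, dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.meta_emb = nn.Embedding(num_metadata, dim)
        self.to_mu = nn.Linear(2 * dim, dim)
        self.to_logvar = nn.Linear(2 * dim, dim)

    def forward(self, word_ids, metadata_ids):
        h = torch.cat([self.word_emb(word_ids), self.meta_emb(metadata_ids)], dim=-1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return z, mu, logvar

enc = LatentMeaningEncoder(vocab_size=10_000, num_metadata=20)
z, mu, logvar = enc(torch.tensor([42]), torch.tensor([3]))
```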
no code implementations • 30 Nov 2018 • Monica Agrawal, Griffin Adams, Nathan Nussbaum, Benjamin Birnbaum
In this work, we present TIFTI (Temporally Integrated Framework for Treatment Intervals), a robust framework for extracting oral drug treatment intervals from a patient's unstructured notes.
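A hedged sketch of the interval-building step: dated drug mentions are merged into intervals, and an interval closes when the gap between consecutive mentions exceeds a threshold. The gap rule and threshold are illustrative assumptions, not TIFTI's actual logic.

```python
# Sketch: merge dated drug mentions into treatment intervals, closing an
# interval when the gap exceeds `max_gap`. The gap rule is an assumption.
from datetime import date, timedelta

def treatment_intervals(mention_dates, max_gap=timedelta(days=90)):
    """mention_dates: dates on which notes mention the drug being taken."""
    dates = sorted(mention_dates)
    if not dates:
        return []
    intervals, start, prev = [], dates[0], dates[0]
    for d in dates[1:]:
        if d - prev > max_gap:
            intervals.append((start, prev))
            start = d
        prev = d
    intervals.append((start, prev))
    return intervals

print(treatment_intervals([date(2018, 1, 5), date(2018, 2, 1), date(2018, 9, 20)]))
```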