1 code implementation • NeurIPS 2023 • Richard Antonello, Aditya Vaidya, Alexander G. Huth
Representations from transformer-based unidirectional language models are known to be effective at predicting brain responses to natural language.
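Predicting brain responses from language-model representations is typically done with a voxelwise linear encoding model. The sketch below is a minimal, hypothetical illustration of that setup (not the paper's actual pipeline): simulated LM-derived features are mapped to simulated fMRI voxel responses with ridge regression, and prediction quality is scored per voxel by correlation. All shapes and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: LM hidden states for T time points (one per fMRI
# acquisition) and responses for V voxels. Real data would replace these.
T, D, V = 200, 64, 10
features = rng.standard_normal((T, D))          # LM-derived stimulus features
weights_true = rng.standard_normal((D, V))
responses = features @ weights_true + 0.1 * rng.standard_normal((T, V))

# Ridge regression (L2-regularized least squares), the standard estimator
# for voxelwise encoding models: W = (X'X + aI)^{-1} X'Y.
alpha = 1.0
W = np.linalg.solve(features.T @ features + alpha * np.eye(D),
                    features.T @ responses)

# Encoding performance is commonly reported as per-voxel correlation
# between predicted and measured responses (held-out data in practice;
# training data here, purely for the sketch).
pred = features @ W
r = np.array([np.corrcoef(pred[:, v], responses[:, v])[0, 1]
              for v in range(V)])
print(r.mean())
```

In practice the regularization strength would be tuned per voxel by cross-validation and performance measured on held-out stimuli.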
2 code implementations • 17 May 2023 • Chandan Singh, Aliyah R. Hsu, Richard Antonello, Shailee Jain, Alexander G. Huth, Bin Yu, Jianfeng Gao
Here, we ask whether we can automatically obtain natural language explanations for black box text modules.
no code implementations • ACL 2021 • Richard Antonello, Nicole Beckage, Javier Turek, Alexander Huth
Here we present a general fine-tuning method, which we call information gain filtration, that improves the overall training efficiency and final performance of language model fine-tuning.
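The core idea of a filtration-style approach can be sketched as selecting candidate fine-tuning examples by a predicted usefulness score before training. The following is a toy illustration under that assumption, not the paper's implementation: the per-example "information gain" scores are simulated here, where the actual method would estimate them with a learned secondary model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each candidate fine-tuning example carries a
# predicted "information gain" score (simulated below; in practice a
# small secondary model would predict it from the example itself).
n_examples = 1000
predicted_gain = rng.normal(size=n_examples)

def information_gain_filtration(scores, threshold):
    """Return indices of examples whose predicted gain clears the
    threshold; fine-tuning then proceeds on this filtered subset only."""
    return np.flatnonzero(scores > threshold)

kept = information_gain_filtration(predicted_gain, threshold=0.5)
print(len(kept), "of", n_examples, "examples kept")
```

The filtration step adds negligible cost relative to fine-tuning itself, which is what makes discarding low-gain examples a net efficiency win.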
1 code implementation • NeurIPS 2021 • Richard Antonello, Javier Turek, Vy Vo, Alexander Huth
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
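One way to realize this kind of prediction is to regress encoding performance onto a low-dimensional embedding of each feature space. The sketch below assumes that framing with entirely simulated data (embeddings, performance scores, and the linear relationship are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: each of N feature spaces (e.g. layers of different
# language models) has a low-dimensional embedding vector, and a known
# brain-encoding performance (mean voxel correlation) we want to predict.
n_spaces, d_embed = 40, 8
embeddings = rng.standard_normal((n_spaces, d_embed))
true_w = rng.standard_normal(d_embed)
performance = embeddings @ true_w + 0.05 * rng.standard_normal(n_spaces)

# Linear least squares: predict encoding performance from the embedding.
w, *_ = np.linalg.lstsq(embeddings, performance, rcond=None)
pred = embeddings @ w
r = np.corrcoef(pred, performance)[0, 1]
print(round(r, 3))
```

If such a regression generalizes to held-out feature spaces, the embedding can rank candidate feature spaces without fitting a full encoding model for each one.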