1 code implementation • 23 Oct 2023 • Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani, Sarath Chandar
In the age of artificial intelligence, the role of large language models (LLMs) is becoming increasingly central.
no code implementations • 11 Nov 2022 • Gabriele Prato, Yale Song, Janarthanan Rajendran, R Devon Hjelm, Neel Joshi, Sarath Chandar
We show that our method successfully enables vision transformers to encode the temporal component of video data.
no code implementations • 13 Oct 2021 • Gabriele Prato, Simon Guiroy, Ethan Caballero, Irina Rish, Sarath Chandar
The empirical science of neural scaling laws is a rapidly growing area of significant importance to the future of machine learning, particularly in light of recent breakthroughs achieved by large-scale pre-trained models such as GPT-3, CLIP and DALL-E.
no code implementations • 1 Jan 2021 • Gabriele Prato, Sarath Chandar
This includes held-out classes from the same dataset, as well as entire datasets the model was never trained on.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Gabriele Prato, Ella Charlaix, Mehdi Rezagholizadeh
State-of-the-art neural machine translation methods employ massive numbers of parameters.
1 code implementation • ACL 2019 • Gabriele Prato, Mathieu Duchesneau, Sarath Chandar, Alain Tapp
Considerable work has applied machine learning to image compression, but the compression of natural language has received little attention.