1 code implementation • 23 Oct 2022 • Vin Sachidananda, ZiYi Yang, Chenguang Zhu
Contrastive Learning has recently achieved state-of-the-art performance in a wide range of tasks.
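As context for this entry, here is a minimal sketch of the InfoNCE-style objective that contrastive learning methods commonly optimize; the encoder outputs, in-batch pairing, and temperature value are illustrative assumptions, not this paper's specific setup.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchors, positives, temperature=0.05):
    """Minimal in-batch contrastive (InfoNCE-style) loss.

    anchors, positives: (batch, dim) tensors where row i of each forms a
    positive pair; all other rows in the batch serve as negatives.
    The temperature is an illustrative choice, not the paper's.
    """
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature                     # pairwise cosine similarities
    labels = torch.arange(a.size(0), device=a.device)  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```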
no code implementations • 8 Feb 2022 • Vin Sachidananda, Shao-Yen Tseng, Erik Marchi, Sachin Kajarekar, Panayiotis Georgiou
By aligning audio representations to pretrained language representations and utilizing contrastive information between acoustic inputs, CALM is able to bootstrap audio embeddings competitive with existing audio representation models in only a few hours of training time.
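A minimal sketch of the alignment idea described here, assuming a trainable audio encoder and precomputed embeddings from a frozen pretrained language model; the class name, projection layer, and temperature are hypothetical illustration choices, not CALM's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioTextAligner(nn.Module):
    """Aligns audio embeddings to a frozen pretrained language
    representation space with an in-batch contrastive objective.
    audio_encoder, audio_dim, and text_dim are placeholders."""

    def __init__(self, audio_encoder, audio_dim, text_dim, temperature=0.07):
        super().__init__()
        self.audio_encoder = audio_encoder          # trainable
        self.proj = nn.Linear(audio_dim, text_dim)  # map audio into text space
        self.temperature = temperature

    def forward(self, audio_batch, text_embeddings):
        # text_embeddings: precomputed, frozen language-model embeddings,
        # row-paired with the audio clips in audio_batch
        audio = F.normalize(self.proj(self.audio_encoder(audio_batch)), dim=-1)
        text = F.normalize(text_embeddings, dim=-1)
        logits = audio @ text.T / self.temperature
        labels = torch.arange(audio.size(0), device=audio.device)
        # symmetric contrastive loss over paired (audio, transcript) rows
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.T, labels))
```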
no code implementations • EMNLP (sustainlp) 2021 • Vin Sachidananda, Jason S. Kessler, Yi-An Lai
In our experimentation, adaptive tokenization incurs a 6% increase in model parameters due to the introduction of 10k new domain-specific tokens, yet our approach, using 64 vCPUs, is 72x faster than further pretraining the language model on domain-specific corpora on 8 TPUs.
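A minimal sketch of the mechanics of adding domain-specific tokens to a pretrained model's vocabulary, using the Hugging Face transformers API; the token selection step (which the paper drives with corpus statistics) is stubbed out with a hypothetical hand-picked list.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Placeholder: the paper selects ~10k domain-specific tokens via corpus
# statistics; these three stand in for that list.
domain_tokens = ["pharmacokinetics", "immunoassay", "bioavailability"]

num_added = tokenizer.add_tokens(domain_tokens)
# Grow the embedding matrix so the new token ids have rows. New rows are
# randomly initialized by default; a common alternative is to seed them
# from each token's original subword embeddings.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocab size is now {len(tokenizer)}")
```

The speedup reported above follows from this being a one-time vocabulary and embedding change rather than further gradient-based pretraining.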
no code implementations • ICLR 2021 • Vin Sachidananda, ZiYi Yang, Chenguang Zhu
Due to widespread interest in machine translation and transfer learning, there are numerous algorithms for mapping multiple embeddings to a shared representation space.
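One widely used instance of such a mapping is orthogonal Procrustes alignment; below is a minimal numpy sketch, assuming two embedding matrices whose rows are already paired (e.g., via a bilingual seed dictionary). This illustrates the general setting the entry refers to, not this paper's specific method.

```python
import numpy as np

def procrustes_align(X, Y):
    """Find the orthogonal matrix Q minimizing ||X @ Q - Y||_F.

    X, Y: (n, d) embedding matrices with row i of X paired to row i of Y.
    Returns X mapped into Y's representation space.
    """
    u, _, vt = np.linalg.svd(X.T @ Y)
    return X @ (u @ vt)

# Toy usage with random data standing in for two embedding spaces.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
Y = X @ np.linalg.qr(rng.normal(size=(50, 50)))[0]  # rotated copy of X
assert np.allclose(procrustes_align(X, Y), Y, atol=1e-6)
```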
no code implementations • ACL 2019 • Ziyi Yang, Chenguang Zhu, Vin Sachidananda, Eric Darve
In this paper, we propose an approach for embedding imputation which uses grounded information in the form of a knowledge graph.
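A minimal sketch of the core idea of graph-grounded imputation, where an out-of-vocabulary word's embedding is estimated from its knowledge-graph neighbors; the paper learns this mapping with a graph network, so this neighbor-averaging baseline is only an illustration.

```python
import numpy as np

def impute_embedding(word, kg_neighbors, embeddings):
    """Estimate an embedding for `word` from its knowledge-graph neighbors.

    kg_neighbors: dict mapping a word to related words in the graph.
    embeddings: dict mapping known words to numpy vectors.
    A simple neighbor-average baseline, not the paper's learned model.
    """
    known = [embeddings[n] for n in kg_neighbors.get(word, []) if n in embeddings]
    if not known:
        return None  # no grounded information available for this word
    return np.mean(known, axis=0)

# Toy usage: "pug" is out of vocabulary, but its graph neighbors are known.
embeddings = {"dog": np.array([1.0, 0.0]), "pet": np.array([0.8, 0.2])}
kg_neighbors = {"pug": ["dog", "pet"]}
print(impute_embedding("pug", kg_neighbors, embeddings))  # [0.9, 0.1]
```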
2 code implementations • NeurIPS 2018 • Zi Yin, Vin Sachidananda, Balaji Prabhakar
We show both theoretically and empirically that the global anchor method is equivalent to the alignment method, a widely-used method for comparing word embeddings, in terms of detecting corpus-level language shifts.
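A minimal numpy sketch of the two corpus-shift measures being compared, assuming two embedding matrices E1 and E2 trained over a shared vocabulary: the alignment method measures the residual after the best orthogonal map between the spaces, while the global anchor method compares the two corpora's word-by-word inner-product (Gram) matrices directly, with no explicit alignment step.

```python
import numpy as np

def alignment_distance(E1, E2):
    """min over orthogonal Q of ||E1 @ Q - E2||_F (orthogonal Procrustes)."""
    u, _, vt = np.linalg.svd(E1.T @ E2)
    return np.linalg.norm(E1 @ (u @ vt) - E2)

def global_anchor_distance(E1, E2):
    """||E1 @ E1.T - E2 @ E2.T||_F: compares Gram matrices, in effect
    using every word in the shared vocabulary as an anchor."""
    return np.linalg.norm(E1 @ E1.T - E2 @ E2.T)

# Toy check: both measures respond to the same corpus-level perturbation.
rng = np.random.default_rng(0)
E1 = rng.normal(size=(500, 50))
E2 = E1 + 0.1 * rng.normal(size=(500, 50))
print(alignment_distance(E1, E2), global_anchor_distance(E1, E2))
```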