no code implementations • 18 May 2023 • Harshil Shah, Arthur Wilcke, Marius Cobzarenco, Cristi Cobzarenco, Edward Challis, David Barber
Natural language understanding includes the tasks of intent detection (identifying a user's objectives) and slot filling (extracting the entities relevant to those objectives).
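To make the two tasks concrete, here is a minimal illustrative sketch (not from the paper — the intent labels, slot names, and trigger rules below are all hypothetical) of mapping one utterance to an intent plus extracted slots:

```python
# Toy illustration of intent detection and slot filling (hypothetical rules).
def parse_utterance(tokens):
    # Intent detection: identify the user's objective from a trigger word.
    intent = "book_flight" if "fly" in tokens else "unknown"
    # Slot filling: extract the entities relevant to that objective.
    cities = {"london", "paris", "tokyo"}
    slots = {}
    for i, tok in enumerate(tokens):
        if tok in cities and i > 0 and tokens[i - 1] == "to":
            slots["destination"] = tok
    return intent, slots

print(parse_utterance("i want to fly to paris".split()))
# ('book_flight', {'destination': 'paris'})
```

Real systems replace these hand-written rules with learned classifiers and sequence taggers, but the input/output structure is the same.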
no code implementations • 29 Apr 2021 • Anjana Deva Prasad, Aditya Balu, Harshil Shah, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy
These derivatives define an approximate Jacobian, which is then used to perform the "backward" evaluation when training the deep learning models.
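As a generic sketch of the underlying idea (the paper's specific construction differs), a Jacobian can be approximated numerically and then substituted into a hand-written backward pass:

```python
import numpy as np

def approx_jacobian(f, x, eps=1e-6):
    """Forward-difference approximation of the Jacobian of f at x."""
    y = f(x)
    J = np.zeros((y.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - y) / eps  # finite-difference column j
    return J

# Toy function with a known analytic Jacobian [[x1, x0], [cos(x0), 0]].
f = lambda x: np.array([x[0] * x[1], np.sin(x[0])])
J = approx_jacobian(f, np.array([1.0, 2.0]))
```

At `x = (1, 2)` the analytic Jacobian is `[[2, 1], [cos(1), 0]]`, which the approximation recovers to within the step size.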
no code implementations • 30 Mar 2021 • Harshil Shah, Tim Xiao, David Barber
Linear-chain conditional random fields (CRFs) combined with contextual word embeddings have achieved state-of-the-art performance on sequence labeling tasks.
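For context, decoding in a linear-chain CRF finds the highest-scoring label sequence with the Viterbi algorithm; the emission and transition scores below are arbitrary illustrative values, not learned parameters:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Viterbi decoding for a linear-chain CRF.
    emissions: (T, K) per-position label scores; transitions: (K, K) pairwise scores."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # Score of every (previous label, current label) pair at position t.
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)   # best predecessor for each label
        score = total.max(axis=0)
    # Trace the best path backwards from the highest-scoring final label.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

emissions = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 0.5]])
transitions = np.array([[0.5, -1.0], [-1.0, 0.5]])
print(viterbi(emissions, transitions))  # [0, 0, 0]
```

In the combined model, the emission scores come from the contextual embeddings while the transition matrix captures label-to-label dependencies.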
no code implementations • 1 Jan 2021 • Harshil Shah, David Barber
However, active learning methods typically rely on supervised training and ignore the data points that have not yet been labelled.
1 code implementation • EMNLP (sustainlp) 2020 • Harshil Shah, Julien Fauqueur
In both cases, recent methods have achieved strong results by learning a point estimate to represent the relation; this is then used as the input to a relation classifier.
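A minimal sketch of the point-estimate setup described above (the embeddings, relation prototypes, and nearest-prototype classifier here are hypothetical stand-ins for a trained encoder and classifier):

```python
import numpy as np

# Hypothetical entity embeddings; in practice these come from a trained encoder.
emb = {"paris": np.array([1.0, 0.0]), "france": np.array([1.0, 1.0])}

def relation_point_estimate(head, tail):
    # A single vector (point estimate) representing the relation between the pair.
    return emb[tail] - emb[head]

def classify(r, prototypes):
    # Toy relation classifier: nearest prototype to the point estimate.
    return min(prototypes, key=lambda name: np.linalg.norm(r - prototypes[name]))

prototypes = {"capital_of": np.array([0.0, 1.0]), "born_in": np.array([1.0, -1.0])}
r = relation_point_estimate("paris", "france")
print(classify(r, prototypes))  # 'capital_of'
```

The point is that the relation is summarised by one vector `r` before classification, with no representation of uncertainty about it.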
no code implementations • 22 Feb 2019 • Sindhu Ghanta, Sriram Subramanian, Lior Khermosh, Harshil Shah, Yakov Goldberg, Swaminathan Sundararaman, Drew Roselli, Nisha Talagala
We argue that an ensemble of such metrics can be used to create a score representing the prediction quality in production.
no code implementations • 7 Feb 2019 • Sindhu Ghanta, Sriram Subramanian, Lior Khermosh, Swaminathan Sundararaman, Harshil Shah, Yakov Goldberg, Drew Roselli, Nisha Talagala
Deployment of machine learning (ML) algorithms in production for extended periods of time has uncovered new challenges such as monitoring and management of real-time prediction quality of a model in the absence of labels.
no code implementations • NeurIPS 2018 • Harshil Shah, David Barber
We introduce Generative Neural Machine Translation (GNMT), a latent variable architecture which is designed to model the semantics of the source and target sentences.
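Latent variable models of this kind are typically trained by maximising an evidence lower bound (ELBO). The following is a generic one-dimensional Gaussian sketch of that objective, not the GNMT architecture itself; all numbers are illustrative:

```python
import numpy as np

# Minimal ELBO sketch for a Gaussian latent variable z (illustrative, not GNMT).
rng = np.random.default_rng(0)
mu, log_var = 0.5, -1.0                                  # variational params of q(z)
z = mu + np.exp(0.5 * log_var) * rng.standard_normal()   # reparameterised sample
log_p_x_given_z = -0.5 * (1.3 - z) ** 2                  # toy reconstruction term
kl = 0.5 * (np.exp(log_var) + mu ** 2 - 1.0 - log_var)   # KL(q(z) || N(0, 1)), closed form
elbo = log_p_x_given_z - kl
```

In a sentence-level model the reconstruction term becomes the log-likelihood of the target sentence given the latent semantics, and the KL term regularises the latent representation towards the prior.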
no code implementations • 13 Jun 2018 • Harshil Shah, Bowen Zheng, David Barber
We introduce the Attentive Unsupervised Text (W)riter (AUTR), a word-level generative model for natural language.