no code implementations • EMNLP (insights) 2020 • Catherine Finegan-Dollak, Ashish Verma
Clustering documents by type—grouping invoices with invoices and articles with articles—is a desirable first step for organizing large collections of document scans.
no code implementations • 3 Oct 2023 • Ritesh Kumar, Saurabh Goyal, Ashish Verma, Vatche Isahagian
We present ProtoNER: a prototypical-network-based, end-to-end key-value pair (KVP) extraction model that allows the addition of new classes to an existing model while requiring only a minimal number of newly annotated training samples.
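The core prototypical-network idea can be illustrated with a minimal sketch (my own toy illustration, not the authors' model): each class is represented by the mean embedding of its few labeled support examples, and queries are assigned to the nearest prototype, which is what makes adding a new class with only a handful of annotations cheap.

```python
import numpy as np

def prototypes(support_emb, support_labels):
    """Compute one prototype per class: the mean embedding of its support examples."""
    classes = np.unique(support_labels)
    protos = np.stack([support_emb[support_labels == c].mean(axis=0)
                       for c in classes])
    return classes, protos

def classify(query_emb, classes, protos):
    """Assign each query embedding to the class of its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]
```

Adding a new class then only requires computing one more prototype from its annotated samples; no retraining of existing prototypes is needed in this simplified view.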
no code implementations • 7 Aug 2023 • Rahul Atul Bhope, K. R. Jayaram, Nalini Venkatasubramanian, Ashish Verma, Gegi Thomas
In particular, we examine the benefits of label distribution clustering on participant selection in federated learning.
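One way to realize label distribution clustering is to run k-means over each participant's per-class label histogram; the sketch below (my own simplified illustration, not the paper's exact method) groups participants this way, after which a selector can draw participants across clusters to get a more label-balanced round.

```python
import numpy as np

def cluster_by_label_distribution(label_hists, k, iters=50, seed=0):
    """Plain k-means over per-participant label histograms.

    label_hists: (num_participants, num_classes) array, rows summing to 1.
    Returns a cluster id per participant.
    """
    rng = np.random.default_rng(seed)
    centers = label_hists[rng.choice(len(label_hists), k, replace=False)]
    for _ in range(iters):
        # Assign each participant to the nearest cluster center.
        assign = np.argmin(
            np.linalg.norm(label_hists[:, None] - centers[None], axis=-1), axis=1)
        # Move each center to the mean histogram of its members.
        for c in range(k):
            if np.any(assign == c):
                centers[c] = label_hists[assign == c].mean(axis=0)
    return assign
```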
1 code implementation • 26 May 2023 • Fnu Mohbat, Mohammed J. Zaki, Catherine Finegan-Dollak, Ashish Verma
Visual document classifiers have shown impressive performance on in-distribution test sets.
no code implementations • 1 Sep 2021 • Anik Saha, Catherine Finegan-Dollak, Ashish Verma
Natural language processing for document scans and PDFs has the potential to enormously improve the efficiency of business processes.
no code implementations • 19 May 2021 • Pau-Chen Cheng, Kevin Eykholt, Zhongshu Gu, Hani Jamjoom, K. R. Jayaram, Enriquillo Valdez, Ashish Verma
In this paper, we introduce TRUDA, a new cross-silo FL system, employing a trustworthy and decentralized aggregation architecture to break down information concentration with regard to a single aggregator.
no code implementations • 1 Mar 2021 • Devansh Shah, Parijat Dube, Supriyo Chakraborty, Ashish Verma
We observe a significant drop in both natural and adversarial accuracies when AT is used in the federated setting as opposed to centralized training.
no code implementations • 1 Dec 2020 • K. R. Jayaram, Archit Verma, Ashish Verma, Gegi Thomas, Colin Sutcher-Shepard
Federated learning enables multiple distributed participants (potentially on different clouds) to collaboratively train machine/deep learning models by sharing parameters/gradients.
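The parameter-sharing step is most commonly realized as federated averaging (FedAvg); a minimal sketch of the aggregation rule, assuming each participant reports a flat parameter vector and its local example count:

```python
import numpy as np

def fedavg(updates, num_examples):
    """Example-count-weighted average of participants' parameter vectors.

    updates: list of 1-D parameter arrays, one per participant.
    num_examples: number of local training samples per participant,
    used to weight each contribution.
    """
    w = np.asarray(num_examples, dtype=float)
    w /= w.sum()
    return sum(wi * u for wi, u in zip(w, updates))
```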
1 code implementation • 22 Jul 2020 • Heiko Ludwig, Nathalie Baracaldo, Gegi Thomas, Yi Zhou, Ali Anwar, Shashank Rajamoni, Yuya Ong, Jayaram Radhakrishnan, Ashish Verma, Mathieu Sinn, Mark Purcell, Ambrish Rawat, Tran Minh, Naoise Holohan, Supriyo Chakraborty, Shalisha Whitherspoon, Dean Steuer, Laura Wynter, Hifaz Hassan, Sean Laguna, Mikhail Yurochkin, Mayank Agarwal, Ebube Chuba, Annie Abay
Federated Learning (FL) is an approach to conduct machine learning without centralizing training data in a single place, for reasons of privacy, confidentiality or data volume.
no code implementations • 24 Jun 2020 • Vaibhav Saxena, K. R. Jayaram, Saurav Basu, Yogish Sabharwal, Ashish Verma
We design a fast dynamic-programming-based optimizer that solves this problem in real time to determine which jobs can be scaled up or down, and use this optimizer in an autoscaler to dynamically change the allocated resources and batch sizes of individual DL jobs.
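A dynamic program of this shape can be sketched as a group knapsack (my own simplified formulation, not necessarily the paper's exact recurrence): each job offers a few (GPUs, utility) configurations, and the optimizer picks exactly one per job to maximize total utility within the cluster's GPU budget.

```python
def allocate(jobs, budget):
    """Group-knapsack DP: choose one (gpus, utility) option per job
    to maximize total utility without exceeding a GPU budget.

    jobs: list of option lists, each option a (gpus, utility) pair.
    Returns (best_utility, chosen option index per job).
    """
    NEG = float("-inf")
    best = [NEG] * (budget + 1)   # best[b] = max utility using exactly b GPUs
    best[0] = 0.0
    choice = [[-1] * (budget + 1) for _ in jobs]
    for j, options in enumerate(jobs):
        nxt = [NEG] * (budget + 1)
        for b in range(budget + 1):
            if best[b] == NEG:
                continue
            for i, (g, u) in enumerate(options):
                nb = b + g
                if nb <= budget and best[b] + u > nxt[nb]:
                    nxt[nb] = best[b] + u
                    choice[j][nb] = i
        best = nxt
    # Backtrack from the best reachable budget to recover the chosen options.
    b = max(range(budget + 1), key=lambda x: best[x])
    total, chosen = best[b], []
    for j in range(len(jobs) - 1, -1, -1):
        i = choice[j][b]
        chosen.append(i)
        b -= jobs[j][i][0]
    return total, chosen[::-1]
```

The quadratic-in-budget cost keeps this fast enough to rerun whenever jobs arrive or finish, which is what makes a real-time autoscaling loop feasible.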
no code implementations • 11 Feb 2020 • Sidharth Gupta, Parijat Dube, Ashish Verma
Projected Gradient Descent (PGD) based adversarial training has become one of the most prominent methods for building robust deep neural network models.
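The inner PGD loop can be sketched on a toy linear model (a hedged illustration of the standard L∞ attack, not the authors' training setup): repeatedly step the input in the sign of the loss gradient, then project back into the epsilon-ball around the clean input.

```python
import numpy as np

def pgd_linf(x, y, w, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD attack on a logistic-regression score w @ x, label y in {-1, +1}.

    Each step moves x in the direction that increases the loss,
    then clips back into the eps-ball around the original input.
    """
    x_adv = x.copy()
    for _ in range(steps):
        margin = y * (w @ x_adv)
        grad = -y * w / (1.0 + np.exp(margin))   # gradient of log-loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(grad)    # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps) # projection onto the eps-ball
    return x_adv
```

Adversarial training then replaces clean inputs with such `x_adv` during each training step, which is the procedure whose federated behavior the paper studies.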
1 code implementation • ICML 2020 • Saurabh Goyal, Anamitra R. Choudhury, Saurabh M. Raje, Venkatesan T. Chakaravarthy, Yogish Sabharwal, Ashish Verma
We demonstrate that our method attains up to 6.8x reduction in inference time with <1% loss in accuracy when applied to ALBERT, a highly compressed version of BERT.
no code implementations • 25 Oct 2019 • Koyel Mukherjee, Alind Khare, Ashish Verma
Training neural networks on image datasets generally requires extensive experimentation to find the optimal learning rate regime.
no code implementations • 20 Oct 2018 • Saurabh Goyal, Anamitra R Choudhury, Vivek Sharma, Yogish Sabharwal, Ashish Verma
The large number of weights in deep neural networks makes the models difficult to deploy in low-memory environments such as mobile phones, IoT edge devices, and "inferencing as a service" environments on the cloud.
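A standard baseline for shrinking such models is magnitude-based weight pruning; the sketch below (a generic illustration, not this paper's specific compression scheme) zeros out the smallest-magnitude fraction of weights so the matrix can be stored and served in sparse form.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value (ties may prune slightly more).
    thresh = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned
```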
1 code implementation • 17 Apr 2018 • Ashish Verma, Kranthi Koukuntla, Rohit Varma, Snehasis Mukherjee
The dataset contains some images captured by professional photographers, while the rest were captured by ordinary people.
no code implementations • 1 Nov 2017 • Dharma Teja Vooturi, Saurabh Goyal, Anamitra R. Choudhury, Yogish Sabharwal, Ashish Verma
The large number of weights in deep neural networks makes the models difficult to deploy in low-memory environments such as mobile phones, IoT edge devices, and "inferencing as a service" environments on the cloud.