1 code implementation • 15 Nov 2022 • Sourya Dey, Eric Davis
We present DLKoopman -- a software package for Koopman theory that uses deep learning to learn an encoding of a nonlinear dynamical system into a linear space, while simultaneously learning the linear dynamics.
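A minimal sketch of the Koopman-autoencoder idea this abstract describes, written in generic PyTorch rather than the DLKoopman API (the class name, layer sizes, and equal loss weighting below are illustrative assumptions, not the package's actual design):

```python
import torch
import torch.nn as nn

class KoopmanAutoencoder(nn.Module):
    """Sketch: a nonlinear encoder maps states into a latent space where
    the dynamics are constrained to be linear, and the linear (Koopman)
    operator K is learned jointly with the encoder/decoder."""
    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, state_dim))
        # Learned linear operator acting on the latent (encoded) state.
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, x_t, x_next):
        z_t = self.encoder(x_t)
        z_next = self.encoder(x_next)
        recon = self.decoder(z_t)      # autoencoder reconstruction
        z_pred = self.K(z_t)           # one linear step in latent space
        x_pred = self.decoder(z_pred)  # decoded next-state prediction
        return (nn.functional.mse_loss(recon, x_t)        # reconstruction
                + nn.functional.mse_loss(z_pred, z_next)  # linearity of dynamics
                + nn.functional.mse_loss(x_pred, x_next)) # state prediction

model = KoopmanAutoencoder(state_dim=3, latent_dim=8)
loss = model(torch.randn(32, 3), torch.randn(32, 3))
```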
no code implementations • 26 Jan 2022 • Sourya Dey, Walt Woods
This paper presents LAGOON -- an open source platform for understanding the complex ecosystems of Open Source Software (OSS) communities.
2 code implementations • 27 Mar 2020 • Sourya Dey, Saikrishna C. Kanala, Keith M. Chugg, Peter A. Beerel
In particular, we show the superiority of a greedy strategy and justify our choice of Bayesian optimization as the primary search methodology over random/grid search.
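As a rough illustration of why Bayesian optimization is preferred over random/grid search, here is a sketch using scikit-optimize's gp_minimize; the search space and the synthetic train_and_score objective are stand-ins, not the paper's actual setup:

```python
import math
from skopt import gp_minimize
from skopt.space import Integer, Real

def train_and_score(params):
    """Stand-in for training a network with these hyperparameters and
    returning a cost to minimize. Here: a synthetic cost surface with a
    minimum near num_layers=4, lr=1e-2."""
    num_layers, lr = params
    return (num_layers - 4) ** 2 + (math.log10(lr) + 2) ** 2

search_space = [
    Integer(1, 8, name="num_layers"),
    Real(1e-4, 1e-1, prior="log-uniform", name="lr"),
]

# A Gaussian-process surrogate steers evaluations toward promising
# regions, unlike random/grid search, which samples the space blindly.
result = gp_minimize(train_and_score, search_space, n_calls=30, random_state=0)
print(result.x, result.fun)
```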
2 code implementations • 4 Dec 2018 • Sourya Dey, Kuan-Wen Huang, Peter A. Beerel, Keith M. Chugg
Neural networks have proven to be extremely powerful tools for modern artificial intelligence applications, but computational and storage complexity remain limiting factors.
2 code implementations • 11 Jul 2018 • Sourya Dey, Keith M. Chugg, Peter A. Beerel
The algorithm and datasets are open-source.
1 code implementation • 31 May 2018 • Sourya Dey, Diandian Chen, Zongyang Li, Souvik Kundu, Kuan-Wen Huang, Keith M. Chugg, Peter A. Beerel
We demonstrate an FPGA implementation of a parallel and reconfigurable architecture for sparse neural networks, capable of on-chip training and inference.
no code implementations • 18 Nov 2017 • Sourya Dey, Peter A. Beerel, Keith M. Chugg
We propose a class of interleavers for a novel deep neural network (DNN) architecture that uses algorithmically pre-determined, structured sparsity to significantly lower memory and computational requirements, and speed up training.
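A conceptual sketch of an interleaver-defined sparse connectivity pattern: the connections exist before training, every output has a fixed fan-in, and every input a fixed fan-out. The function name, its parameters, and the banded base pattern are illustrative assumptions, and a seeded pseudo-random permutation stands in for the paper's algorithmically defined interleaver class:

```python
import numpy as np

def interleaved_mask(n_in: int, n_out: int, fanin: int, seed: int = 0):
    """Pre-determined structured sparse mask: each output connects to
    exactly `fanin` inputs and every input feeds the same number of
    outputs. A fixed bijection on input indices scatters the banded
    base pattern while preserving both degree constraints."""
    assert fanin <= n_in and (n_out * fanin) % n_in == 0
    n_edges = n_out * fanin
    # Banded base pattern: output o takes consecutive inputs (mod n_in),
    # so each output's fanin inputs are distinct by construction.
    base = np.arange(n_edges) % n_in
    # Interleaver: a fixed permutation of input indices.
    sigma = np.random.default_rng(seed).permutation(n_in)
    inputs = sigma[base]
    mask = np.zeros((n_out, n_in), dtype=np.uint8)
    for edge, src in enumerate(inputs):
        mask[edge // fanin, src] = 1
    return mask

print(interleaved_mask(n_in=8, n_out=4, fanin=4))
```

Because the pattern is fixed and regular, hardware can pre-compute memory access schedules instead of storing and decoding arbitrary sparse indices.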
no code implementations • 16 Nov 2017 • Sourya Dey
We designed a multilayer perceptron neural network to predict the price of a football (soccer) player using data on more than 15,000 players from the football simulation video game FIFA 2017.
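A minimal sketch of such a regression MLP in PyTorch; the feature count, layer widths, learning rate, and stand-in data below are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

n_features = 40  # illustrative; stands in for per-player attributes

# A small regression MLP of the kind described: player attributes in,
# predicted price out.
model = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),  # scalar price prediction
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in batch; in practice X holds player attributes and y prices.
X, y = torch.randn(128, n_features), torch.randn(128, 1)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```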
no code implementations • ICLR 2018 • Sourya Dey, Kuan-Wen Huang, Peter A. Beerel, Keith M. Chugg
We propose a novel way of reducing the number of parameters in the storage-hungry fully connected layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training.
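A minimal sketch of pre-defined sparsity in a fully connected layer, assuming a binary mask fixed before training; the random mask here is a simplification for brevity, whereas the paper's patterns are structured:

```python
import torch
import torch.nn as nn

class PreDefinedSparseLinear(nn.Module):
    """Fully connected layer with a binary mask chosen before training:
    masked-out connections never participate, so their weights are never
    effectively trained or used."""
    def __init__(self, n_in: int, n_out: int, density: float = 0.2, seed: int = 0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_out, n_in) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_out))
        g = torch.Generator().manual_seed(seed)
        mask = (torch.rand(n_out, n_in, generator=g) < density).float()
        self.register_buffer("mask", mask)  # fixed buffer, not trained

    def forward(self, x):
        # Multiplying by the mask zeroes both the absent weights and,
        # via autograd, their gradients.
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

layer = PreDefinedSparseLinear(784, 100, density=0.2)
out = layer(torch.randn(32, 784))
```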
no code implementations • 3 Nov 2017 • Sourya Dey, Yinan Shao, Keith M. Chugg, Peter A. Beerel
We propose a reconfigurable hardware architecture for deep neural networks (DNNs) capable of online training and inference, which uses algorithmically pre-determined, structured sparsity to significantly lower memory and computational requirements.