no code implementations • CVPR 2022 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey
Deep learning researchers have a keen interest in proposing novel activation functions that can boost neural network performance.
3 code implementations • 8 Nov 2021 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey
A good choice of activation function can significantly improve network performance.
no code implementations • 27 Sep 2021 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey
Well-known activation functions like ReLU or Leaky ReLU are non-differentiable at the origin.
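To make the claim concrete, here is a minimal NumPy sketch: the one-sided difference quotients of ReLU disagree at the origin, and convolving ReLU with a narrow Gaussian (an approximate identity) yields a smooth surrogate. This illustrates the general smoothing idea only, not the paper's closed-form construction.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# One-sided difference quotients at the origin disagree,
# so ReLU has no derivative at x = 0.
h = 1e-6
print((relu(0.0) - relu(-h)) / h)  # 0.0 from the left
print((relu(h) - relu(0.0)) / h)   # 1.0 from the right

# Convolving with a narrow Gaussian kernel (an approximate identity)
# produces a function that is smooth everywhere and converges back to
# ReLU as sigma -> 0.
def smoothed_relu(x, sigma=0.1, n=2001, span=10.0):
    t = np.linspace(-span, span, n)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()                 # normalize so weights sum to 1
    return np.sum(relu(x - t) * kernel)

print(smoothed_relu(0.0))  # small positive value; smooth at the origin
```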
no code implementations • 9 Sep 2021 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey
An activation function is a crucial component of a neural network, as it is what introduces non-linearity into the network.
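A quick way to see why the non-linearity matters: without it, stacked linear layers collapse into a single linear map, so depth adds no expressive power. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
x = rng.standard_normal(3)

# Two linear layers with no activation collapse into one linear map:
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))  # True

# A non-linearity between the layers (tanh here) breaks the collapse,
# which is what gives depth its expressive power.
print(np.allclose(W2 @ np.tanh(W1 @ x), (W2 @ W1) @ x))  # False
```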
no code implementations • 17 Jun 2021 • Koushik Biswas, Shilpak Banerjee, Ashish Kumar Pandey
We have proposed orthogonal-Padé activation functions, which are trainable, and show that they learn faster and improve accuracy on standard deep learning datasets and models.
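For intuition, a minimal PyTorch sketch of a trainable rational (Padé-style) activation follows. It uses the raw monomial basis and a standard "safe" denominator for simplicity; the orthogonal-Padé units in the paper instead build the numerator and denominator from orthogonal polynomial bases, so treat this as an illustration of the general idea rather than the proposed units.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Trainable Pade-style activation f(x) = P(x) / Q(x) with learnable
    coefficients. Illustrative sketch only: uses the monomial basis,
    not the orthogonal polynomial bases of the paper."""

    def __init__(self, num_degree=5, den_degree=4):
        super().__init__()
        self.p = nn.Parameter(0.1 * torch.randn(num_degree + 1))
        self.q = nn.Parameter(0.1 * torch.randn(den_degree))

    def forward(self, x):
        num = sum(c * x**i for i, c in enumerate(self.p))
        # 1 + |q_1 x + ... + q_n x^n| keeps the denominator positive.
        den = 1.0 + torch.abs(sum(c * x**(i + 1) for i, c in enumerate(self.q)))
        return num / den
```

Because the coefficients are ordinary parameters, the activation's shape is learned jointly with the network weights by backpropagation, e.g. `act = RationalActivation(); y = act(torch.randn(8))`.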
no code implementations • 28 Sep 2020 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey
In recent years, several novel activation functions arising from these basic functions have been proposed, and they have improved accuracy on some challenging datasets.
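Two widely used examples of activations built by composing such basic functions (shown only for context; not necessarily the family proposed here) are Swish, x · sigmoid(x), and Mish, x · tanh(softplus(x)):

```python
import torch
import torch.nn.functional as F

x = torch.linspace(-3.0, 3.0, 7)
swish = x * torch.sigmoid(x)           # Swish / SiLU: x * sigmoid(x)
mish  = x * torch.tanh(F.softplus(x))  # Mish: x * tanh(softplus(x))
print(swish)
print(mish)
```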
no code implementations • 8 Sep 2020 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey
Deep learning, at its core, consists of functions that are compositions of a linear transformation with a non-linear function known as an activation function.
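In symbols, each layer computes σ(Wx + b) for a linear map W, bias b, and activation σ, and a deep network chains such layers. A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

# Each layer is a linear transformation followed by an activation;
# the network is a composition of such layers.
net = nn.Sequential(
    nn.Linear(3, 4), nn.ReLU(),   # sigma(W1 x + b1)
    nn.Linear(4, 2), nn.ReLU(),   # sigma(W2 h + b2)
)
x = torch.randn(5, 3)             # batch of 5 three-dimensional inputs
print(net(x).shape)               # torch.Size([5, 2])
```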