no code implementations • 29 Apr 2024 • Saeed Damadi, Soroush Zolfaghari, Mahdi Rezaie, Jinglai Shen
This paper investigates whether the theoretical prerequisites for such convergence hold in the setting of neural network (NN) training by justifying each of the necessary conditions for convergence.
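The prerequisites in question (for instance, Lipschitz-continuous gradients) can at least be probed numerically. Below is a minimal sketch, assuming a toy NumPy logistic-regression loss rather than the paper's actual setting, that estimates a local Lipschitz constant of the gradient along random directions.

```python
# Hedged sketch: empirically probing one standard convergence prerequisite
# (Lipschitz-continuous gradients) on a toy logistic-regression loss. The
# loss, data, and probing scheme are illustrative assumptions, not the
# paper's setup.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))        # toy design matrix
y = rng.integers(0, 2, size=100)     # toy binary labels

def grad(w):
    """Gradient of the average logistic loss at parameters w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))  # predicted probabilities
    return X.T @ (p - y) / len(y)

# The ratio ||grad(w + eps*d) - grad(w)|| / (eps*||d||) lower-bounds the
# local Lipschitz constant of the gradient; sample it along random directions.
w = rng.normal(size=5)
eps = 1e-4
estimates = []
for _ in range(50):
    d = rng.normal(size=5)
    estimates.append(np.linalg.norm(grad(w + eps * d) - grad(w))
                     / (eps * np.linalg.norm(d)))
print(f"empirical local Lipschitz estimate: {max(estimates):.4f}")
```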
no code implementations • 22 Jan 2023 • Saeed Damadi, Golnaz Moharrer, Mostafa Cham
A Deep Neural Network (DNN) is a composition of vector-valued functions; training a DNN therefore requires computing the gradient of the loss function with respect to all of its parameters.
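A minimal sketch of this chain-rule view may help; the two-layer tanh network, squared-error loss, and all shapes below are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch: the gradient of a DNN's loss as a product of Jacobian
# matrices (the chain rule), on a tiny two-layer network with a
# squared-error loss. All shapes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(1, 3))
x, target = rng.normal(size=4), 0.5

def forward(W1, W2):
    z = W1 @ x                  # first-layer pre-activation
    a = np.tanh(z)              # vector-valued activation
    yhat = W2 @ a               # scalar output
    return z, a, yhat, 0.5 * (yhat[0] - target) ** 2

z, a, yhat, loss = forward(W1, W2)

# Backward pass: multiply the Jacobians of each composed map, outermost first.
dL_dy = yhat[0] - target                 # dL/dyhat (scalar)
dL_da = dL_dy * W2[0]                    # Jacobian of yhat w.r.t. a is W2
dL_dz = dL_da * (1 - np.tanh(z) ** 2)    # diagonal Jacobian of tanh
grad_W2 = dL_dy * a[None, :]             # dL/dW2
grad_W1 = np.outer(dL_dz, x)             # dL/dW1

# Finite-difference check on one entry of W1.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
fd = (forward(W1p, W2)[3] - loss) / eps
print(grad_W1[0, 0], fd)                 # should agree to ~1e-5
```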
no code implementations • 29 Sep 2022 • Saeed Damadi, Jinglai Shen
To the best of our knowledge, this is the first result in the sparse-optimization literature showing that the sequence of stochastic function values converges with probability one while the mini-batch size is held fixed across all steps.
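The excerpt does not name the algorithm, so the sketch below assumes a stochastic iterative hard-thresholding (IHT) scheme; the toy sparse least-squares problem, step size, and fixed mini-batch size of 32 are all illustrative choices.

```python
# Hedged sketch: stochastic iterative hard thresholding (IHT) with a FIXED
# mini-batch size at every step, on a toy sparse least-squares problem.
# The algorithm family, step size, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, d, s, batch = 200, 50, 5, 32          # batch size is fixed for all steps
A = rng.normal(size=(n, d))
x_true = np.zeros(d); x_true[:s] = rng.normal(size=s)
b = A @ x_true

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

x, lr = np.zeros(d), 1e-2
for t in range(2000):
    idx = rng.choice(n, size=batch, replace=False)  # same batch size each step
    g = A[idx].T @ (A[idx] @ x - b[idx]) / batch    # mini-batch gradient
    x = hard_threshold(x - lr * g, s)               # gradient step + projection

print("support recovered:", set(np.flatnonzero(x)) == set(range(s)))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```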
no code implementations • 18 Feb 2022 • Saeed Damadi, Erfan Nouri, Hamed Pirsiavash
ASNI-II learns both a sparse network and an initialization that is quantized and compressed, and from which the sparse network can be trained.
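The excerpt does not spell out the quantization or compression scheme; as one generic illustration (not the ASNI-II procedure itself), the sketch below keeps a layer's sparsity mask and replaces surviving weights with their sign times a single per-layer scalar.

```python
# Hedged sketch: one generic way to produce a quantized, compressed
# initialization from a learned sparse layer -- keep the sparsity mask and
# replace surviving weights by their sign times a per-layer scalar. This is
# an illustration inspired by the description, NOT the ASNI-II procedure.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(64, 64))                     # stand-in for a learned layer
mask = np.abs(W) > np.quantile(np.abs(W), 0.9)    # keep top-10% magnitudes

alpha = np.abs(W[mask]).mean()                    # one scalar per layer
W_init = np.where(mask, np.sign(W) * alpha, 0.0)  # quantized sparse init

# Storage: the mask (1 bit/weight) + signs (1 bit/surviving weight) + alpha,
# versus 32 bits per dense float weight.
dense_bits = W.size * 32
compressed_bits = W.size + mask.sum() + 32
print(f"compression ratio: {dense_bits / compressed_bits:.1f}x")
```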