no code implementations • 15 Apr 2024 • Nachuan Xiao, Kuangyu Ding, Xiaoyin Hu, Kim-Chuan Toh
Preliminary numerical experiments on deep learning tasks illustrate that our proposed framework yields efficient variants of Lagrangian-based methods with convergence guarantees for nonconvex nonsmooth constrained optimization problems.
no code implementations • 19 Jul 2023 • Nachuan Xiao, Xiaoyin Hu, Kim-Chuan Toh
We further illustrate that our scheme yields variants of SGD-type methods, which enjoy guaranteed convergence in training nonsmooth neural networks.
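The setting described above — SGD-type updates on a nonsmooth network — can be sketched minimally. The following is an illustrative example, not the paper's scheme: plain minibatch SGD on a tiny one-hidden-layer ReLU network (ReLU makes the loss nonsmooth), with the ReLU "derivative" taken as a subgradient at the kink. All names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                              # toy inputs
y = np.maximum(X @ np.array([1.0, -2.0, 0.5]), 0.0)       # nonsmooth target

W1 = rng.normal(scale=0.5, size=(3, 8))                   # hidden-layer weights
w2 = rng.normal(scale=0.5, size=8)                        # output weights
lr = 0.05                                                 # step size

def loss_and_grads(W1, w2, Xb, yb):
    h = np.maximum(Xb @ W1, 0.0)          # ReLU hidden layer (nonsmooth)
    err = h @ w2 - yb
    loss = 0.5 * np.mean(err ** 2)
    # Subgradient-style backprop: ReLU "derivative" taken as 1 where h > 0
    g_pred = err / len(yb)
    g_w2 = h.T @ g_pred
    g_h = np.outer(g_pred, w2) * (h > 0)
    g_W1 = Xb.T @ g_h
    return loss, g_W1, g_w2

loss0, _, _ = loss_and_grads(W1, w2, X, y)
for step in range(300):
    idx = rng.choice(len(X), size=16, replace=False)      # draw a minibatch
    _, g_W1, g_w2 = loss_and_grads(W1, w2, X[idx], y[idx])
    W1 -= lr * g_W1                                       # SGD update
    w2 -= lr * g_w2
loss1, _, _ = loss_and_grads(W1, w2, X, y)
print(loss0, loss1)  # training loss should decrease
```

The subgradient choice at the kink (here, 0 where the pre-activation is exactly 0) is precisely the kind of nondifferentiability that the paper's convergence analysis is designed to handle.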
no code implementations • 6 May 2023 • Nachuan Xiao, Xiaoyin Hu, Xin Liu, Kim-Chuan Toh
In this paper, we present a comprehensive study on the convergence properties of Adam-family methods for nonsmooth optimization, especially in the training of nonsmooth neural networks.
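For reference, the standard Adam update rule (Kingma and Ba) that Adam-family methods build on can be applied to a simple nonsmooth objective. This sketch is illustrative only and is not the paper's analysis: it minimizes f(x) = |x| + 0.5 x², using a subgradient of |x| at the kink.

```python
import numpy as np

def subgrad(x):
    # Subgradient of f(x) = |x| + 0.5 * x^2, choosing 0 for |.| at x = 0
    return np.sign(x) + x

x = np.array([3.0, -2.0])
m = np.zeros_like(x)                       # first-moment estimate
v = np.zeros_like(x)                       # second-moment estimate
beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8

for t in range(1, 2001):
    g = subgrad(x)
    m = beta1 * m + (1 - beta1) * g        # update biased first moment
    v = beta2 * v + (1 - beta2) * g * g    # update biased second moment
    m_hat = m / (1 - beta1 ** t)           # bias correction
    v_hat = v / (1 - beta2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)

print(x)  # iterates should settle near the minimizer x = 0
```

With a constant step size the iterates oscillate within roughly `lr` of the minimizer; a diminishing step size would be needed for exact convergence, which is one of the subtleties such convergence analyses address.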