no code implementations • 28 Aug 2023 • Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang
While recommender systems (RS) have advanced significantly through deep learning, current RS approaches usually train and fine-tune models on task-specific datasets, which limits their generalizability to new recommendation tasks and, due to constraints on model scale and data size, their ability to leverage external knowledge.
no code implementations • 9 Jan 2022 • Arindam Banerjee, Tiancong Chen, Xinyan Li, Yingxue Zhou
Recent years have seen advances in generalization bounds for noisy stochastic algorithms, especially stochastic gradient Langevin dynamics (SGLD) based on stability (Mou et al., 2018; Li et al., 2020) and information theoretic approaches (Xu and Raginsky, 2017; Negrea et al., 2019; Steinke and Zakynthinou, 2020).
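For readers skimming this entry, the SGLD iterate these bounds concern is plain SGD plus injected Gaussian noise. A minimal NumPy sketch, where the inverse temperature beta and the variable names are our notation rather than the paper's:

```python
import numpy as np

def sgld_step(theta, grad, step_size, beta=1.0, rng=np.random.default_rng()):
    """One SGLD update: a stochastic gradient step plus isotropic Gaussian
    noise, scaled so the iterates approximately sample from
    exp(-beta * loss) as the step size shrinks."""
    noise = rng.normal(size=theta.shape)
    return theta - step_size * grad + np.sqrt(2.0 * step_size / beta) * noise
```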
no code implementations • 26 Feb 2021 • Yingxue Zhou, Xinyan Li, Arindam Banerjee
Our experiments on a variety of benchmark datasets (MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100) with various networks (VGG and ResNet) validate the theoretical properties of NT-SGD, i.e., NT-SGD matches the speed and accuracy of vanilla SGD while effectively working with sparse gradients, and can successfully escape poor local minima.
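The NT-SGD algorithm is not spelled out in this snippet; purely as an illustration, a noisy truncated step might look like the sketch below, where the hard-threshold truncation rule and the noise scale are our guesses rather than the paper's exact algorithm:

```python
import numpy as np

def nt_sgd_step(theta, grad, lr, threshold, noise_std,
                rng=np.random.default_rng()):
    """Illustrative NT-SGD step: zero out small-magnitude gradient
    coordinates (truncation, yielding a sparse gradient), then perturb
    with Gaussian noise to help escape poor local minima. The precise
    truncation rule and noise calibration in the paper may differ."""
    truncated = np.where(np.abs(grad) >= threshold, grad, 0.0)
    noise = noise_std * rng.normal(size=grad.shape)
    return theta - lr * (truncated + noise)
```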
no code implementations • NeurIPS 2020 • Yingxue Zhou, Belhal Karimi, Jinxing Yu, Zhiqiang Xu, Ping Li
Adaptive gradient methods such as AdaGrad, RMSprop, and Adam have been the optimizers of choice for deep learning due to their fast training speed.
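For context, the Adam update referenced here (the standard formulation of Kingma and Ba, 2015, not a contribution of this paper) maintains exponential moving averages of the gradient and its elementwise square to set a per-coordinate step size:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update at iteration t >= 1: moving averages of the gradient
    (m) and its elementwise square (v), with bias correction, give a
    per-coordinate adaptive step size."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)                     # bias-corrected first moment
    v_hat = v / (1 - b2**t)                     # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```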
no code implementations • ICLR 2021 • Yingxue Zhou, Zhiwei Steven Wu, Arindam Banerjee
Existing lower bounds on private ERM show that such dependence on $p$ is inevitable in the worst case.
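For concreteness, the worst-case dependence on $p$ referred to here is, up to constants and logarithmic factors, the known $(\varepsilon, \delta)$-DP ERM lower bound of Bassily et al. (2014) for Lipschitz convex losses over a bounded domain $\mathcal{C}$, with $n$ samples in $p$ dimensions (notation ours):

$$\mathbb{E}\big[\hat{L}(\theta_{\mathrm{priv}})\big] - \min_{\theta \in \mathcal{C}} \hat{L}(\theta) \;=\; \Omega\!\left(\min\left\{1,\; \frac{\sqrt{p}}{n\varepsilon}\right\}\right).$$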
no code implementations • 24 Jun 2020 • Yingxue Zhou, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, Arindam Banerjee
We obtain this rate by providing the first analyses of a collection of private gradient-based methods, including the adaptive algorithms DP RMSProp and DP Adam.
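The gradient privatization such methods build on is, in the standard Abadi et al. (2016) pattern, per-example clipping plus Gaussian noise; a sketch under that assumption (the paper's exact clipping rule and noise calibration may differ), whose output can then drive the RMSProp or Adam moment updates:

```python
import numpy as np

def privatize_gradients(per_example_grads, clip_norm, noise_mult,
                        rng=np.random.default_rng()):
    """Generic DP gradient step: clip each per-example gradient to L2 norm
    clip_norm, average, then add Gaussian noise calibrated to the clipping
    bound. The result is a privatized gradient estimate."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(scale=noise_mult * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return avg + noise
```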
no code implementations • 23 Feb 2020 • Arindam Banerjee, Tiancong Chen, Yingxue Zhou
Existing approaches for deterministic non-smooth deep nets typically need to bound the Lipschitz constant of such networks, but these bounds are quite large and may even grow with the training set size, yielding vacuous generalization bounds.
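To see why such Lipschitz bounds become large, the usual deterministic bound multiplies the spectral norms of all layers, which compounds multiplicatively with depth; a minimal sketch for a ReLU network (function and variable names ours):

```python
import numpy as np

def naive_lipschitz_bound(weight_matrices):
    """Product of layer spectral norms: the standard, often very loose,
    upper bound on the Lipschitz constant of a ReLU network, of the kind
    the passage above calls 'quite large'."""
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, ord=2)       # largest singular value
    return bound
```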
no code implementations • 24 Jul 2019 • Xinyan Li, Qilong Gu, Yingxue Zhou, Tiancong Chen, Arindam Banerjee
(2) how can we characterize the stochastic optimization dynamics of SGD with fixed and adaptive step sizes and diagonal pre-conditioning, based on the first and second moments of stochastic gradients (SGs)?
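As a concrete illustration of the quantities in question (2), the empirical first and second moments of stochastic gradients and a diagonally pre-conditioned step can be sketched as follows (RMSProp-style inverse-root scaling assumed; not necessarily the paper's exact setup):

```python
import numpy as np

def sg_moments(per_example_grads):
    """Empirical first and elementwise second moments of the stochastic
    gradients (SGs) on which the dynamics are conditioned."""
    g = np.asarray(per_example_grads)           # shape: (batch, dim)
    return g.mean(axis=0), (g**2).mean(axis=0)

def preconditioned_sgd_step(theta, grad, second_moment, lr, eps=1e-8):
    """Diagonal pre-conditioning: scale each coordinate by the inverse
    square root of its estimated second moment."""
    return theta - lr * grad / (np.sqrt(second_moment) + eps)
```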
no code implementations • 21 Feb 2016 • Chencheng Li, Pan Zhou, Yingxue Zhou, Kaigui Bian, Tao Jiang, Susanto Rahardja
An increasing number of people participate in social networks, generating massive amounts of online social data.
no code implementations • 1 Sep 2015 • Pan Zhou, Yingxue Zhou, Dapeng Wu, Hai Jin
In addition, none of them has considered both the privacy of users' contexts (e.g., social status, age, and hobbies) and of video service vendors' repositories, both of which are extremely sensitive and of significant commercial value.