no code implementations • 23 Apr 2023 • Xiaozhe Gu, Zixun Zhang, Yuncheng Jiang, Tao Luo, Ruimao Zhang, Shuguang Cui, Zhen Li
Despite their simplicity, stochastic gradient descent (SGD)-like algorithms are successful in training deep neural networks (DNNs).
no code implementations • 6 Jun 2022 • Daniel Gerlinghoff, Zhehui Wang, Xiaozhe Gu, Rick Siow Mong Goh, Tao Luo
However, current accelerators for SNNs cannot adequately support the emerging encoding schemes.
1 code implementation • 19 Nov 2021 • Daniel Gerlinghoff, Zhehui Wang, Xiaozhe Gu, Rick Siow Mong Goh, Tao Luo
Compiler frameworks are crucial for the widespread use of FPGA-based deep learning accelerators.
no code implementations • 14 May 2021 • Zhehui Wang, Xiaozhe Gu, Rick Goh, Joey Tianyi Zhou, Tao Luo
Traditionally, a spike train needs around one thousand time steps to approach accuracy similar to that of its ANN counterpart.
no code implementations • 11 Sep 2019 • Xiaozhe Gu, Arvind Easwaran
As pointed out in [17], ML models work well in the "training space" (i.e., the feature space with sufficient training data), but they cannot extrapolate beyond the training space.