no code implementations • 27 Jun 2023 • Fu-Ming Guo
This paper introduces SparseOptimizer, a novel deep learning optimizer that exploits Moreau-Yosida regularization to naturally induce sparsity in large language models such as BERT, ALBERT, and GPT.
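The abstract does not give the optimizer's update rule, but Moreau-Yosida regularization of an L1 penalty has a well-known closed-form proximal step (soft thresholding), which is the standard mechanism for driving weights exactly to zero. A minimal sketch, assuming a plain proximal-gradient-style step (the function name `soft_threshold` and the toy weights are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(w, lam):
    # Proximal operator of lam * ||w||_1, i.e. the Moreau-Yosida
    # regularized L1 step: shrink each weight toward zero by lam,
    # and set weights whose magnitude is below lam exactly to zero.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

# Toy weight vector: small-magnitude entries are zeroed out,
# which is how the proximal step induces sparsity.
w = np.array([0.8, -0.05, 0.3, -0.6, 0.02])
w_sparse = soft_threshold(w, lam=0.1)
# -> [ 0.7, 0.0, 0.2, -0.5, 0.0 ]
```

In a full optimizer this step would be applied after each gradient update, so sparsity emerges during training rather than from post-hoc pruning.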
no code implementations • 30 May 2022 • Fu-Ming Guo, Yingfang Fan
Histology is an essential tool for lung cancer diagnosis.
1 code implementation • 16 Dec 2021 • Nikhil Maddikunta, Huijun Zhao, Sumit Keswani, Alfy Samuel, Fu-Ming Guo, Nishan Srishankar, Vishwa Pardeshi, Austin Huang
In the past, computer vision systems for digitized documents could rely on systematically captured, high-quality scans.
no code implementations • 16 Jun 2021 • Fu-Ming Guo, Austin Huang
Integration of BSR (Block Sparse Row) operations enables the TVM runtime execution to leverage structured pattern sparsity induced by model regularization.
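To make the BSR idea concrete, here is a minimal pure-NumPy sketch of Block Sparse Row storage and a matrix-vector product over it, assuming 2x2 blocks (the function and variable names are illustrative; TVM's actual BSR kernels are generated and scheduled differently):

```python
import numpy as np

def bsr_matvec(data, indices, indptr, x, block=2):
    """Block Sparse Row matvec sketch.
    data:    (nblocks, block, block) array of the stored non-zero blocks
    indices: block-column index of each stored block
    indptr:  offsets into data/indices for each block row
    Whole zero blocks are never stored, so they are skipped entirely --
    this is the structured sparsity a BSR kernel exploits.
    """
    n_brows = len(indptr) - 1
    y = np.zeros(n_brows * block)
    for br in range(n_brows):
        for k in range(indptr[br], indptr[br + 1]):
            bc = indices[k]
            y[br*block:(br+1)*block] += data[k] @ x[bc*block:(bc+1)*block]
    return y

# 4x4 matrix with two non-zero 2x2 diagonal blocks: I and 2*I
data = np.stack([np.eye(2), 2.0 * np.eye(2)])
indices = np.array([0, 1])    # block-column of each stored block
indptr = np.array([0, 1, 2])  # block row 0 -> block 0; block row 1 -> block 1
y = bsr_matvec(data, indices, indptr, np.ones(4))
# -> [1., 1., 2., 2.]
```

Because pruning is done at block granularity, the inner `data[k] @ ...` stays a dense, vectorizable kernel, which is what makes this pattern friendly to compiled runtimes.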
no code implementations • 27 Sep 2019 • Fu-Ming Guo, Sijia Liu, Finlay S. Mungall, Xue Lin, Yanzhi Wang
Is it possible to compress these large-scale language representation models?
no code implementations • 6 Sep 2019 • Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang
Model compression techniques for Deep Neural Networks (DNNs) have been widely acknowledged as an effective way to achieve acceleration on a variety of platforms, and DNN weight pruning is a straightforward and effective method.
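The simplest form of weight pruning referred to here is magnitude-based: zero out the smallest-magnitude fraction of the weights. A minimal sketch under that assumption (the function name and the toy tensor are illustrative, not from the paper):

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    # Zero out the `sparsity` fraction of weights with the
    # smallest absolute value; larger weights are kept as-is.
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    return w * (np.abs(w) > thresh)

w = np.array([0.9, -0.1, 0.5, -0.05, 0.7, 0.2])
pruned = magnitude_prune(w, sparsity=0.5)
# -> [0.9, 0.0, 0.5, 0.0, 0.7, 0.0]
```

Unstructured pruning like this gives irregular sparsity; the structured (block-pattern) variants discussed in the entries above trade some flexibility for hardware-friendly regularity.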