no code implementations • ECCV 2020 • Kunyuan Du, Ya Zhang, Haibing Guan, Qi Tian, Shenggan Cheng, James Lin
Compared with low-bit models trained directly, the proposed framework brings 0.5% to 3.4% accuracy gains to three different quantization schemes.
2 code implementations • 11 Nov 2021 • Bozitao Zhong, Xiaoming Su, Minhua Wen, Sichen Zuo, Liang Hong, James Lin
We evaluated the accuracy and efficiency of optimizations on CPUs and GPUs, and showed the large-scale prediction capability by running ParaFold inferences of 19,704 small proteins in five hours on one NVIDIA DGX-2.
1 code implementation • 22 Oct 2020 • Jan Christian Blaise Cruz, Jose Kristian Resabal, James Lin, Dan John Velasco, Charibeth Cheng
Lastly, we perform analyses on transfer learning techniques to shed light on their true performance when operating in low-data domains through the use of degradation tests.
no code implementations • 31 Jan 2020 • James Lin, Kevin Kilgour, Dominik Roblek, Matthew Sharifi
With the rise of low power speech-enabled devices, there is a growing demand to quickly produce models for recognizing arbitrary sets of keywords.
Ranked #10 on Keyword Spotting on Google Speech Commands (Google Speech Commands V2 12 metric).