1 code implementation • 23 Aug 2023 • Fanqi Lin, Shiyu Huang, WeiWei Tu
Under such a framework, we also propose a provably efficient diversity reinforcement learning algorithm.
1 code implementation • 13 Jun 2023 • Xu Wang, Huan Zhao, WeiWei Tu, Quanming Yao
Next, to automatically fuse these three generative tasks, we design a surrogate metric based on the total energy to search for the weight distribution over the three pretext tasks, since total energy corresponds to the quality of the 3D conformer. Extensive experiments on 2D molecular graphs demonstrate the accuracy, efficiency, and generalization ability of the proposed 3D PGT compared to various pre-training baselines.
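The abstract does not spell out how the total-energy surrogate drives the weight search, so the following is only a minimal sketch of the idea: enumerate weight vectors on the 3-task simplex and keep the one with the lowest surrogate energy. The `surrogate_total_energy` function here is a toy stand-in, not the learned energy model from the paper.

```python
def surrogate_total_energy(weights):
    # Toy stand-in for the learned surrogate: lower "total energy"
    # means a better 3D conformer. We pretend the best fusion is an
    # uneven mix of the three pretext tasks.
    target = (0.5, 0.3, 0.2)
    return sum((w - t) ** 2 for w, t in zip(weights, target))

def search_task_weights(step=0.1):
    """Grid-search the 3-task weight simplex (weights sum to 1)
    for the lowest surrogate total energy."""
    best_w, best_e = None, float("inf")
    steps = int(round(1 / step))
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            k = steps - i - j
            w = (i * step, j * step, k * step)
            e = surrogate_total_energy(w)
            if e < best_e:
                best_w, best_e = w, e
    return best_w, best_e

w, e = search_task_weights()
```

In practice the search space and surrogate would be far richer; the sketch only shows the shape of "pick pretext-task weights by minimizing an energy-based proxy for conformer quality."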
no code implementations • 4 Jan 2022 • Xu Wang, Huan Zhao, WeiWei Tu, Hao Li, Yu Sun, Xiaochen Bo
Double-strand DNA breaks (DSBs) are a form of DNA damage that can cause abnormal chromosomal rearrangements.
1 code implementation • 20 Aug 2021 • Xiawei Guo, Yuhan Quan, Huan Zhao, Quanming Yao, Yong Li, WeiWei Tu
Tabular data prediction (TDP) is one of the most popular industrial applications, and various methods have been designed to improve the prediction performance.
no code implementations • 14 Apr 2021 • Huan Zhao, Quanming Yao, WeiWei Tu
In this work, to obtain data-specific GNN architectures and address the computational challenges faced by NAS approaches, we propose Search to Aggregate NEighborhood (SANE), a framework that automatically designs data-specific GNN architectures.
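The snippet above names the search target (neighborhood aggregation) but not the mechanics. As a hedged sketch, one common way such searches are made tractable is a DARTS-style differentiable relaxation: a softmax-weighted mixture over candidate aggregators during search, then keeping the top-weighted one. The candidate set and helper names below are illustrative, not SANE's actual API.

```python
import numpy as np

# Illustrative candidate aggregators over a neighborhood feature
# matrix of shape (n_neighbors, dim).
AGGREGATORS = {
    "mean": lambda h: h.mean(axis=0),
    "max":  lambda h: h.max(axis=0),
    "sum":  lambda h: h.sum(axis=0),
}

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def mixed_aggregate(neigh_feats, alpha):
    """Continuous relaxation of the discrete aggregator choice:
    softmax(alpha)-weighted sum of every candidate's output."""
    w = softmax(alpha)
    outs = [f(neigh_feats) for f in AGGREGATORS.values()]
    return sum(wi * o for wi, o in zip(w, outs))

def derive_architecture(alpha):
    """After search, collapse the mixture to its strongest candidate."""
    return list(AGGREGATORS)[int(np.argmax(alpha))]
```

During search, `alpha` would be optimized by gradient descent alongside the GNN weights; here it is just an input vector.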
no code implementations • Springer Cham 2019 • Isabelle Guyon, Lisheng Sun-Hosoya, Marc Boullé, Hugo Jair Escalante, Sergio Escalera, Zhengying Liu, Damir Jajetic, Bisakha Ray, Mehreen Saeed, Michèle Sebag, Alexander Statnikov, WeiWei Tu, Evelyne Viegas
The solutions of the winners are systematically benchmarked over all datasets of all rounds and compared with canonical machine learning algorithms available in scikit-learn.
Ranked #1 on AutoML on Chalearn-AutoML-1
no code implementations • 23 Nov 2018 • Quanming Yao, Xiawei Guo, James T. Kwok, WeiWei Tu, Yuqiang Chen, Wenyuan Dai, Qiang Yang
To meet the standard of differential privacy, noise is usually added to the original data, which inevitably degrades the prediction performance of subsequent learning algorithms.
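The privacy-utility trade-off described above can be illustrated with the standard Laplace mechanism (this is a generic sketch, not the paper's specific method): noise scaled to sensitivity / epsilon is added before release, so a smaller epsilon (stronger privacy) means noisier data for any downstream learner.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    rng = rng if rng is not None else np.random.default_rng(0)
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# Example: privatize a count query (sensitivity 1). Halving epsilon
# doubles the noise scale, hurting downstream model accuracy.
true_count = 100
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

The noise is unbiased, so averages over many releases recover the true value, but any single release is perturbed.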