1 code implementation • 6 May 2024 • Xingcheng Fu, Yisen Gao, Yuecen Wei, Qingyun Sun, Hao Peng, JianXin Li, Xianxian Li
Diffusion models have made significant contributions to computer vision, sparking growing interest in the community in applying them to graph generation.
no code implementations • 28 Feb 2024 • Feihong Lu, Weiqi Wang, Yangyifei Luo, Ziqin Zhu, Qingyun Sun, Baixuan Xu, Haochen Shi, Shiqi Gao, Qian Li, Yangqiu Song, JianXin Li
However, understanding the intention behind social media posts remains challenging due to the implicitness of intentions in social media posts, the need for cross-modality understanding of both text and images, and the presence of noisy information such as hashtags, misspelled words, and complicated abbreviations.
1 code implementation • 9 Feb 2024 • Haonan Yuan, Qingyun Sun, Xingcheng Fu, Cheng Ji, JianXin Li
Leveraging the Information Bottleneck (IB) principle, we first propose that the expected optimal representations should satisfy the Minimal-Sufficient-Consensual (MSC) condition.
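For context, the textbook Information Bottleneck objective that the Minimal and Sufficient parts of the MSC condition refer to can be written as $\min_{p(z \mid x)} I(X;Z) - \beta\, I(Z;Y)$, where $Z$ is the learned representation: a small $I(X;Z)$ corresponds to minimality and a large $I(Z;Y)$ to sufficiency, while the Consensual requirement is specific to this paper. This is the standard form of the objective, not the paper's exact formulation.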
no code implementations • 19 Dec 2023 • Yuecen Wei, Haonan Yuan, Xingcheng Fu, Qingyun Sun, Hao Peng, Xianxian Li, Chunming Hu
Specifically, PoinDP first learns the hierarchy weights for each entity based on the Poincaré model in hyperbolic space.
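As background, here is a minimal sketch of the Poincaré-ball distance that hierarchy-aware methods of this kind build on; it is the standard closed form, written for illustration, and not PoinDP's implementation.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points strictly inside the unit Poincare ball."""
    sq_u, sq_v = np.dot(u, u), np.dot(v, v)
    sq_diff = np.dot(u - v, u - v)
    # Standard closed form: arcosh(1 + 2*||u-v||^2 / ((1-||u||^2) * (1-||v||^2)))
    return float(np.arccosh(1.0 + 2.0 * sq_diff / ((1.0 - sq_u) * (1.0 - sq_v))))
```

Points near the boundary of the ball are exponentially far from the origin, which is what makes this model a natural fit for hierarchical (tree-like) structure.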
1 code implementation • NeurIPS 2023 • Haonan Yuan, Qingyun Sun, Xingcheng Fu, Ziwei Zhang, Cheng Ji, Hao Peng, JianXin Li
To the best of our knowledge, we are the first to study OOD generalization on dynamic graphs from the environment learning perspective.
2 code implementations • NeurIPS 2023 • Beining Yang, Kai Wang, Qingyun Sun, Cheng Ji, Xingcheng Fu, Hao Tang, Yang You, JianXin Li
We validate the proposed SGDD across 9 datasets and achieve state-of-the-art results on all of them: for example, on the YelpChi dataset, our approach maintains 98.6% of the test accuracy of training on the original graph dataset with a 1,000-fold reduction in graph scale.
1 code implementation • 11 Apr 2023 • Xingcheng Fu, Yuecen Wei, Qingyun Sun, Haonan Yuan, Jia Wu, Hao Peng, JianXin Li
We find that labeled training nodes with different hierarchical properties have a significant impact on node classification tasks, and we confirm this in our experiments.
1 code implementation • 28 Jan 2023 • Cheng Ji, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Qingyun Sun, Philip S. Yu
Contrastive Learning (CL) has proven to be a powerful self-supervised approach for a wide range of domains, including computer vision and graph representation learning.
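For reference, contrastive methods in these domains typically optimize an InfoNCE-style objective of the form $\mathcal{L}_i = -\log \frac{\exp(\mathrm{sim}(z_i, z_i^{+})/\tau)}{\sum_k \exp(\mathrm{sim}(z_i, z_k)/\tau)}$, where $z_i^{+}$ is a positive (augmented) view of sample $i$, the sum runs over the positive and the negatives, and $\tau$ is a temperature; this is the generic form, and the paper's loss may differ.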
no code implementations • 30 Dec 2022 • Qingyun Sun, JianXin Li, Beining Yang, Xingcheng Fu, Hao Peng, Philip S. Yu
Most Graph Neural Networks follow the message-passing paradigm, assuming the observed structure depicts the ground-truth node relationships.
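A minimal sketch of the message-passing step this paradigm refers to, in a generic mean-aggregation form (illustrative only, not the paper's model):

```python
import numpy as np

def message_passing_layer(H: np.ndarray, A: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One generic message-passing step: aggregate neighbor features, then transform.

    H: node features (n, d), A: adjacency matrix (n, n), W: weights (d, d_out).
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees for mean aggregation
    messages = (A_hat / deg) @ H            # average over observed neighbors
    return np.maximum(messages @ W, 0.0)    # linear transform + ReLU
```

The assumption the entry points out is baked into the aggregation step: it trusts the observed adjacency A as the ground-truth relationship structure.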
no code implementations • 15 Nov 2022 • Qian Li, JianXin Li, Lihong Wang, Cheng Ji, Yiming Hei, Jiawei Sheng, Qingyun Sun, Shan Xue, Pengtao Xie
To address the above issues, we propose a Multi-Channel graph neural network utilizing Type information for Event Detection in power systems, named MC-TED, leveraging a semantic channel and a topological channel to enrich information interaction from short texts.
1 code implementation • 2 Oct 2022 • Yuecen Wei, Xingcheng Fu, Qingyun Sun, Hao Peng, Jia Wu, Jinyan Wang, Xianxian Li
To address this issue, we propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP, which provides a double guarantee on graph features and topology.
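As background, the basic Laplace mechanism that graph differential-privacy methods build on looks like the sketch below; HeteDP's actual calibration of noise over features and topology is more involved, so this is illustration only.

```python
import numpy as np

def laplace_mechanism(values: np.ndarray, sensitivity: float, epsilon: float) -> np.ndarray:
    """Add Laplace noise with scale sensitivity/epsilon, the basic epsilon-DP building block."""
    scale = sensitivity / epsilon
    return values + np.random.laplace(loc=0.0, scale=scale, size=values.shape)
```

Smaller epsilon means stronger privacy and larger noise; the double guarantee mentioned above applies this kind of randomization to both node features and graph topology.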
1 code implementation • 17 Aug 2022 • Qingyun Sun, JianXin Li, Haonan Yuan, Xingcheng Fu, Hao Peng, Cheng Ji, Qian Li, Philip S. Yu
Topology-imbalance is a graph-specific imbalance problem caused by the uneven topology positions of labeled nodes, which significantly damages the performance of GNNs.
2 code implementations • 9 Aug 2022 • Ruitong Zhang, Hao Peng, Yingtong Dou, Jia Wu, Qingyun Sun, Jingyi Zhang, Philip S. Yu
DBSCAN is widely used in many scientific and engineering fields because of its simplicity and practicality.
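For reference, a minimal example of off-the-shelf DBSCAN usage (scikit-learn, with hypothetical toy data, unrelated to the paper's code):

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(200, 2)                              # toy 2-D points
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(X)  # one cluster id per point
# points labeled -1 are treated as noise rather than forced into a cluster
```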
1 code implementation • 3 Mar 2022 • JianXin Li, Xingcheng Fu, Qingyun Sun, Cheng Ji, Jiajun Tan, Jia Wu, Hao Peng
In this paper, we propose a novel Curvature Graph Generative Adversarial Network, the first GAN-based graph representation method on Riemannian geometric manifolds.
1 code implementation • 16 Dec 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, Philip S. Yu
Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications.
1 code implementation • 15 Oct 2021 • Xingcheng Fu, JianXin Li, Jia Wu, Qingyun Sun, Cheng Ji, Senzhang Wang, Jiajun Tan, Hao Peng, Philip S. Yu
Hyperbolic Graph Neural Networks (HGNNs) extend GNNs to hyperbolic space and are thus more effective at capturing the hierarchical structures of graphs in node representation learning.
no code implementations • 13 Jun 2021 • Qingyun Sun, David Donoho
To bridge the gulf between reported successes and theory's limited understanding, we exhibit a convex optimization problem that -- assuming signal sparsity -- can convert a crude approximation to the true filter into a high-accuracy recovery of the true filter.
1 code implementation • 22 May 2021 • JianXin Li, Xingcheng Fu, Hao Peng, Senzhang Wang, Shijie Zhu, Qingyun Sun, Philip S. Yu, Lifang He
With the prevalence of graph data in real-world applications, many methods have been proposed in recent years to learn high-quality graph embedding vectors for various types of graphs.
no code implementations • 12 Apr 2021 • Kenji Kawaguchi, Qingyun Sun
Existing global convergence guarantees of (stochastic) gradient descent do not apply to practical deep networks in the practical regime of deep learning beyond the neural tangent kernel (NTK) regime.
1 code implementation • 20 Jan 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Yuanxing Ning, Philip S. Yu, Lifang He
Graph representation learning has attracted increasing research attention.
1 code implementation • 18 Sep 2020 • Lin Shao, Yifan You, Mengyuan Yan, Qingyun Sun, Jeannette Bohg
One dominant component of recent deep reinforcement learning algorithms is the target network, which mitigates divergence when learning the Q function.
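A minimal sketch of the target-network idea mentioned here, in the common soft-update (Polyak averaging) form; this is a generic illustration, not the paper's algorithm.

```python
def soft_update(target_params, online_params, tau=0.005):
    """Move the target Q-network slowly toward the online network:
    theta_target <- tau * theta_online + (1 - tau) * theta_target.
    """
    return [tau * p + (1.0 - tau) * t for p, t in zip(online_params, target_params)]
```

Because the target used in the Bellman backup changes slowly, bootstrapped Q-learning updates are less prone to divergence.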
no code implementations • 30 Aug 2020 • Qingyun Sun, Hao Peng, Jian-Xin Li, Senzhang Wang, Xiangyu Dong, Liangxuan Zhao, Philip S. Yu, Lifang He
Although these attributes may change, an author's co-authors and research topics do not change frequently with time, which means that papers within a period have similar text and relation information in the academic network.
no code implementations • 14 May 2020 • Mengyuan Yan, Qingyun Sun, Iuri Frosio, Stephen Tyree, Jan Kautz
Combining the control policy learned from simulation with the perception model, we achieve an impressive 88% success rate in grasping a tiny sphere with a real robot.
no code implementations • 7 Mar 2020 • Xiang Zhou, Huizhuo Yuan, Chris Junchi Li, Qingyun Sun
In this work, we put different variants of stochastic ADMM into a unified form, which includes standard, linearized and gradient-based ADMM with relaxation, and study their dynamics via a continuous-time model approach.
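For orientation, the standard ADMM iteration that these variants modify, for $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, is $x^{k+1} = \arg\min_x L_\rho(x, z^k, y^k)$, $z^{k+1} = \arg\min_z L_\rho(x^{k+1}, z, y^k)$, $y^{k+1} = y^k + \rho\,(Ax^{k+1} + Bz^{k+1} - c)$, where $L_\rho$ is the augmented Lagrangian; the linearized and gradient-based variants replace the exact minimizations with cheaper proximal or gradient steps. This is the textbook scheme, not the unified form derived in the paper.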
no code implementations • 10 Jun 2019 • Morteza Mardani, Qingyun Sun, Vardan Papyan, Shreyas Vasanawala, John Pauly, David Donoho
Leveraging Stein's Unbiased Risk Estimator (SURE), this paper analyzes the generalization risk, with its bias and variance components, for recurrent unrolled networks.
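As a reminder, for a denoiser $f$ applied to $y = x + \epsilon$ with $\epsilon \sim \mathcal{N}(0, \sigma^2 I_n)$, Stein's Unbiased Risk Estimator of the mean-squared risk is $\mathrm{SURE}(f) = -n\sigma^2 + \lVert y - f(y)\rVert_2^2 + 2\sigma^2\, \nabla_y \cdot f(y)$, where $\nabla_y \cdot f(y)$ is the divergence of the network output with respect to its input. This is the standard identity, stated here for context rather than as the paper's derivation.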
1 code implementation • NeurIPS 2018 • Morteza Mardani, Qingyun Sun, Shreyas Vasanawala, Vardan Papyan, Hatef Monajemi, John Pauly, David Donoho
Recovering high-resolution images from limited sensory data typically leads to a serious ill-posed inverse problem, demanding inversion algorithms that effectively capture the prior information.
3 code implementations • CVPR 2018 • Wangpeng An, Haoqian Wang, Qingyun Sun, Jun Xu, Qionghai Dai, Lei Zhang
We first reveal the intrinsic connections between SGD-Momentum and the PID controller, then present an optimization algorithm that exploits the past, current, and change of gradients to update the network parameters.
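For intuition, the discrete PID control law the entry alludes to is $u_t = K_p\, e_t + K_i \sum_{i \le t} e_i + K_d\,(e_t - e_{t-1})$, with the gradient playing the role of the error signal $e_t$: plain SGD acts like the proportional term, the momentum buffer accumulates past gradients like the integral term, and the derivative term reacts to the change of the gradient. This is the textbook controller, not the paper's exact update rule.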
no code implementations • ICML 2018 • Qingyun Sun, Mengyuan Yan, David Donoho, Stephen Boyd
A matrix network is a family of matrices, with relatedness modeled by a weighted graph.
no code implementations • 10 Mar 2016 • Sam Ganzfried, Qingyun Sun
The natural setting for opponent exploitation is the Bayesian setting where we have a prior model that is integrated with observations to create a posterior opponent model that we respond to.