1 code implementation • 8 Feb 2024 • Meng-Chieh Lee, Lingxiao Zhao, Leman Akoglu
In this paper, we first revisit the random walk kernel (RWK) and its current usage in kernel convolution networks (KCNs), revealing several shortcomings of the existing designs, and propose an improved graph kernel, RWK+, by introducing color-matching random walks and deriving its efficient computation.
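For context, a classic random walk kernel counts common walks between two graphs via their direct product graph; "color matching" restricts walks to node pairs whose labels (colors) agree. The sketch below shows that generic label-matching RWK, not the paper's RWK+ or its efficient computation; the function name and the geometric-series formulation `1^T (I - λW)^{-1} 1` are standard textbook choices, assumed here for illustration.

```python
import numpy as np

def walk_kernel(A1, labels1, A2, labels2, lam=0.1):
    """Generic label-matching random walk kernel (illustrative sketch only;
    NOT the paper's RWK+). Builds the direct product graph restricted to
    label-matching node pairs, then sums walks of all lengths via the
    geometric series k = 1^T (I - lam*W)^{-1} 1."""
    n1, n2 = len(labels1), len(labels2)
    # Direct product adjacency: walks in W correspond to simultaneous
    # walks in both input graphs.
    W = np.kron(A1, A2)
    # Keep only product nodes (i, j) whose labels match ("color matching").
    match = np.array([labels1[i] == labels2[j]
                      for i in range(n1) for j in range(n2)], dtype=float)
    W = W * np.outer(match, match)
    start_stop = match  # walks start and stop only at matching pairs
    # Requires lam * spectral_radius(W) < 1 for the series to converge.
    x = np.linalg.solve(np.eye(n1 * n2) - lam * W, start_stop)
    return start_stop @ x
```

The `O((n1*n2)^3)` solve is exactly the naive cost the paper's efficient computation is designed to avoid.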
1 code implementation • 6 Feb 2024 • Lingxiao Zhao, Xueying Ding, Leman Akoglu
Current graph diffusion models generate graphs in a one-shot fashion, but they require extra features and thousands of denoising steps to achieve optimal performance.
2 code implementations • 6 Feb 2024 • Lingxiao Zhao, Xueying Ding, Lijun Yu, Leman Akoglu
Discrete diffusion models have seen a surge of attention with applications on naturally discrete data such as language and graphs.
1 code implementation • 13 Nov 2023 • Konstantinos Sotiropoulos, Lingxiao Zhao, Pierre Jinghong Liang, Leman Akoglu
Given a complex graph database of node- and edge-attributed multi-graphs as well as associated metadata for each graph, how can we spot the anomalous instances?
1 code implementation • 13 Jul 2023 • Jaemin Yoo, Yue Zhao, Lingxiao Zhao, Leman Akoglu
DSV captures the alignment between an augmentation function and the anomaly-generating mechanism with surrogate losses, which approximate the discordance and separability of test data, respectively.
no code implementations • 21 Jun 2023 • Jaemin Yoo, Lingxiao Zhao, Leman Akoglu
The first is a new unsupervised validation loss that quantifies the alignment between the augmented training data and the (unlabeled) test data.
1 code implementation • 18 Oct 2022 • Lingxiao Zhao, Louis Härtel, Neil Shah, Leman Akoglu
Our model is practical and progressively-expressive, increasing in power with k and c. We demonstrate effectiveness on several benchmark datasets, achieving several state-of-the-art results with runtime and memory usage applicable to practical graphs.
1 code implementation • 18 Oct 2022 • Lingxiao Zhao, Saurabh Sawlani, Arvind Srinivasan, Leman Akoglu
This work aims to fill two gaps in the literature: we (1) design GLAM, an end-to-end graph-level anomaly detection model based on GNNs, and (2) focus on unsupervised model selection, which is notoriously hard due to the lack of labels, yet especially critical for deep NN-based models with long lists of hyper-parameters.
1 code implementation • 15 Jun 2022 • Xueying Ding, Lingxiao Zhao, Leman Akoglu
Outlier detection (OD) literature exhibits numerous algorithms as it applies to diverse domains.
2 code implementations • 25 Feb 2022 • Derek Lim, Joshua Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, Stefanie Jegelka
We introduce SignNet and BasisNet -- new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if $v$ is an eigenvector then so is $-v$; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors.
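The sign-invariance idea admits a compact construction: apply a network $\phi$ to both $v$ and $-v$ and combine symmetrically, so the output cannot depend on the arbitrary sign. The sketch below uses toy stand-in functions for $\phi$ and $\rho$ (in SignNet these are neural networks); the names are assumptions for illustration.

```python
import numpy as np

def sign_invariant(v, phi, rho):
    """Sign-invariant eigenvector encoder in the style of SignNet (sketch).
    Since v and -v are equally valid eigenvectors, summing phi(v) + phi(-v)
    before the final map rho guarantees f(v) == f(-v) by construction."""
    return rho(phi(v) + phi(-v))

# Toy stand-ins for the paper's learned networks (assumptions, not SignNet's):
phi = lambda x: np.tanh(3.0 * x + 1.0)  # any elementwise map
rho = lambda h: float(h.sum())          # any downstream readout
```

The same symmetrization trick extends to basis symmetries, which BasisNet handles for higher-dimensional eigenspaces.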
Ranked #11 on Graph Regression on ZINC-500k
2 code implementations • ICLR 2022 • Wei Jin, Lingxiao Zhao, Shichang Zhang, Yozen Liu, Jiliang Tang, Neil Shah
Given the prevalence of large-scale graphs in real-world applications, the storage and time for training neural models have raised increasing concerns.
1 code implementation • 11 Oct 2021 • Saurabh Sawlani, Lingxiao Zhao, Leman Akoglu
We propose A-DOGE, for Attributed DOS-based Graph Embedding, based on the density of states (DOS, a.k.a. spectral density).
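The density of states of a graph is the distribution of its (here, normalized-Laplacian) eigenvalues. A-DOGE relies on fast approximations to avoid eigendecomposition; the sketch below computes the exact spectrum and histograms it, which is only feasible for small graphs and is an illustrative assumption, not the paper's method.

```python
import numpy as np

def spectral_density(A, bins=20):
    """Illustrative density of states: histogram of the normalized-Laplacian
    spectrum (which lies in [0, 2]). Exact eigendecomposition shown here;
    A-DOGE instead uses scalable DOS approximations."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt  # normalized Laplacian
    evals = np.linalg.eigvalsh(L)
    evals = np.clip(evals, 0.0, 2.0)  # guard tiny numerical drift outside [0, 2]
    hist, _ = np.histogram(evals, bins=bins, range=(0.0, 2.0))
    return hist / len(evals)  # normalize to a probability mass over bins
```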
2 code implementations • ICLR 2022 • Lingxiao Zhao, Wei Jin, Leman Akoglu, Neil Shah
We choose the subgraph encoder to be a GNN (mainly MPNNs, for scalability) to design a general framework that serves as a wrapper to uplift any GNN.
Ranked #16 on Graph Property Prediction on ogbg-molpcba
no code implementations • 29 Sep 2021 • Lingxiao Zhao, Leman Akoglu
Based on this connection, the GCN architecture, shaped by stacking graph convolution layers, shares a close relationship with stacking GPCA.
1 code implementation • 23 Dec 2020 • Lingxiao Zhao, Leman Akoglu
We carefully study the graph embedding space produced by propagation-based models and find two driving factors: (1) disparity between within-class densities, which is amplified by propagation, and (2) overlapping support (mixing of embeddings) across classes.
no code implementations • 22 Jun 2020 • Lingxiao Zhao, Leman Akoglu
Based on this connection, the GCN architecture, shaped by stacking graph convolution layers, shares a close relationship with stacking GPCA.
4 code implementations • NeurIPS 2020 • Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, Danai Koutra
We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features.
no code implementations • 1 Jun 2020 • Siheng Chen, Yonina C. Eldar, Lingxiao Zhao
We unroll an iterative denoising algorithm by mapping each iteration into a single network layer where the feed-forward process is equivalent to iteratively denoising graph signals.
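A minimal sketch of the unrolling idea: one iteration of a graph-signal denoiser (a gradient step on a smoothness objective) becomes one "layer," with a per-layer step size standing in for a learned parameter. The specific smoothing update and the function name are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def unrolled_denoiser(x_noisy, L, alphas):
    """Algorithm-unrolling sketch (illustrative, not the paper's network):
    each loop body == one network layer == one denoising iteration,
    x <- x - alpha * L x, a gradient step on the smoothness term x^T L x.
    L is the graph Laplacian; alphas plays the role of learned per-layer
    step sizes."""
    x = np.asarray(x_noisy, dtype=float).copy()
    for a in alphas:          # one iteration per "layer"
        x = x - a * (L @ x)   # smoothing gradient step
    return x
```

Training would then fit the per-layer parameters end-to-end instead of hand-tuning the iterative algorithm.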
1 code implementation • ICLR 2020 • Lingxiao Zhao, Leman Akoglu
The performance of graph neural nets (GNNs) is known to gradually decrease with an increasing number of layers.
1 code implementation • 26 Sep 2019 • Xuan Wu, Lingxiao Zhao, Leman Akoglu
As such, PG-learn is a carefully-designed hybrid of random and adaptive search.