Graph Property Prediction

31 papers with code • 4 benchmarks • 2 datasets

Graph property prediction is the task of predicting properties of entire graphs (for example, the chemical properties of a molecular graph), as opposed to node-level or edge-level prediction.

Most implemented papers

How Attentive are Graph Attention Networks?

tech-srl/how_attentive_are_gats ICLR 2022

Because GATs use a static attention mechanism, there are simple graph problems that GAT cannot express: in a controlled problem, we show that static attention hinders GAT from even fitting the training data.
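The distinction is easiest to see in code. Below is a minimal plain-PyTorch sketch (tensor names are illustrative, not the repo's API) contrasting the two scoring functions: GAT applies the attention vector `a` before the nonlinearity, GATv2 after, which is what makes GATv2's attention query-dependent.

```python
import torch
import torch.nn.functional as F

d_in, d_out = 16, 8
W = torch.randn(d_out, d_in)   # shared linear transform (bias omitted)
a = torch.randn(2 * d_out)     # attention scoring vector

h_i = torch.randn(d_in)        # query-node features
h_j = torch.randn(d_in)        # key-node features

# GAT (static): `a` is applied *before* the LeakyReLU, so the ranking of
# keys j is the same for every query i.
e_gat = F.leaky_relu(a @ torch.cat([W @ h_i, W @ h_j]), negative_slope=0.2)

# GATv2 (dynamic): `a` is applied *after* the LeakyReLU, so the ranking can
# depend on the query. (Applying W per node, as here, corresponds to the
# block-diagonal simplification the paper also discusses.)
e_gatv2 = a @ F.leaky_relu(torch.cat([W @ h_i, W @ h_j]), negative_slope=0.2)
```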

Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification

Optimization-AI/LibAUC ICCV 2021

Our studies demonstrate that the proposed DAM method improves on optimizing the cross-entropy loss by a large margin, and also outperforms the existing AUC square loss on these medical image classification tasks.
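For intuition, here is a minimal plain-PyTorch sketch of the min-max AUC margin surrogate described in the paper; it is not LibAUC's own API, and the scalars `a`, `b`, `alpha` are the auxiliary variables that the library's dedicated stochastic min-max optimizer would update alongside the model.

```python
import torch

def auc_margin_loss(scores, labels, a, b, alpha, margin=1.0):
    """AUC margin surrogate: minimized over the model, a, b; maximized over alpha.

    scores: model outputs in [0, 1]; labels: 0/1 tensor; a, b, alpha: scalars.
    """
    pos, neg = scores[labels == 1], scores[labels == 0]
    return ((pos - a) ** 2).mean() \
         + ((neg - b) ** 2).mean() \
         + 2 * alpha * (margin + neg.mean() - pos.mean()) \
         - alpha ** 2
```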

Do Transformers Really Perform Bad for Graph Representation?

Microsoft/Graphormer 9 Jun 2021

Our key insight for utilizing Transformers on graphs is the necessity of effectively encoding a graph's structural information into the model.
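A minimal sketch of that insight (hypothetical module and tensor names, not Graphormer's actual code): node degrees enter as a learned centrality encoding added to the features, and pairwise shortest-path distances become a learned bias on the attention logits.

```python
import torch
import torch.nn as nn

class StructuralEncoding(nn.Module):
    """Hypothetical module adding Graphormer-style structural information."""

    def __init__(self, hidden_dim, max_degree=64, max_dist=32):
        super().__init__()
        self.degree_emb = nn.Embedding(max_degree, hidden_dim)  # centrality encoding
        self.dist_bias = nn.Embedding(max_dist, 1)              # spatial encoding

    def forward(self, x, degree, spd):
        # x: (n, d) node features; degree: (n,) int; spd: (n, n) int hop counts
        x = x + self.degree_emb(degree)              # inject node centrality
        attn_bias = self.dist_bias(spd).squeeze(-1)  # (n, n) added to attention logits
        return x, attn_bias
```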

DeeperGCN: All You Need to Train Deeper GCNs

dmlc/dgl 13 Jun 2020

Graph Convolutional Networks (GCNs) have been drawing significant attention owing to their power of representation learning on graphs.

Global Self-Attention as a Replacement for Graph Convolution

shamim-hussain/egt_pytorch 7 Aug 2021

The resultant framework - which we call Edge-augmented Graph Transformer (EGT) - can directly accept, process and output structural information of arbitrary form, which is important for effective learning on graph-structured data.
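A heavily simplified single-head sketch of that pattern, with illustrative names rather than the repository's API: dedicated edge channels bias the node-to-node attention logits and are in turn updated from them, so pairwise structural information is both read and written at every layer.

```python
import torch
import torch.nn as nn

class EdgeAugmentedAttention(nn.Module):
    """Illustrative single-head sketch of attention with edge channels."""

    def __init__(self, d):
        super().__init__()
        self.q = nn.Linear(d, d)
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)
        self.edge_in = nn.Linear(d, 1)   # edge channels -> logit bias
        self.edge_out = nn.Linear(1, d)  # logits -> edge-channel update

    def forward(self, x, e):
        # x: (n, d) node channels; e: (n, n, d) edge channels
        logits = self.q(x) @ self.k(x).T / x.shape[-1] ** 0.5
        logits = logits + self.edge_in(e).squeeze(-1)    # edges bias attention
        x_new = logits.softmax(dim=-1) @ self.v(x)       # node update
        e_new = e + self.edge_out(logits.unsqueeze(-1))  # edge update
        return x_new, e_new
```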

Recipe for a General, Powerful, Scalable Graph Transformer

rampasek/GraphGPS 25 May 2022

We propose a recipe for building a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks.
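The recipe's core layer runs a local message-passing block and a global attention block in parallel and combines their outputs. A minimal sketch under stated assumptions (a mean-aggregation stand-in for the MPNN, plain `nn.MultiheadAttention` for the global block; not GraphGPS's actual code):

```python
import torch
import torch.nn as nn

class GPSLayer(nn.Module):
    """Illustrative GPS-style layer; d must be divisible by num_heads."""

    def __init__(self, d, num_heads=4):
        super().__init__()
        self.local = nn.Linear(d, d)  # stand-in for a real MPNN update
        self.attn = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 2 * d), nn.ReLU(), nn.Linear(2 * d, d))
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)

    def forward(self, x, adj):
        # x: (1, n, d) node features; adj: (n, n) row-normalized adjacency
        h_local = self.local(adj @ x.squeeze(0)).unsqueeze(0)  # local neighbor pass
        h_global, _ = self.attn(x, x, x)                       # global all-pairs attention
        h = self.norm1(x + h_local + h_global)                 # combine both views
        return self.norm2(h + self.mlp(h))                     # feed-forward block
```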

Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers

shamim-hussain/tgt 7 Feb 2024

We also obtain SOTA results on QM9, MOLPCBA, and LIT-PCBA molecular property prediction benchmarks via transfer learning.

Nested Graph Neural Networks

muhanzhang/nestedgnn NeurIPS 2021

The key is to make each node representation encode the rooted subgraph around it, rather than just a rooted subtree.
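A minimal sketch of the idea using networkx, where `base_gnn` is a hypothetical stand-in for any base GNN that pools a subgraph into a vector:

```python
import networkx as nx

def nested_node_representations(G, height, base_gnn):
    """Encode each node by a pooled GNN embedding of its rooted subgraph."""
    reps = {}
    for v in G.nodes:
        sub = nx.ego_graph(G, v, radius=height)  # height-h subgraph rooted at v
        reps[v] = base_gnn(sub)                  # pool the whole subgraph, not a subtree
    return reps
```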

Global Concept Explanations for Graphs by Contrastive Learning

aimat-lab/megan_global_explanations 25 Apr 2024

Overall, our results show a promising capability to extract the underlying structure-property relationships for complex graph property prediction tasks.

Wasserstein Embedding for Graph Learning

navid-naderi/WEGL ICLR 2021

We present Wasserstein Embedding for Graph Learning (WEGL), a novel and fast framework for embedding entire graphs in a vector space, in which various machine learning models are applicable for graph-level prediction tasks.
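A minimal sketch of the WEGL pipeline under stated assumptions (parameter-free neighbor-averaging diffusion for node embeddings, the POT library's exact OT solver for the linear Wasserstein embedding; function names are illustrative):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def diffusion_embeddings(adj, feats, steps=2):
    """Parameter-free node embeddings via repeated neighbor averaging."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    z = feats
    for _ in range(steps):
        z = adj @ z / deg
    return z

def wasserstein_embedding(z, ref):
    """Linear Wasserstein embedding of node cloud `z` against reference `ref`.

    z: (m, d) node embeddings of one graph; ref: (n, d) fixed reference cloud.
    Returns a fixed-length vector, so standard classifiers apply downstream.
    """
    n, m = len(ref), len(z)
    plan = ot.emd(np.full(n, 1 / n), np.full(m, 1 / m), ot.dist(ref, z))
    return (n * plan @ z - ref).ravel()  # barycentric-projection displacements
```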