1 code implementation • 26 Jan 2024 • Dexiong Chen, Philip Hartout, Paolo Pellizzoni, Carlos Oliver, Karsten Borgwardt
Drawing on recent advances in graph transformers, our approach refines the self-attention mechanisms of pretrained language transformers by integrating structural information extracted by structure extractor modules.
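A minimal sketch of how structural information can modulate self-attention: here a per-pair structural bias (derived from shortest-path distances, a hypothetical stand-in for the learned structure extractor modules described above) is added to the raw attention scores before row-wise softmax. This illustrates the general idea only, not the paper's actual architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def structure_biased_attention(scores, dist, alpha=-0.5):
    """Add a structural bias (alpha * graph distance, an illustrative
    choice) to raw attention scores, then row-normalize with softmax.
    Nearby nodes receive larger attention weights when alpha < 0."""
    n = len(scores)
    out = []
    for i in range(n):
        biased = [scores[i][j] + alpha * dist[i][j] for j in range(n)]
        out.append(softmax(biased))
    return out

# Toy example: 3 tokens with uniform raw scores on a path graph;
# structurally closer tokens end up with higher attention weight.
scores = [[0.0] * 3 for _ in range(3)]
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
attn = structure_biased_attention(scores, dist)
```

With uniform raw scores, the output attention of token 0 decreases with graph distance, showing that the bias alone can encode structure.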
1 code implementation • 12 May 2023 • Dexiong Chen, Paolo Pellizzoni, Karsten Borgwardt
Attention-based graph neural networks (GNNs), such as graph attention networks (GATs), have become popular neural architectures for processing graph-structured data and learning node embeddings.
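The GAT attention mechanism mentioned above can be sketched in a few lines: each node scores its neighbors via e_ij = LeakyReLU(a · [W h_i || W h_j]) and normalizes the scores with a softmax over the neighborhood. This toy dense version omits multi-head attention and sparse batching used in practice.

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def gat_attention(h, W, a, neighbors, i):
    """Attention coefficients of node i over its neighbors, following
    the GAT scoring rule e_ij = LeakyReLU(a . [W h_i || W h_j]),
    normalized by a softmax over the neighborhood."""
    def matvec(M, v):
        return [sum(M[r][c] * v[c] for c in range(len(v)))
                for r in range(len(M))]
    whi = matvec(W, h[i])
    es = []
    for j in neighbors[i]:
        whj = matvec(W, h[j])
        cat = whi + whj  # concatenation [W h_i || W h_j]
        es.append(leaky_relu(sum(ak * ck for ak, ck in zip(a, cat))))
    m = max(es)
    exps = [math.exp(e - m) for e in es]
    s = sum(exps)
    return {j: ex / s for j, ex in zip(neighbors[i], exps)}

# Toy graph: node 0 attends over neighbors 1 and 2 (2-dim features,
# identity weight matrix, hand-picked attention vector).
h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
a = [1.0, 0.0, 1.0, 0.0]
alpha = gat_attention(h, W, a, {0: [1, 2]}, 0)
```

Neighbor 2's features align better with the attention vector here, so it receives the larger coefficient; the coefficients over the neighborhood sum to one by construction.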
1 code implementation • 7 Jan 2022 • Paolo Pellizzoni, Andrea Pietracaprina, Geppino Pucci
We provide efficient algorithms for this important variant in the streaming model under the sliding window setting, where, at each time step, the dataset to be clustered is the window $W$ of the most recent data items.
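As a point of reference for the sliding-window setting, here is a naive baseline: keep only the window of the most recent items and recluster it from scratch at every step (using Gonzalez's greedy 2-approximation for k-center as an illustrative clustering subroutine). The paper's contribution is efficient algorithms that avoid exactly this per-step recomputation; the sketch only fixes the problem setup.

```python
from collections import deque

def greedy_k_center(points, k):
    """Gonzalez's greedy 2-approximation for k-center on 1-D points:
    repeatedly add the point farthest from the current centers."""
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(abs(p - c) for c in centers))
        centers.append(far)
    return centers

def sliding_window_clusters(stream, window, k):
    """Naive baseline: at each time step, cluster the window of the
    `window` most recent items from scratch. Efficient sliding-window
    streaming algorithms target the same output without reclustering."""
    win = deque(maxlen=window)
    out = []
    for x in stream:
        win.append(x)
        out.append(greedy_k_center(list(win), min(k, len(win))))
    return out

# Toy stream: at the last step the window is [2.0, 11.0, 50.0].
results = sliding_window_clusters([1.0, 10.0, 2.0, 11.0, 50.0],
                                  window=3, k=2)
```

The final window [2.0, 11.0, 50.0] yields centers [2.0, 50.0]: the greedy rule picks the farthest remaining point, covering both extremes of the window.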