no code implementations • 7 Dec 2023 • Matthias Fey, Weihua Hu, Kexin Huang, Jan Eric Lenssen, Rishabh Ranjan, Joshua Robinson, Rex Ying, Jiaxuan You, Jure Leskovec
The core idea is to view relational databases as a temporal, heterogeneous graph, with a node for each row in each table, and edges specified by primary-foreign key links.
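A minimal pure-Python sketch of the row-to-node, foreign-key-to-edge construction the snippet describes; the `users`/`orders` tables and column names here are hypothetical illustrations, not from the paper:

```python
users = [  # table "users": primary key `user_id`
    {"user_id": 1, "name": "ada"},
    {"user_id": 2, "name": "bob"},
]
orders = [  # table "orders": foreign key `user_id` references users
    {"order_id": 10, "user_id": 1, "ts": "2021-03-01"},
    {"order_id": 11, "user_id": 2, "ts": "2021-04-05"},
    {"order_id": 12, "user_id": 1, "ts": "2021-05-09"},
]

def tables_to_graph(tables, fk_links):
    """Build a heterogeneous graph: one node per row, one edge per
    primary-foreign key link.

    tables:   {table_name: list of row dicts}
    fk_links: list of (child_table, fk_column, parent_table, pk_column)
    """
    nodes = {t: [(t, i) for i in range(len(rows))] for t, rows in tables.items()}
    edges = []
    for child, fk, parent, pk in fk_links:
        # index parent rows by primary key, then link each child row to its parent
        index = {row[pk]: i for i, row in enumerate(tables[parent])}
        for i, row in enumerate(tables[child]):
            edges.append(((child, i), (parent, index[row[fk]])))
    return nodes, edges

nodes, edges = tables_to_graph(
    {"users": users, "orders": orders},
    [("orders", "user_id", "users", "user_id")],
)
```

Timestamps on rows (here the `ts` column) are what make the resulting graph temporal rather than static.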
1 code implementation • NeurIPS 2023 • Derek Lim, Joshua Robinson, Stefanie Jegelka, Haggai Maron
In this work, we demonstrate the benefits of sign equivariance for these tasks.
no code implementations • 16 Nov 2023 • Ting-Rui Chiang, Xinyan Velocity Yu, Joshua Robinson, Ollie Liu, Isabelle Lee, Dani Yogatama
Augmenting a language model (LM) with $k$-nearest neighbors ($k$NN) retrieval on its training data alone can decrease its perplexity, though the underlying reasons for this remain elusive.
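The $k$NN-augmentation the snippet refers to can be sketched as interpolating the base LM's next-token distribution with a distribution over the tokens that follow the $k$ nearest stored contexts; the datastore format and the interpolation weight `lam` below are illustrative choices, not the paper's exact setup:

```python
import math

def knn_lm_probs(lm_probs, query, datastore, k=2, lam=0.25, temp=1.0):
    """Interpolate a base LM next-token distribution with a kNN
    distribution built from (key_vector, next_token) pairs."""
    def sqdist(key):
        return sum((q - x) ** 2 for q, x in zip(query, key))

    # retrieve the k nearest stored contexts
    neighbors = sorted(datastore, key=lambda kv: sqdist(kv[0]))[:k]
    # softmax over negative distances gives neighbor weights
    weights = [math.exp(-sqdist(key) / temp) for key, _ in neighbors]
    z = sum(weights)
    knn_probs = {}
    for (key, tok), w in zip(neighbors, weights):
        knn_probs[tok] = knn_probs.get(tok, 0.0) + w / z
    # mix: (1 - lam) * p_LM + lam * p_kNN
    return {tok: (1 - lam) * p + lam * knn_probs.get(tok, 0.0)
            for tok, p in lm_probs.items()}
```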
1 code implementation • 4 Oct 2023 • Yinan Huang, William Lu, Joshua Robinson, Yu Yang, Muhan Zhang, Stefanie Jegelka, Pan Li
Despite many attempts to address non-uniqueness, most methods overlook stability, leading to poor generalization on unseen graph structures.
Molecular Property Prediction • Out-of-Distribution Generalization +1
1 code implementation • 24 Jun 2023 • Sharut Gupta, Joshua Robinson, Derek Lim, Soledad Villar, Stefanie Jegelka
Specifically, in the contrastive learning setting, we introduce an equivariance objective and theoretically prove that its minima force augmentations of the input space to correspond to rotations of the spherical embedding space.
no code implementations • 5 Jan 2023 • Mahmoud E. Khani, Ethan M. I. Johnson, Aparna Sodhi, Joshua Robinson, Cynthia K. Rigsby, Bradly D. Allen, Michael Markl
We also investigated the ability of this deep learning technique to differentiate between patients diagnosed with aortic valve stenosis (AS), non-AS patients with a bicuspid aortic valve (BAV), non-AS patients with a mechanical aortic valve (MAV), and healthy subjects with a normal tricuspid aortic valve (TAV).
1 code implementation • 30 Oct 2022 • Shlok Mishra, Joshua Robinson, Huiwen Chang, David Jacobs, Aaron Sarna, Aaron Maschinot, Dilip Krishnan
Our framework is a minimal and conceptually clean synthesis of (C) contrastive learning, (A) masked autoencoders, and (N) the noise prediction approach used in diffusion models.
1 code implementation • 22 Oct 2022 • Joshua Robinson, Christopher Michael Rytting, David Wingate
A more natural prompting approach is to present the question and answer options to the LLM jointly and have it output the symbol (e.g., "A") associated with its chosen answer option.
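The joint question-plus-options presentation described above can be sketched as a simple prompt formatter; the exact template wording here is an illustrative assumption, not the paper's verbatim prompt:

```python
def format_mcp_prompt(question, options, symbols="ABCD"):
    """Multiple-choice prompting: show the question and all answer options
    together, then elicit only the symbol of the chosen option."""
    lines = [f"Question: {question}"]
    for sym, opt in zip(symbols, options):
        lines.append(f"{sym}. {opt}")
    lines.append("Answer:")
    return "\n".join(lines)
```

Scoring the model is then a comparison over a handful of single-symbol continuations ("A", "B", ...) rather than over full answer strings of differing lengths.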
1 code implementation • 8 Aug 2022 • Nikolaos Karalias, Joshua Robinson, Andreas Loukas, Stefanie Jegelka
Integrating functions on discrete domains into neural networks is key to developing their capability to reason about discrete objects.
Combinatorial Optimization • Vocal Bursts Intensity Prediction
no code implementations • ACL 2022 • Taylor Sorensen, Joshua Robinson, Christopher Michael Rytting, Alexander Glenn Shaw, Kyle Jeffrey Rogers, Alexia Pauline Delorey, Mahmoud Khalil, Nancy Fulda, David Wingate
Pre-trained language models derive substantial linguistic and factual knowledge from the massive corpora on which they are trained, and prompt engineering seeks to align these models to specific tasks.
2 code implementations • 25 Feb 2022 • Derek Lim, Joshua Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, Stefanie Jegelka
We introduce SignNet and BasisNet -- new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if $v$ is an eigenvector then so is $-v$; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors.
Ranked #10 on Graph Regression on ZINC-500k
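The sign invariance in (i) can be obtained by construction, by summing a network's outputs on $v$ and $-v$ before a readout: $f(v) = \rho(\phi(v) + \phi(-v))$ satisfies $f(v) = f(-v)$ for any choice of $\phi$ and $\rho$. A toy sketch with stand-in polynomial networks (the real SignNet uses learned neural networks):

```python
def phi(v):
    """Stand-in for a learned network applied to an eigenvector."""
    return [x * x + x for x in v]

def rho(h):
    """Stand-in for the learned readout network."""
    return sum(x * x for x in h)

def sign_invariant(v):
    # f(v) = rho(phi(v) + phi(-v)) is unchanged under v -> -v by construction:
    # swapping v and -v only swaps the two summands.
    neg = [-x for x in v]
    return rho([a + b for a, b in zip(phi(v), phi(neg))])
```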
1 code implementation • NeurIPS 2021 • Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, Suvrit Sra
However, we observe that the contrastive loss does not always sufficiently guide which features are extracted, a behavior that can negatively impact performance on downstream tasks via "shortcuts", i.e., by inadvertently suppressing important predictive features.
1 code implementation • 30 May 2021 • Niharika Shimona D'Souza, Mary Beth Nebel, Deana Crocetti, Nicholas Wymbs, Joshua Robinson, Stewart Mostofsky, Archana Venkataraman
We propose a novel matrix autoencoder to map functional connectomes from resting state fMRI (rs-fMRI) to structural connectomes from Diffusion Tensor Imaging (DTI), as guided by subject-level phenotypic measures.
1 code implementation • ICLR 2021 • Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, Stefanie Jegelka
How can you sample good negative examples for contrastive learning?
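One answer the hard-negatives line of work gives is to upweight candidates that are similar to the anchor. A minimal sketch of that principle, using an exponential reweighting by dot-product similarity; the function name and the simple similarity measure are illustrative assumptions, not the paper's exact estimator:

```python
import math
import random

def sample_hard_negatives(anchor, candidates, beta=1.0, n=2, seed=0):
    """Hardness-weighted negative sampling: candidates more similar to the
    anchor (higher dot product) are sampled with higher probability."""
    sims = [sum(a * c for a, c in zip(anchor, cand)) for cand in candidates]
    weights = [math.exp(beta * s) for s in sims]  # exponential tilting by hardness
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    return rng.choices(range(len(candidates)), weights=weights, k=n)
```

With a large similarity gap, sampling concentrates almost entirely on the hardest candidate.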
no code implementations • 27 Aug 2020 • Niharika Shimona D'Souza, Mary Beth Nebel, Deana Crocetti, Nicholas Wymbs, Joshua Robinson, Stewart H. Mostofsky, Archana Venkataraman
The generative component is a structurally-regularized Dynamic Dictionary Learning (sr-DDL) model that decomposes the dynamic rs-fMRI correlation matrices into a collection of shared basis networks and time varying subject-specific loadings.
1 code implementation • 3 Jul 2020 • Niharika Shimona D'Souza, Mary Beth Nebel, Deana Crocetti, Nicholas Wymbs, Joshua Robinson, Stewart Mostofsky, Archana Venkataraman
The generative part of our framework is a structurally-regularized Dynamic Dictionary Learning (sr-DDL) model that decomposes the dynamic rs-fMRI correlation matrices into a collection of shared basis networks and time varying patient-specific loadings.
1 code implementation • NeurIPS 2020 • Ching-Yao Chuang, Joshua Robinson, Lin Yen-Chen, Antonio Torralba, Stefanie Jegelka
A prominent technique for self-supervised representation learning has been to contrast semantically similar and dissimilar pairs of samples.
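The contrast between similar and dissimilar pairs is typically realized with an InfoNCE-style objective: pull the positive pair together, push negatives away. A minimal single-anchor sketch (cosine similarity and the temperature value are generic choices, not specific to this paper):

```python
import math

def info_nce(anchor, positive, negatives, temp=0.5):
    """InfoNCE-style contrastive loss for one anchor: low when the anchor is
    close to its positive and far from all negatives."""
    def cos(u, v):
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        return sum(a * b for a, b in zip(u, v)) / (nu * nv)

    pos = math.exp(cos(anchor, positive) / temp)
    negs = sum(math.exp(cos(anchor, neg) / temp) for neg in negatives)
    return -math.log(pos / (pos + negs))
```

Debiasing, in this setting, amounts to correcting the `negs` term for the chance that a sampled "negative" actually shares the anchor's latent class.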
no code implementations • ICML 2020 • Joshua Robinson, Stefanie Jegelka, Suvrit Sra
Our theoretical results are reflected empirically across a range of tasks and illustrate how weak labels speed up learning on the strong task.
no code implementations • 25 Sep 2019 • Hongzhou Lin, Joshua Robinson, Stefanie Jegelka
We propose a technique termed perceptual regularization that enables both visualization of the latent representation and control over the generality of the learned representation.
1 code implementation • NeurIPS 2019 • Joshua Robinson, Suvrit Sra, Stefanie Jegelka
We propose SLC as the right extension of SR that enables easier, more intuitive control over diversity, illustrating this via examples of practical importance.