no code implementations • 24 Apr 2024 • Weizhi Zhang, Liangwei Yang, Zihe Song, Henry Peng Zou, Ke Xu, Yuanjie Zhu, Philip S. Yu
Graph contrastive learning aims to learn from high-order collaborative filtering signals with unsupervised augmentation on the user-item bipartite graph; it predominantly relies on a multi-task learning framework that combines a pair-wise recommendation loss with a contrastive loss.
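The multi-task objective described above can be sketched as a weighted sum of a pair-wise (BPR-style) recommendation loss and an InfoNCE-style contrastive loss. This is a minimal illustration, not the paper's implementation; the function names, scores, and the weight `lam` are illustrative assumptions.

```python
import math

def bpr_loss(pos_score, neg_score):
    # Pair-wise recommendation loss: push the positive item's score
    # above the sampled negative item's score.
    return -math.log(1.0 / (1.0 + math.exp(-(pos_score - neg_score))))

def infonce_loss(sim_pos, sims_all, tau=0.2):
    # Contrastive loss: positive-pair similarity against all candidates,
    # with temperature tau controlling the sharpness of the softmax.
    num = math.exp(sim_pos / tau)
    den = sum(math.exp(s / tau) for s in sims_all)
    return -math.log(num / den)

def multitask_loss(pos_score, neg_score, sim_pos, sims_all, lam=0.1):
    # Joint objective: recommendation loss plus a weighted contrastive term.
    return bpr_loss(pos_score, neg_score) + lam * infonce_loss(sim_pos, sims_all)

print(multitask_loss(2.0, 0.5, 0.9, [0.9, 0.1, -0.3]))
```

In practice the two terms are computed over mini-batches of user/item embeddings; the scalar version here only shows how the losses are combined.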
1 code implementation • 24 Apr 2024 • Henry Peng Zou, Vinay Samuel, Yue Zhou, Weizhi Zhang, Liancheng Fang, Zihe Song, Philip S. Yu, Cornelia Caragea
To address these limitations, we present ImplicitAVE, the first publicly available multimodal dataset for implicit attribute value extraction.
no code implementations • 11 Jan 2024 • Liangwei Yang, Hengrui Zhang, Zihe Song, Jiawei Zhang, Weizhi Zhang, Jing Ma, Philip S. Yu
This paper answers a fundamental question in artificial neural network (ANN) design: ANNs need not be built layer-by-layer sequentially to guarantee the Directed Acyclic Graph (DAG) property.
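The DAG property mentioned above can be verified directly on an arbitrarily wired computation graph, without any notion of sequential layers. A minimal sketch (not the paper's construction) using Kahn's topological-sort check:

```python
from collections import deque

def is_dag(num_nodes, edges):
    # Kahn's algorithm: a graph is a DAG iff every node can be
    # placed in a topological order.
    indeg = [0] * num_nodes
    adj = [[] for _ in range(num_nodes)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(num_nodes) if indeg[i] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == num_nodes

# Non-layered wiring: any edge set with u < v is acyclic by construction,
# so connections can skip "layers" freely.
edges = [(0, 3), (0, 2), (1, 4), (2, 4), (3, 4)]
print(is_dag(5, edges))  # True: a valid computation graph without sequential layers
```

The example edge set is hypothetical; the point is only that acyclicity is a property of the wiring, not of a layered build order.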
no code implementations • 7 Oct 2022 • Simin Chen, Cong Liu, Mirazul Haque, Zihe Song, Wei Yang
Neural Machine Translation (NMT) systems have received much recent attention due to their human-level accuracy.
1 code implementation • CVPR 2022 • Simin Chen, Zihe Song, Mirazul Haque, Cong Liu, Wei Yang
To further understand such efficiency-oriented threats, we propose a new attack approach, NICGSlowDown, to evaluate the efficiency robustness of NICG models.
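Efficiency-oriented attacks of this kind exploit the fact that autoregressive decoding cost grows with the number of steps taken before the end-of-sequence (EOS) token. The toy decoder below (purely illustrative; the two "models" are hypothetical stand-ins, not NICGSlowDown itself) shows how suppressing EOS inflates decoding length and hence latency:

```python
def greedy_decode(next_token_fn, max_len=50, eos=0):
    # Autoregressive decoding: cost scales with the number of
    # iterations executed before EOS is emitted.
    out = []
    for _ in range(max_len):
        tok = next_token_fn(out)
        if tok == eos:
            break
        out.append(tok)
    return out

# Hypothetical models: the "attacked" one rarely emits EOS, inflating cost.
normal = lambda out: 0 if len(out) >= 5 else 1      # EOS after 5 tokens
attacked = lambda out: 0 if len(out) >= 45 else 1   # EOS suppressed

print(len(greedy_decode(normal)), len(greedy_decode(attacked)))  # 5 45
```

The ratio of decoded lengths (here 9x) is a simple proxy for the extra decoder computation an efficiency attack can induce.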
no code implementations • 29 Sep 2021 • Simin Chen, Mirazul Haque, Zihe Song, Cong Liu, Wei Yang
To further the understanding of such efficiency-oriented threats and raise the community's awareness of the efficiency robustness of NMT systems, we propose a new attack approach, TranSlowDown, to test the efficiency robustness of NMT systems.
no code implementations • 1 Jan 2021 • Simin Chen, Zihe Song, Lei Ma, Cong Liu, Wei Yang
We first theoretically clarify under which condition AttackDist can provide certified detection performance, then show that a potential application of AttackDist is distinguishing zero-day adversarial examples without knowing the mechanisms of new attacks.
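The intuition behind distance-based detection can be illustrated with a linear classifier, where the minimal perturbation needed to flip the decision has a closed form. This is a hedged analogue, not the paper's AttackDist procedure; the weights, inputs, and threshold below are all hypothetical:

```python
import math

def attack_distance(w, b, x):
    # Minimal L2 perturbation that flips a linear classifier's decision:
    # |w.x + b| / ||w||  -- a closed-form stand-in for running an attack.
    margin = abs(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return margin / math.sqrt(sum(wi * wi for wi in w))

def is_adversarial(w, b, x, threshold=0.5):
    # Inputs already pushed near the boundary need little extra
    # perturbation, so a small attack distance flags them as suspicious.
    return attack_distance(w, b, x) < threshold

w, b = [1.0, -1.0], 0.0
clean = [2.0, -1.0]    # far from the decision boundary
suspect = [0.2, 0.0]   # sits just across the boundary

print(is_adversarial(w, b, clean), is_adversarial(w, b, suspect))  # False True
```

Because the detector only measures a perturbation distance, it needs no knowledge of how a particular (possibly zero-day) attack generated the input.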