no code implementations • 21 Apr 2024 • Yuxuan Zhu, Jiachen Liu, Mosharaf Chowdhury, Fan Lai
Federated learning (FL) aims to train machine learning (ML) models across potentially millions of edge client devices.
no code implementations • 13 Feb 2024 • Xuexin Chen, Ruichu Cai, Zhengting Huang, Yuxuan Zhu, Julien Horwood, Zhifeng Hao, Zijian Li, Jose Miguel Hernandez-Lobato
We investigate the problem of explainability for machine learning models, focusing on Feature Attribution Methods (FAMs) that evaluate feature importance through perturbation tests.
1 code implementation • 21 Dec 2023 • Ruichu Cai, Yuxuan Zhu, Jie Qiao, Zefeng Liang, Furui Liu, Zhifeng Hao
By considering the underappreciated causal generating process, we first pinpoint the source of DNNs' vulnerability through the lens of causality, and then give theoretical results that answer \emph{where to attack}.
no code implementations • 14 Dec 2022 • Ruichu Cai, Yuxuan Zhu, Xuexin Chen, Yuan Fang, Min Wu, Jie Qiao, Zhifeng Hao
To address the non-identifiability of the probability of necessity and sufficiency (PNS), we resort to a lower bound of PNS that can be optimized via counterfactual estimation, and propose a framework of Necessary and Sufficient Explanation for GNN (NSEG) that optimizes this lower bound.
no code implementations • 13 Oct 2022 • Zhong Li, Yuxuan Zhu, Matthijs van Leeuwen
In the past two decades, most research on anomaly detection has focused on improving detection accuracy while largely ignoring the explainability of the corresponding methods, thus leaving the explanation of outcomes to practitioners.