1 code implementation • 13 Mar 2024 • Xiaojun Xu, Yuanshun Yao, Yang Liu
While prior works focus on token-level watermarks that embed signals into the output, we design a model-level watermark that embeds signals into the LLM weights; such signals can be detected by a paired detector.
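The idea of pairing a weight-level signal with a detector can be sketched with a toy scheme (this is an illustration only, not the paper's actual method): nudge the flattened weights toward a secret key direction, then detect by measuring correlation with that key.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret key: a random unit direction known only to the model owner.
key = rng.standard_normal(1024)
key /= np.linalg.norm(key)

clean_weights = rng.standard_normal(1024)
# Hypothetical embedding step: blend a key-aligned component into the weights.
watermarked_weights = clean_weights + 0.5 * np.linalg.norm(clean_weights) * key

def detect(weights, key, threshold=0.1):
    # Cosine similarity between the (flattened) weights and the secret key;
    # a clean high-dimensional weight vector is nearly orthogonal to the key.
    score = np.dot(weights, key) / np.linalg.norm(weights)
    return score > threshold

print(detect(watermarked_weights, key))  # watermarked: high correlation
print(detect(clean_weights, key))        # clean: near-zero correlation
```

In practice the signal must be embedded during training so it survives fine-tuning; the toy blend above only conveys the pairing of embedder and detector.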
no code implementations • 13 Feb 2024 • Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, Peter Hase, Xiaojun Xu, Yuguang Yao, Hang Li, Kush R. Varshney, Mohit Bansal, Sanmi Koyejo, Yang Liu
We explore machine unlearning (MU) in the domain of large language models (LLMs), referred to as LLM unlearning.
no code implementations • 12 Feb 2024 • Dinuka Sahabandu, Xiaojun Xu, Arezoo Rajabi, Luyao Niu, Bhaskar Ramasubramanian, Bo Li, Radha Poovendran
We propose and analyze an adaptive adversary that can retrain a Trojaned DNN and is also aware of SOTA output-based Trojaned model detectors.
no code implementations • 29 Dec 2023 • Xiaohua Lu, Liangxu Xie, Lei Xu, Rongzhi Mao, Shan Chang, Xiaojun Xu
The advantage of the multimodal model lies in its ability to process diverse data sources with appropriate models and suitable fusion methods, which enhances the model's noise resistance while preserving data diversity.
no code implementations • 18 Oct 2023 • Qinbin Li, Chulin Xie, Xiaojun Xu, Xiaoyuan Liu, Ce Zhang, Bo Li, Bingsheng He, Dawn Song
To address this, we propose HybridTree, a novel federated learning approach that enables federated tree learning on hybrid data.
1 code implementation • 14 Oct 2023 • Yuanshun Yao, Xiaojun Xu, Yang Liu
To the best of our knowledge, our work is among the first to explore LLM unlearning.
no code implementations • 27 Dec 2022 • Xiaojun Xu, Yue Yu, Hanzhang Wang, Alok Lal, Carl A. Gunter, Bo Li
In this paper, we propose EDoG, a general adversarial edge detection pipeline based on graph generation that requires no knowledge of the attack strategy.
1 code implementation • 20 Oct 2022 • Xiaojun Xu, Linyi Li, Bo Li
On the other hand, as existing works show that semi-supervised training helps improve empirical robustness, we aim to bridge the gap and prove that semi-supervised learning also improves the certified robustness of Lipschitz-bounded models.
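Certified robustness for Lipschitz-bounded models rests on a computable bound on the network's Lipschitz constant. A minimal sketch of the standard spectral-norm product bound (toy weights, not from the paper):

```python
import numpy as np

def lipschitz_upper_bound(weight_matrices):
    """Product of the layers' spectral norms: a standard upper bound on the
    Lipschitz constant of a feed-forward net with 1-Lipschitz activations
    such as ReLU."""
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value
    return bound

# Toy 2-layer network.
W1 = np.array([[1.0, 0.0], [0.0, 2.0]])
W2 = np.array([[0.5, 0.5]])
print(lipschitz_upper_bound([W1, W2]))  # 2.0 * sqrt(0.5)
```

A bounded Lipschitz constant turns a prediction margin directly into a certified radius, which is why training procedures (including semi-supervised ones) that enlarge margins under a fixed bound improve certified robustness.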
1 code implementation • 21 Jul 2022 • Xiaoyuan Liu, Tianneng Shi, Chulin Xie, Qinbin Li, Kangping Hu, Haoyu Kim, Xiaojun Xu, The-Anh Vu-Le, Zhen Huang, Arash Nourian, Bo Li, Dawn Song
The platform streamlines the end-to-end workflow for distributed experimentation and deployment, encompassing 11 popular open-source FL frameworks.
no code implementations • 3 Feb 2022 • Xiaojun Xu, Jacky Yibo Zhang, Evelyn Ma, Danny Son, Oluwasanmi Koyejo, Bo Li
We propose a general theoretical framework proving that regularization of the model function class is a sufficient condition for relative domain transferability.
no code implementations • ICLR 2022 • Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, Bo Li
Thus, to explore the conditions that guarantee certifiably robust ensemble ML models, we first prove that diversified gradients and large confidence margins are necessary and sufficient conditions for certifiably robust ensemble models under the model-smoothness assumption.
1 code implementation • NeurIPS 2021 • Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Benjamin Rubinstein, Pan Zhou, Ce Zhang, Bo Li
To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness.
1 code implementation • 25 Feb 2021 • Huichen Li, Linyi Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li
We aim to bridge the gap between the two by investigating how to efficiently estimate the gradient in a projected low-dimensional space.
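The core idea can be sketched generically (names and parameterization are illustrative, not the paper's): estimate a black-box gradient by finite differences along random directions drawn from a low-dimensional subspace rather than the full input space, so far fewer queries are needed.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_gradient(f, x, basis, n_samples=200, delta=1e-3):
    """Finite-difference gradient estimate restricted to the span of `basis`
    (a d x k matrix with orthonormal columns) instead of all d dimensions.
    The estimate converges to the projection of the true gradient onto the
    subspace."""
    d, k = basis.shape
    grad = np.zeros(d)
    for _ in range(n_samples):
        z = basis @ rng.standard_normal(k)  # random direction in the subspace
        grad += (f(x + delta * z) - f(x - delta * z)) / (2 * delta) * z
    return grad / n_samples

# Toy smooth loss whose true gradient is 2x.
f = lambda x: float(np.sum(x ** 2))
d = 50
x = rng.standard_normal(d)
basis, _ = np.linalg.qr(rng.standard_normal((d, 5)))  # k=5 subspace
g = estimate_gradient(f, x, basis)
```

Each estimate costs 2 queries per sample regardless of the input dimension d, which is the efficiency argument for projecting to a low-dimensional space.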
no code implementations • CVPR 2020 • Huichen Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li
Such adversarial attacks can be achieved by adding a small-magnitude perturbation to the input to mislead the model's prediction.
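A minimal example of such a bounded perturbation is the fast gradient sign method (FGSM), shown here on a toy linear model with a known loss gradient (the model and numbers are made up for illustration):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.05):
    """Fast gradient sign method: move each input coordinate by at most eps
    in the direction that increases the loss."""
    return x + eps * np.sign(grad)

# Toy linear "model": score of the true class is w @ x, so the loss
# gradient with respect to x is -w.
w = np.array([0.2, -0.5, 1.0])
x = np.array([1.0, 1.0, 1.0])
x_adv = fgsm_perturb(x, -w, eps=0.05)
# The perturbation is L_inf-bounded by eps, yet lowers the true-class score.
```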
1 code implementation • 19 Mar 2020 • Maurice Weber, Xiaojun Xu, Bojan Karlaš, Ce Zhang, Bo Li
In addition, we theoretically show that it is possible to train the robust smoothed models efficiently for simple models such as K-nearest neighbor classifiers, and we propose an exact smooth-training algorithm that eliminates the need to sample from a noise distribution for such models.
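The generic, sampling-based form of smoothing (as opposed to the exact KNN algorithm the line above mentions) can be sketched as a majority vote over Gaussian-noised copies of the input:

```python
import numpy as np

rng = np.random.default_rng(2)

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000):
    """Randomized-smoothing prediction: classify many Gaussian-noised copies
    of x and return the majority-vote class."""
    noisy = x + sigma * rng.standard_normal((n, x.shape[0]))
    votes = np.array([base_classifier(z) for z in noisy])
    return int(np.bincount(votes).argmax())

# Toy base classifier: class 1 iff the first coordinate is positive.
base = lambda z: int(z[0] > 0)
print(smoothed_predict(base, np.array([0.8, 0.0])))
```

The exact smooth-training result for models like K-nearest neighbors removes the need for this noise sampling entirely; the sketch only shows what is being made exact.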
1 code implementation • 27 Feb 2020 • Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, Bhavya Kailkhura, Tao Xie, Ce Zhang, Bo Li
Moreover, to the best of our knowledge, TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset.
1 code implementation • 8 Oct 2019 • Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Bo Li
To train the meta-model without knowledge of the attack strategy, we introduce a technique called jumbo learning that samples a set of Trojaned models following a general distribution.
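Sampling Trojaned models "following a general distribution" starts from sampling random trigger configurations. A hedged sketch (the parameterization below — square patches with a random location, size, pattern, and blend ratio — is hypothetical, chosen only to illustrate the idea):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_trigger(img_h=28, img_w=28):
    """Sample one random square trigger configuration.
    (Hypothetical parameterization, for illustration only.)"""
    size = int(rng.integers(2, 8))
    top = int(rng.integers(0, img_h - size + 1))
    left = int(rng.integers(0, img_w - size + 1))
    pattern = rng.random((size, size))
    alpha = float(rng.uniform(0.5, 1.0))  # blend ratio of pattern vs. image
    return {"top": top, "left": left, "size": size,
            "pattern": pattern, "alpha": alpha}

def apply_trigger(img, t):
    """Stamp the sampled trigger onto an image."""
    out = img.copy()
    s, r, c = t["size"], t["top"], t["left"]
    out[r:r+s, c:c+s] = (1 - t["alpha"]) * out[r:r+s, c:c+s] \
                        + t["alpha"] * t["pattern"]
    return out

img = np.zeros((28, 28))
stamped = apply_trigger(img, sample_trigger())
```

Each sampled configuration yields one poisoned training set and hence one shadow Trojaned model, giving the meta-model a diverse attack distribution to learn from without knowing the real attacker's trigger.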
no code implementations • 27 Sep 2018 • Xiaojun Xu, Yue Yu, Bo Li, Le Song, Chengfeng Liu, Carl Gunter
Extensive experiments are conducted to show that the proposed detection mechanism can achieve AUC above 90% against the two attack strategies on both Cora and Citeseer datasets.
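The AUC metric used above can be computed directly from detection scores as the probability that a randomly chosen attacked (positive) example outranks a randomly chosen clean one (the scores below are made up):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: the fraction of (positive,
    negative) pairs ranked correctly, counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy detection scores: only one of nine pairs is mis-ranked, so AUC = 8/9.
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))
```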
no code implementations • 30 Nov 2017 • Rui Luo, Wei-Nan Zhang, Xiaojun Xu, Jun Wang
In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility models (volatility being the degree of variation of a time series), which have been widely used in time-series analysis and prediction in finance.
13 code implementations • ICLR 2018 • Xiaojun Xu, Chang Liu, Dawn Song
Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations.
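The reward scheme described above can be sketched as follows. Note that `canonicalize` here is a hypothetical stand-in (it only normalizes the order of `AND`-joined conditions on a plain string); real systems would use a proper SQL parser to decide equivalence.

```python
def canonicalize(sql):
    """Hypothetical normalizer: make WHERE conditions order-insensitive.
    (Illustration only; not a real SQL equivalence check.)"""
    if " WHERE " not in sql:
        return sql
    head, conds = sql.split(" WHERE ", 1)
    return head + " WHERE " + " AND ".join(sorted(conds.split(" AND ")))

def reward(predicted, gold):
    # Reward 1 when the prediction matches any equivalent serialization
    # of the gold query, so the decoder is not penalized for a different
    # but equivalent condition order.
    return 1.0 if canonicalize(predicted) == canonicalize(gold) else 0.0

print(reward("SELECT a FROM t WHERE x = 1 AND y = 2",
             "SELECT a FROM t WHERE y = 2 AND x = 1"))  # 1.0
```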
no code implementations • CVPR 2018 • Xiaojun Xu, Xinyun Chen, Chang Liu, Anna Rohrbach, Trevor Darrell, Dawn Song
Our work sheds new light on understanding adversarial attacks on vision systems that have a language component, and shows that attention, bounding-box localization, and compositional internal structures are vulnerable to adversarial attacks.
1 code implementation • 22 Aug 2017 • Xiaojun Xu, Chang Liu, Qian Feng, Heng Yin, Le Song, Dawn Song
The problem of cross-platform binary code similarity detection is to determine whether two binary functions compiled for different platforms are similar.
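A common formulation in this line of work embeds each binary function into a vector and compares embeddings. A minimal sketch of the comparison step, with made-up embedding vectors standing in for the output of a learned encoder:

```python
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def are_similar(emb_a, emb_b, threshold=0.8):
    """Judge two binary functions similar when their learned embeddings
    have cosine similarity above a threshold (threshold is illustrative)."""
    return cosine_similarity(emb_a, emb_b) > threshold

# Stand-in embeddings (in practice produced by a learned encoder over the
# function's control-flow graph).
f_x86 = np.array([0.9, 0.1, 0.4])
f_arm = np.array([0.85, 0.15, 0.38])  # same function compiled for ARM
g_x86 = np.array([0.1, 0.9, -0.3])    # unrelated function
```

The cross-platform requirement is what makes the embedding step hard: the encoder must map the x86 and ARM compilations of the same source function to nearby vectors.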