1 code implementation • 24 Apr 2024 • Haochen Sun, Jason Li, Hongyang Zhang
The recent surge in artificial intelligence (AI), characterized by the prominence of large language models (LLMs), has ushered in fundamental transformations across the globe.
1 code implementation • 16 Apr 2024 • Siqiao Xue, Danrui Qi, Caigao Jiang, Wenhui Shi, Fangyin Cheng, Keting Chen, Zhiping Zhang, Jianshan He, Hongyang Zhang, Ganglin Wei, Wang Zhao, Fan Zhou, Hong Yi, Shaodong Liu, Hongjun Yang, Faqiang Chen
The recent breakthroughs in large language models (LLMs) are poised to transform many areas of software.
1 code implementation • 6 Feb 2024 • Yu Du, Fangyun Wei, Hongyang Zhang
We also revisit the evaluation protocol introduced by previous works and identify a limitation in this protocol that leads to an artificially high pass rate.
1 code implementation • 26 Jan 2024 • Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang
In this paper, we reconsider speculative sampling and derive two key observations.
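The baseline this entry reconsiders is easy to state: a cheap draft model proposes a token, and the target model accepts it with probability $\min(1, p/q)$, resampling from the normalized residual $\max(p-q, 0)$ on rejection, which reproduces the target distribution exactly. A minimal sketch of that standard acceptance rule (the toy distributions below are made up for illustration; this is not the paper's new mechanism):

```python
import numpy as np

def speculative_step(p_target, q_draft, draft_token, rng):
    """One verification step of standard speculative sampling.

    p_target, q_draft: target/draft next-token distributions (1-D arrays).
    draft_token: token index proposed by the draft model.
    On rejection, resamples from the normalized residual max(p - q, 0),
    which makes the accepted token exactly distributed as p_target.
    """
    accept_prob = min(1.0, p_target[draft_token] / q_draft[draft_token])
    if rng.random() < accept_prob:
        return draft_token
    residual = np.maximum(p_target - q_draft, 0.0)
    residual /= residual.sum()
    return rng.choice(len(p_target), p=residual)

rng = np.random.default_rng(0)
p = np.array([0.6, 0.3, 0.1])   # toy target-model distribution
q = np.array([0.2, 0.5, 0.3])   # toy draft-model distribution
draft = rng.choice(3, p=q)      # draft model proposes a token
token = speculative_step(p, q, draft, rng)
```

Because the accept/resample rule is exact, the empirical frequency of each token over many steps matches the target distribution, regardless of how poor the draft model is.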
no code implementations • 11 Oct 2023 • Yihan Wu, Zhengmian Hu, Hongyang Zhang, Heng Huang
Watermarking techniques offer a promising way to secure data via embedding covert information into the data.
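As one concrete, simplified instance of embedding covert information, the well-known green-list scheme for LLM watermarking partitions the vocabulary with a keyed hash of the previous token and biases generation toward the "green" half; detection then reduces to counting how many emitted tokens are green. This is an illustrative sketch of that generic idea, not the construction analyzed in this paper:

```python
import hashlib

VOCAB = list(range(100))  # toy vocabulary of 100 token ids

def green_list(prev_token, key="demo-key"):
    # Deterministically partition the vocabulary using a keyed hash of
    # the previous token; roughly half of the tokens land in the green set.
    greens = set()
    for t in VOCAB:
        h = hashlib.sha256(f"{key}:{prev_token}:{t}".encode()).digest()
        if h[0] % 2 == 0:
            greens.add(t)
    return greens

def green_fraction(tokens):
    # Detection statistic: fraction of tokens falling in the green list
    # induced by their predecessor. Expect ~0.5 for unwatermarked text,
    # near 1.0 if the generator preferentially sampled green tokens.
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    return hits / (len(tokens) - 1)
```

A detector with the key can flag watermarked text by thresholding `green_fraction`; an adversary's edits must flip many green tokens to evade it, which is exactly the kind of robustness question entries like this one study.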
1 code implementation • 13 Sep 2023 • Yuhui Li, Fangyun Wei, Jinjing Zhao, Chao Zhang, Hongyang Zhang
We discover that by integrating self-evaluation and rewind mechanisms, unaligned LLMs can directly produce responses consistent with human preferences via self-boosting.
1 code implementation • 30 Jul 2023 • Haochen Sun, Tonghe Bai, Jason Li, Hongyang Zhang
In response to this challenge, we present zero-knowledge deep learning (zkDL), an efficient zero-knowledge proof for deep learning training.
no code implementations • 24 Jul 2023 • Yimu Wang, Peng Shi, Hongyang Zhang
Furthermore, to show the transferability of obstinate word substitutions found by GradObstinate, we replace the words in four representative NLP benchmarks with their obstinate substitutions.
no code implementations • CVPR 2023 • Yimu Wang, Dinghuai Zhang, Yihan Wu, Heng Huang, Hongyang Zhang
We identify a phenomenon named player domination in the bargaining game, namely that the existing max-based approaches, such as MAX and MSD, do not converge.
1 code implementation • 28 Nov 2022 • Yuzheng Hu, Fan Wu, Hongyang Zhang, Han Zhao
More specifically, we demonstrate that while the constraint of adversarial robustness consistently degrades the standard accuracy in the balanced class setting, the class imbalance ratio plays a fundamentally different role in accuracy disparity compared to the Gaussian case, due to the heavy tail of the stable distribution.
1 code implementation • 26 Nov 2022 • Yuhui Li, Zejia Wu, Chao Zhang, Hongyang Zhang
In this work, we introduce the concepts of direct and indirect effects from causal inference to the domain generalization problem.
1 code implementation • 19 Nov 2022 • Yihan Wu, Xinda Li, Florian Kerschbaum, Heng Huang, Hongyang Zhang
In this paper, we study the problem of learning a robust dataset such that any classifier naturally trained on the dataset is adversarially robust.
no code implementations • 23 Oct 2022 • Maria-Florina Balcan, Rattana Pukdee, Pradeep Ravikumar, Hongyang Zhang
Adversarial training is a standard technique for training adversarially robust models.
no code implementations • 5 Oct 2022 • Luke Rowe, Benjamin Thérien, Krzysztof Czarnecki, Hongyang Zhang
In adversarial machine learning, the popular $\ell_\infty$ threat model has been the focus of much previous work.
no code implementations • 17 Jun 2022 • Yihan Wu, Hongyang Zhang, Heng Huang
The challenge is to design a provably robust algorithm that takes into consideration the 1-NN search and the high-dimensional nature of the embedding space.
1 code implementation • 10 Jun 2022 • Xinyi Wang, Michael Saxon, Jiachen Li, Hongyang Zhang, Kun Zhang, William Yang Wang
While machine learning models rapidly advance the state-of-the-art on various real-world tasks, out-of-domain (OOD) generalization remains a challenging problem given the vulnerability of these models to spurious correlations.
1 code implementation • 7 Jun 2022 • Dinghuai Zhang, Hongyang Zhang, Aaron Courville, Yoshua Bengio, Pradeep Ravikumar, Arun Sai Suggala
Consequently, an emerging line of work has focused on learning an ensemble of neural networks to defend against adversarial attacks.
1 code implementation • 19 May 2022 • Minghan Li, Xinyu Zhang, Ji Xin, Hongyang Zhang, Jimmy Lin
For example, on MS MARCO Passage v1, our method yields an average candidate set size of 27 out of 1,000, which increases the reranking speed by about 37 times, while the MRR@10 stays above a pre-specified value of 0.38 with about 90% empirical coverage; the empirical baselines fail to provide such a guarantee.
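Coverage guarantees of this kind typically come from calibrating a score cutoff on held-out queries: choose the cutoff as an empirical quantile so that, with the desired probability, the relevant passage survives pruning. A generic sketch of that calibration step (an assumption about the general recipe, not the authors' exact procedure):

```python
import numpy as np

def calibrate_threshold(cal_scores, coverage=0.9):
    """Pick a cutoff on first-stage retrieval scores so that, on the
    calibration queries, the relevant passage's score clears the cutoff
    for at least `coverage` of queries.

    cal_scores: first-stage score of the relevant passage, one per
    calibration query.
    """
    return np.quantile(np.asarray(cal_scores), 1.0 - coverage)

def prune(candidates, threshold):
    """Keep only candidates whose first-stage score clears the cutoff;
    the expensive reranker then scores this much smaller set."""
    return [(doc, score) for doc, score in candidates if score >= threshold]
```

On fresh queries drawn from the same distribution as the calibration set, roughly a `coverage` fraction will retain the relevant passage after pruning, which is what lets the pruned candidate set be so much smaller than the full 1,000.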
no code implementations • 23 Feb 2022 • Yihan Wu, Heng Huang, Hongyang Zhang
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ of the interpolating neural network with $p$ parameters on arbitrary data distributions.
no code implementations • 11 Feb 2022 • Avrim Blum, Omar Montasser, Greg Shakhnarovich, Hongyang Zhang
We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners.
1 code implementation • 4 Jan 2022 • Fangcheng Liu, Chao Zhang, Hongyang Zhang
Extensive experiments verify the effectiveness of our framework on balancing imperceptibility and transferability of the crafted adversarial examples.
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
no code implementations • ICML Workshop AML 2021 • Fangcheng Liu, Chao Zhang, Hongyang Zhang
In this work, we propose a \emph{geometry-aware framework} to generate transferable adversarial perturbation with minimum norm for each input.
2 code implementations • 21 Jan 2021 • Lang Huang, Chao Zhang, Hongyang Zhang
We propose self-adaptive training -- a unified training algorithm that dynamically calibrates and enhances training processes by model predictions without incurring an extra computational cost -- to advance both supervised and self-supervised learning of deep neural networks.
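The core calibration step can be sketched as an exponential moving average that blends the (possibly noisy) one-hot label with the model's current softmax prediction, so that confident, consistent predictions gradually override corrupted labels. A minimal sketch, where `alpha` is a momentum hyperparameter (the full algorithm also involves example reweighting and a warm-up phase):

```python
import numpy as np

def update_targets(targets, probs, alpha=0.9):
    """One self-adaptive update of the soft training targets.

    targets: current soft labels, shape (n, num_classes), initialized
             to the (possibly noisy) one-hot labels.
    probs:   model softmax predictions on the same n examples.
    Returns the blended targets; rows remain valid distributions.
    """
    return alpha * targets + (1.0 - alpha) * probs
```

Training then minimizes cross-entropy against these moving targets instead of the raw labels, so a mislabeled example whose prediction consistently disagrees with its label has its target drift toward the prediction over epochs.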
1 code implementation • 13 Oct 2020 • Maria-Florina Balcan, Avrim Blum, Dravyansh Sharma, Hongyang Zhang
Despite significant advances, deep networks remain highly susceptible to adversarial attack.
1 code implementation • 28 Sep 2020 • Yifei Huang, Yaodong Yu, Hongyang Zhang, Yi Ma, Yuan Yao
Even replacing only the first layer of a ResNet by such an ODE block can exhibit further improvement in robustness, e.g., under a PGD-20 ($\ell_\infty=0.031$) attack on the CIFAR-10 dataset, it achieves 91.57\% natural accuracy and 62.35\% robust accuracy, while a counterpart ResNet architecture trained with TRADES achieves natural and robust accuracy of 76.29\% and 45.24\%, respectively.
1 code implementation • NeurIPS 2020 • Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Ruslan Salakhutdinov, Kamalika Chaudhuri
Current methods for training robust networks lead to a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning.
4 code implementations • NeurIPS 2020 • Lang Huang, Chao Zhang, Hongyang Zhang
We propose self-adaptive training---a new training algorithm that dynamically corrects problematic training labels by model predictions without incurring extra computational cost---to improve generalization of deep learning for potentially corrupted training data.
1 code implementation • 10 Feb 2020 • Avrim Blum, Travis Dick, Naren Manoj, Hongyang Zhang
We show a hardness result for random smoothing to achieve certified adversarial robustness against attacks in the $\ell_p$ ball of radius $\epsilon$ when $p>2$.
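Random smoothing itself is simple to state: classify many Gaussian-perturbed copies of the input and return the majority vote; the certified $\ell_2$ radius then grows with the noise level and the vote margin, and the hardness result above concerns extending such certificates to $\ell_p$ balls with $p>2$. A minimal sketch of the smoothed prediction (the base classifier here is a toy stand-in):

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.5, n_samples=1000, seed=0):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[ f(x + N(0, sigma^2 I)) = c ]."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    votes = np.array([base_classifier(x + z) for z in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]

# Toy base classifier: predict by the sign of the first coordinate.
f = lambda v: int(v[0] > 0)
```

In the certification literature the vote counts also yield a confidence interval on the top-class probability, from which the certified radius is computed; the sketch above only shows the prediction step.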
no code implementations • NeurIPS 2019 • Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep K. Ravikumar
We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.
no code implementations • ECCV 2020 • Xiao Yang, Fangyun Wei, Hongyang Zhang, Jun Zhu
We consider universal adversarial patches for faces -- small visual elements whose addition to a face image reliably destroys the performance of face detectors.
no code implementations • 30 Oct 2019 • Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep Ravikumar
We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.
no code implementations • NeurIPS 2019 • Zhao Song, Ruosong Wang, Lin F. Yang, Hongyang Zhang, Peilin Zhong
When the loss function is a general symmetric norm, our algorithm produces a $\sqrt{d} \cdot \mathrm{polylog} n \cdot \mathrm{mmc}(\ell)$-approximate solution in input-sparsity time, where $\mathrm{mmc}(\ell)$ is a quantity related to the symmetric norm under consideration.
8 code implementations • 24 Jan 2019 • Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, Michael I. Jordan
We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples.
Ranked #3 on Adversarial Attack on CIFAR-10
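The trade-off this entry identifies is operationalized by the TRADES surrogate loss, which separates a natural-accuracy term from a robustness regularizer; in its commonly cited form it reads, up to notation,

$$\min_f \; \mathbb{E}_{(X,Y)}\Big[\, \mathcal{L}\big(f(X), Y\big) \;+\; \beta \max_{\|X'-X\|\le\epsilon} \mathcal{L}\big(f(X), f(X')\big) \Big],$$

where $\mathcal{L}$ is a classification-calibrated loss (cross-entropy/KL divergence in practice) and $\beta$ controls how much natural accuracy is traded for adversarial robustness.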
no code implementations • ICLR 2019 • Hongyang Zhang, Susu Xu, Jiantao Jiao, Pengtao Xie, Ruslan Salakhutdinov, Eric P. Xing
In this work, we give new results on the benefits of multi-generator architecture of GANs.
no code implementations • 18 Oct 2018 • Maria-Florina Balcan, Yi Li, David P. Woodruff, Hongyang Zhang
This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14) which holds if the algorithm is required to read a submatrix.
1 code implementation • 6 Jun 2018 • Hongyang Zhang, Junru Shao, Ruslan Salakhutdinov
We show that one cause for such success is due to the fact that the multi-branch architecture is less non-convex in terms of duality gap.
no code implementations • 26 Dec 2017 • Yuanzhi Li, Tengyu Ma, Hongyang Zhang
We show that the gradient descent algorithm provides an implicit regularization effect in the learning of over-parameterized matrix factorization models and one-hidden-layer neural networks with quadratic activations.
no code implementations • NeurIPS 2017 • Yichong Xu, Hongyang Zhang, Kyle Miller, Aarti Singh, Artur Dubrawski
We study the problem of interactively learning a binary classifier using noisy labeling and pairwise comparison oracles, where the comparison oracle answers which one in the given two instances is more likely to be positive.
no code implementations • ICML 2017 • Maria-Florina Balcan, Travis Dick, Yingyu Liang, Wenlong Mou, Hongyang Zhang
We study the problem of clustering sensitive data while preserving the privacy of individuals represented in the dataset, which has broad applications in practical machine learning and data analysis tasks.
no code implementations • 3 Jul 2017 • Hongyang Zhang, William J. Welch, Ruben H. Zamar
Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems.
no code implementations • 27 Apr 2017 • Maria-Florina Balcan, Yingyu Liang, David P. Woodruff, Hongyang Zhang
This work studies the strong duality of non-convex matrix factorization problems: we show that under certain dual conditions, these problems and their duals have the same optimum.
no code implementations • 19 Apr 2017 • Yichong Xu, Hongyang Zhang, Aarti Singh, Kyle Miller, Artur Dubrawski
We study the problem of interactively learning a binary classifier using noisy labeling and pairwise comparison oracles, where the comparison oracle answers which one in the given two instances is more likely to be positive.
no code implementations • NeurIPS 2017 • Maria-Florina Balcan, Hongyang Zhang
In this work, we introduce new convex geometry tools to study the properties of $s$-concave distributions and use these properties to provide bounds on quantities of interest to learning including the probability of disagreement between two halfspaces, disagreement outside a band, and the disagreement coefficient.
no code implementations • NeurIPS 2016 • Maria-Florina Balcan, Hongyang Zhang
For this problem, we present an algorithm that returns a matrix of a small error, with sample complexity almost as small as the best prior results in the noiseless case.
no code implementations • 25 Jun 2015 • Hongyang Zhang, Zhouchen Lin, Chao Zhang
As an application, we also find that the solutions to extended robust Low-Rank Representation and to our extended robust MC are mutually expressible, so both our theory and algorithm can be applied to the subspace clustering problem with missing values under certain conditions.
no code implementations • 6 Dec 2014 • Hongyang Zhang, Zhouchen Lin, Chao Zhang, Junbin Gao
More specifically, we discover that once a solution to one of the models is obtained, we can obtain the solutions to other models in closed-form formulations.
no code implementations • 2 Sep 2014 • Hongyang Zhang, Ruben H. Zamar
There has been a surge in the number of large and flat data sets - data sets containing a large number of features and a relatively small number of observations - due to the growing ability to collect and store information in medical research and other fields.
no code implementations • 23 Apr 2013 • Hongyang Zhang, Zhouchen Lin, Chao Zhang
For several rank minimization problems, such a replacement has been theoretically proven to be valid, i.e., the solution to the nuclear norm minimization problem is also the solution to the rank minimization problem.