no code implementations • 1 Jun 2024 • Zhongteng Cai, Xueru Zhang, Mohammad Mahdi Khalili
Although various quantization mechanisms have recently been proposed to generate discrete outputs under differential privacy, the outcomes are either biased or suffer from an inferior accuracy-privacy trade-off.
no code implementations • 12 May 2024 • Tian Xie, Xueru Zhang
As machine learning (ML) models are increasingly used in social domains to make consequential decisions about humans, they often have the power to reshape data distributions.
no code implementations • 10 May 2024 • Thai-Hoang Pham, Xueru Zhang, Ping Zhang
Domain generalization (DG) addresses this issue: it aims to learn a model from multiple source domains that generalizes to unseen target domains.
no code implementations • 3 May 2024 • Tian Xie, Xueru Zhang
Existing results on strategic learning have largely focused on the linear setting where agents with linear labeling functions best respond to a (noisy) linear decision policy.
no code implementations • 3 May 2024 • Tian Xie, Zhiqun Zuo, Mohammad Mahdi Khalili, Xueru Zhang
Machine learning systems have been widely used to make decisions about individuals, who may best respond and behave strategically to receive favorable outcomes, e.g., they may genuinely improve their true labels, or directly manipulate observable features to game the system without changing labels.
no code implementations • 3 May 2024 • Tian Xie, Xuwei Tan, Xueru Zhang
We also extend the model to settings where 1) agents may be dishonest and game the algorithm into making favorable but erroneous decisions; 2) honest efforts are forgettable and not sufficient to guarantee persistent improvements.
1 code implementation • NeurIPS 2023 • Zhiqun Zuo, Mohammad Mahdi Khalili, Xueru Zhang
It was shown in Kusner et al. (2017) that a sufficient condition for satisfying CF is to not use features that are descendants of sensitive attributes in the causal graph.
1 code implementation • 7 Nov 2023 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan
Imposing EL on the learning process leads to a non-convex optimization problem even if the loss function is convex, and existing fair learning algorithms cannot be readily adapted to find a fair predictor under the EL constraint.
no code implementations • 10 Oct 2023 • Tongxin Yin, Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data.
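The collaborative-learning loop described above can be sketched with a minimal federated-averaging round, in which clients share only model parameters, never local data. This is an illustrative sketch, not the paper's actual protocol; the linear-regression task, learning rate, and function names are all assumptions for the example.

```python
import numpy as np

def local_update(params, data, lr=0.1):
    """One gradient step of least-squares regression on a client's local data."""
    X, y = data
    grad = 2 * X.T @ (X @ params - y) / len(y)
    return params - lr * grad

def fedavg_round(global_params, client_data):
    """Server aggregates client updates, weighted by local dataset size."""
    updates, sizes = [], []
    for data in client_data:
        updates.append(local_update(global_params.copy(), data))
        sizes.append(len(data[1]))
    weights = np.array(sizes) / sum(sizes)
    return sum(w * u for w, u in zip(weights, updates))

# Four clients, each holding a private dataset the server never sees.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
theta = np.zeros(3)
for _ in range(50):
    theta = fedavg_round(theta, clients)
```

Only `theta` and the per-client updates cross the network; the raw `(X, y)` pairs stay on their clients, which is the defining property of the FL paradigm.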
no code implementations • 8 May 2023 • Kun Jin, Tongxin Yin, Zhongzhu Chen, Zeyu Sun, Xueru Zhang, Yang Liu, Mingyan Liu
We consider a federated learning (FL) system consisting of multiple clients and a server, where the clients aim to collaboratively learn a common decision model from their distributed data.
1 code implementation • 30 Jan 2023 • Thai-Hoang Pham, Xueru Zhang, Ping Zhang
Although many approaches have been proposed to make ML models fair, they typically rely on the assumption that data distributions in training and deployment are identical.
1 code implementation • NeurIPS 2021 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan
This observation implies that the fairness notions used in classification problems are not suitable for a selection problem where the applicants compete for a limited number of positions.
no code implementations • 29 Sep 2021 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Iman Vakilinia
In general, finding a fair predictor leads to a constrained optimization problem, and depending on the fairness notion, it may be non-convex.
1 code implementation • 25 Sep 2021 • Thai-Hoang Pham, Changchang Yin, Laxmi Mehta, Xueru Zhang, Ping Zhang
In particular, MuViTaNet complements patient representation by using a multi-view encoder to effectively extract information by considering clinical data as both sequences of clinical visits and sets of clinical features.
no code implementations • 7 Dec 2020 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Somayeh Sojoudi
In this work, we study the possibility of using a differentially private exponential mechanism as a post-processing step to improve both fairness and privacy of supervised learning models.
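For reference, the exponential mechanism selects a candidate output with probability proportional to exp(ε·score / 2Δ), where Δ is the score's sensitivity. The sketch below shows this sampling step only; it is a generic illustration, not the paper's post-processing procedure, and the function name and parameters are assumptions.

```python
import numpy as np

def exponential_mechanism(candidates, scores, epsilon, sensitivity=1.0, rng=None):
    """Sample one candidate with probability proportional to
    exp(epsilon * score / (2 * sensitivity)), satisfying epsilon-DP
    when `scores` has the stated sensitivity."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2 * sensitivity)
    logits -= logits.max()            # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Example: privately pick a decision threshold scored by (hypothetical) utility.
thresholds = [0.3, 0.4, 0.5, 0.6]
utilities = [1.0, 4.0, 3.5, 0.5]
choice = exponential_mechanism(thresholds, utilities, epsilon=1.0)
```

Because the mechanism only post-processes scores, running it on the output of an already-private model does not consume additional privacy budget beyond its own ε.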
1 code implementation • NeurIPS 2020 • Xueru Zhang, Ruibo Tu, Yang Liu, Mingyan Liu, Hedvig Kjellström, Kun Zhang, Cheng Zhang
Our results show that static fairness constraints can either promote equality or exacerbate disparity depending on the driving factor of qualification transitions and the effect of sensitive attributes on feature distributions.
no code implementations • 14 Jan 2020 • Xueru Zhang, Mingyan Liu
However, in practice most decision-making processes are of a sequential nature, where decisions made in the past may have an impact on future data.
no code implementations • 8 Oct 2019 • Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
It can be shown that the privacy-accuracy tradeoff can be improved significantly compared with conventional ADMM.
no code implementations • NeurIPS 2019 • Xueru Zhang, Mohammad Mahdi Khalili, Cem Tekin, Mingyan Liu
Machine Learning (ML) models trained on data from multiple demographic groups can inherit representation disparity (Hashimoto et al., 2018) that may exist in the data: the model may be less favorable to groups contributing less to the training process; this in turn can degrade population retention in these groups over time, and exacerbate representation disparity in the long run.
no code implementations • 7 Oct 2018 • Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
Alternating direction method of multiplier (ADMM) is a powerful method to solve decentralized convex optimization problems.
no code implementations • ICML 2018 • Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
Alternating direction method of multiplier (ADMM) is a popular method used to design distributed versions of a machine learning algorithm, whereby local computations are performed on local data with the output exchanged among neighbors in an iterative fashion.
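The iterative structure described above, local computation followed by an exchange step, can be sketched with global-consensus ADMM for distributed least squares. This is an assumed toy setting for illustration (quadratic local objectives, a simple averaging step standing in for neighbor communication), not the papers' private ADMM variants.

```python
import numpy as np

def consensus_admm(local_data, rho=1.0, iters=100):
    """Global-consensus ADMM: each node i holds (A_i, b_i) and a local
    copy x_i; the consensus variable z is formed by averaging."""
    d = local_data[0][0].shape[1]
    n = len(local_data)
    x = np.zeros((n, d))          # local primal variables
    u = np.zeros((n, d))          # scaled dual variables
    z = np.zeros(d)               # shared consensus variable
    for _ in range(iters):
        for i, (A, b) in enumerate(local_data):
            # Closed-form x-update for f_i(x) = ||A x - b||^2:
            # minimize f_i(x) + (rho/2)||x - z + u_i||^2.
            lhs = 2 * A.T @ A + rho * np.eye(d)
            rhs = 2 * A.T @ b + rho * (z - u[i])
            x[i] = np.linalg.solve(lhs, rhs)
        z = (x + u).mean(axis=0)  # consensus (communication) step
        u += x - z                # dual ascent step
    return z

# Three nodes with noisy observations of the same underlying model.
rng = np.random.default_rng(1)
true_x = np.array([1.0, -2.0])
data = []
for _ in range(3):
    A = rng.normal(size=(30, 2))
    data.append((A, A @ true_x + 0.01 * rng.normal(size=30)))
z = consensus_admm(data)
```

In the private variants studied in these papers, the quantities exchanged at the consensus step would be perturbed or otherwise protected; the sketch shows only the non-private baseline iteration.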