no code implementations • 12 Feb 2024 • Qiuhao Zeng, Wei Wang, Fan Zhou, Gezheng Xu, Ruizhi Pu, Changjian Shui, Christian Gagné, Shichun Yang, Boyu Wang, Charles X. Ling
Building on Koopman theory, we address the time-evolving distributions encountered in temporal domain generalization (TDG): measurement functions are learned so that evolving domains are linked by linear transition relations.
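For illustration only, a minimal sketch of the core idea under simplifying assumptions (all names are hypothetical; this is not the authors' implementation): given measurement features from consecutive domains, a linear transition operator can be fit by least squares, in the style of dynamic mode decomposition.

```python
# Minimal sketch of the core Koopman idea (hypothetical names, not the
# authors' code): fit a linear operator K that advances measurement
# features from one time-indexed domain to the next, DMD-style.
import numpy as np

def fit_koopman_operator(phi_t, phi_next):
    """Least-squares fit of K such that phi_next ~= K @ phi_t.

    phi_t, phi_next: (d, n) arrays of measurement features
    (e.g., encoder outputs) from consecutive domains.
    """
    return phi_next @ np.linalg.pinv(phi_t)

# Toy usage: features of consecutive domains related by a rotation.
rng = np.random.default_rng(0)
theta = 0.1
K_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
phi_t = rng.normal(size=(2, 500))
phi_next = K_true @ phi_t
K_hat = fit_koopman_operator(phi_t, phi_next)
print(np.allclose(K_hat, K_true))            # True: operator recovered
# Extrapolate to an unseen future domain: phi_future ~= K_hat @ phi_next.
```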
1 code implementation • 26 Nov 2023 • Jiaqi Li, Rui Wang, Yuanhao Lai, Changjian Shui, Sabyasachi Sahoo, Charles X. Ling, Shichun Yang, Boyu Wang, Christian Gagné, Fan Zhou
We conduct extensive experiments on various benchmarks, including a dataset with large-scale tasks, and compare our method against recent state-of-the-art methods to demonstrate its effectiveness and scalability.
no code implementations • 4 Jul 2023 • Changjian Shui, Justin Szeto, Raghav Mehta, Douglas L. Arnold, Tal Arbel
However, models that are well calibrated overall can still be poorly calibrated for a sub-population, potentially resulting in a clinician unwittingly making poor decisions for this group based on the recommendations of the model.
no code implementations • 6 Mar 2023 • Raghav Mehta, Changjian Shui, Tal Arbel
Unfortunately, recent studies have shown significant biases in DL models across demographic subgroups (e.g., race, sex, age) in the context of medical image analysis, indicating a lack of fairness in these models.
no code implementations • 15 Nov 2022 • Anjun Hu, Jean-Pierre R. Falet, Brennan S. Nichyporuk, Changjian Shui, Douglas L. Arnold, Sotirios A. Tsaftaris, Tal Arbel
We propose a hierarchically structured variational inference model for accurately disentangling observable evidence of disease (e.g., brain lesions or atrophy) from subject-specific anatomy in brain MRIs.
1 code implementation • 19 Oct 2022 • Changjian Shui, Gezheng Xu, Qi Chen, Jiaqi Li, Charles Ling, Tal Arbel, Boyu Wang, Christian Gagné
At the upper level, the fair predictor is updated to stay close to all subgroup-specific predictors.
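A hedged sketch of this upper-level update under simplifying assumptions (linear heads and a squared parameter distance stand in for the paper's setup; names are hypothetical):

```python
# Hedged sketch of the upper-level update (hypothetical names; not the
# paper's implementation).
import torch

def upper_level_step(fair_head, subgroup_heads, opt):
    """Pull the shared fair predictor toward every subgroup predictor."""
    opt.zero_grad()
    loss = sum(
        sum((p - q.detach()).pow(2).sum()
            for p, q in zip(fair_head.parameters(), head.parameters()))
        for head in subgroup_heads
    )
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: three subgroup heads on a shared 16-d representation.
fair_head = torch.nn.Linear(16, 2)
subgroup_heads = [torch.nn.Linear(16, 2) for _ in range(3)]
opt = torch.optim.SGD(fair_head.parameters(), lr=0.1)
for _ in range(100):
    upper_level_step(fair_head, subgroup_heads, opt)
```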
no code implementations • 1 Aug 2022 • Raghav Mehta, Changjian Shui, Brennan Nichyporuk, Tal Arbel
This work presents an information-theoretic active learning framework that selects images from the unlabelled pool for labelling by maximizing the expected information gain (EIG) on an evaluation dataset.
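The paper computes EIG with respect to an evaluation dataset; as a simpler, generic stand-in (an assumption for illustration, not the paper's exact objective), the sketch below scores pool images with the closely related BALD mutual-information criterion over Monte-Carlo predictive samples.

```python
# Simplified stand-in for EIG-based selection: score unlabelled images
# with the BALD mutual-information criterion computed from T stochastic
# forward passes (e.g., MC dropout). Names here are hypothetical.
import numpy as np

def bald_scores(probs):
    """probs: (T, N, C) predictive probabilities over N pool images."""
    mean_p = probs.mean(axis=0)                                  # (N, C)
    entropy_of_mean = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)
    mean_entropy = -(probs * np.log(probs + 1e-12)).sum(axis=2).mean(axis=0)
    return entropy_of_mean - mean_entropy        # mutual information >= 0

def select_batch(probs, k):
    """Indices of the k highest-scoring pool images."""
    return np.argsort(-bald_scores(probs))[:k]

# Toy usage: 8 MC samples, 1000 pool images, 10 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=2, keepdims=True)
print(select_batch(probs, k=16))
```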
no code implementations • 31 May 2022 • William Wei Wang, Gezheng Xu, Ruizhi Pu, Jiaqi Li, Fan Zhou, Changjian Shui, Charles Ling, Christian Gagné, Boyu Wang
Domain generalization aims to learn a predictive model from multiple different but related source tasks that generalizes well to a target task without access to any target data.
no code implementations • 26 May 2022 • Changjian Shui, Qi Chen, Jiaqi Li, Boyu Wang, Christian Gagné
We consider a fair representation learning perspective, in which the optimal predictors built on top of the data representation are guaranteed to be invariant across different sub-groups.
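One way to make this invariance concrete, as a hedged sketch (ridge regressors stand in for the "optimal predictors"; names are hypothetical and this is not the paper's method): penalize disagreement among the closed-form predictors fitted within each sub-group on top of the shared representation.

```python
# Hedged sketch (hypothetical names): per-subgroup ridge solutions stand
# in for the optimal predictors; the penalty measures how much they
# disagree across sub-groups.
import torch

def subgroup_predictor_gap(z, y, groups, lam=1e-2):
    """z: (n, d) representations, y: (n,) targets, groups: (n,) ids."""
    ws = []
    for g in groups.unique():
        zg, yg = z[groups == g], y[groups == g].float()
        d = zg.shape[1]
        # Closed-form ridge solution: w_g = (Z^T Z + lam I)^{-1} Z^T y.
        ws.append(torch.linalg.solve(zg.T @ zg + lam * torch.eye(d),
                                     zg.T @ yg))
    w = torch.stack(ws)
    return ((w - w.mean(dim=0)) ** 2).sum()      # disagreement penalty

# Toy usage; in training, this term would be added to the task loss so
# the encoder learns a representation with subgroup-invariant predictors.
z = torch.randn(100, 8)
y = torch.randint(0, 2, (100,))
groups = torch.randint(0, 3, (100,))
print(subgroup_predictor_gap(z, y, groups))
```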
no code implementations • 26 Jan 2022 • Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di Wu, Gezheng Xu, Christian Gagné, Eric Eaton
Unlike existing measures, which are used as tools to bound the difference of expected risks between tasks (e.g., $\mathcal{H}$-divergence or discrepancy distance), we theoretically show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.
no code implementations • 29 Sep 2021 • Wei Wang, Jiaqi Li, Ruizhi Pu, Gezheng Xu, Fan Zhou, Changjian Shui, Charles Ling, Boyu Wang
Domain generalization aims to learn a predictive model from multiple different but related source tasks that generalizes well to a target task without access to any target data.
1 code implementation • NeurIPS 2021 • Qi Chen, Changjian Shui, Mario Marchand
We present a novel information-theoretic analysis of the generalization properties of meta-learning algorithms.
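For context only: the classical single-task result that this line of analysis extends (the paper's meta-learning bounds take a related but different form) states that, for a $\sigma$-sub-Gaussian loss, an algorithm with output hypothesis $W$ trained on $n$ i.i.d. samples $S$ satisfies

$$\left|\mathbb{E}\big[L_\mu(W) - L_S(W)\big]\right| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},$$

where $L_\mu$ and $L_S$ are the population and empirical risks and $I(W;S)$ is the mutual information between the algorithm's output and its training set.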
no code implementations • 30 May 2021 • Changjian Shui, Boyu Wang, Christian Gagné
Our regularization is orthogonal to existing domain generalization algorithms for invariant representation learning and can be straightforwardly adopted by them.
1 code implementation • 9 May 2021 • Changjian Shui, Zijian Li, Jiaqi Li, Christian Gagné, Charles Ling, Boyu Wang
Multi-source domain adaptation aims to leverage the knowledge from multiple source tasks to make predictions on a related target domain.
no code implementations • 1 Jan 2021 • Changjian Shui, Zijian Li, Jiaqi Li, Christian Gagné, Charles Ling, Boyu Wang
We study the label shift problem in multi-source transfer learning and derive new generic principles to control the target generalization risk.
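As generic context for the label shift setting (a standard estimator from the literature, not necessarily the paper's principle), target class priors can be recovered from a source confusion matrix and the prediction marginal on unlabeled target data:

```python
# Generic label-shift estimator in the BBSE style (context for the
# setting, not necessarily the paper's method).
import numpy as np

def estimate_target_priors(conf_mat, target_pred_marginal):
    """conf_mat[i, j] = P_source(predict i | true class j); columns sum
    to 1. target_pred_marginal: prediction distribution on unlabeled
    target data. Solves conf_mat @ q = marginal for target priors q;
    importance weights are then q / p_source(y)."""
    q, *_ = np.linalg.lstsq(conf_mat, target_pred_marginal, rcond=None)
    return np.clip(q, 0.0, None)

# Toy usage with a near-diagonal confusion matrix.
conf_mat = np.array([[0.9, 0.2],
                     [0.1, 0.8]])
mu_target = np.array([0.55, 0.45])
print(estimate_target_priors(conf_mat, mu_target))  # ~[0.5, 0.5]
```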
no code implementations • 7 Nov 2020 • Jun Wen, Changjian Shui, Kun Kuang, Junsong Yuan, Zenan Huang, Zhefeng Gong, Nenggan Zheng
To address this issue, we intervene in the learning of feature discriminability using unlabeled target data, guiding the features to discard their domain-specific components and remain safely transferable.
no code implementations • 30 Jul 2020 • Changjian Shui, Qi Chen, Jun Wen, Fan Zhou, Christian Gagné, Boyu Wang
We reveal the incoherence between widely adopted empirical domain adversarial training and its generally assumed theoretical counterpart based on $\mathcal{H}$-divergence.
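For reference, the standard $\mathcal{H}$-divergence of Ben-David et al. between source and target distributions $\mathcal{D}_S$ and $\mathcal{D}_T$, for a hypothesis class $\mathcal{H}$ of binary classifiers, is

$$d_{\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) \;=\; 2 \sup_{h \in \mathcal{H}} \Big| \Pr_{x \sim \mathcal{D}_S}[h(x) = 1] \;-\; \Pr_{x \sim \mathcal{D}_T}[h(x) = 1] \Big|.$$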
no code implementations • 21 Jul 2020 • Fan Zhou, Zhuqing Jiang, Changjian Shui, Boyu Wang, Brahim Chaib-Draa
Previous domain generalization approaches mainly focus on learning invariant features and stacking the features learned from each source domain to generalize to a new target domain, while ignoring label information; this leads to indistinguishable features with ambiguous classification boundaries.
no code implementations • 24 May 2020 • Fan Zhou, Changjian Shui, Bincheng Huang, Boyu Wang, Brahim Chaib-Draa
To this end, we introduce a discriminative active learning approach for domain adaptation to reduce the effort of data annotation.
1 code implementation • 20 Nov 2019 • Changjian Shui, Fan Zhou, Christian Gagné, Boyu Wang
In this paper, we propose a unified and principled method for both the querying and training processes in deep batch active learning.
1 code implementation • 18 Oct 2019 • Mahdieh Abbasi, Changjian Shui, Arezoo Rajabi, Christian Gagné, Rakesh Bobba
We empirically verify that the most protective OOD sets -- selected according to our metrics -- lead to A-CNNs with significantly lower generalization errors than the A-CNNs trained on the least protective ones.
1 code implementation • 21 Mar 2019 • Changjian Shui, Mahdieh Abbasi, Louis-Émile Robitaille, Boyu Wang, Christian Gagné
Hence, an important aspect of multitask learning is to understand the similarities within a set of tasks.
no code implementations • 26 Oct 2018 • Changjian Shui, Ihsen Hedhli, Christian Gagné
We provide a theoretical analysis of this algorithm, including a cumulative error upper bound for each task.
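For context, the cumulative error in question is the standard online-learning quantity (the exact assumptions and bound are the paper's):

$$\mathrm{Err}_T \;=\; \sum_{t=1}^{T} \ell\big(h_t(x_t), y_t\big),$$

where $h_t$ is the predictor held before round $t$ and $\ell$ is the task loss; the analysis upper-bounds this sum for each task.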
no code implementations • 22 Feb 2018 • Changjian Shui, Azadeh Sadat Mozafari, Jonathan Marek, Ihsen Hedhli, Christian Gagné
Calibrating the confidence of supervised learning models is important in a variety of contexts where the certainty of predictions must be reliable.
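As generic background on post-hoc calibration (a standard baseline, not the method proposed in this entry), a minimal temperature-scaling sketch:

```python
# Temperature scaling (Guo et al., 2017), the standard post-hoc
# calibration baseline, shown only for background on the problem.
import torch

def fit_temperature(logits, labels, iters=200, lr=0.05):
    """Fit a single temperature T > 0 on held-out logits and labels."""
    log_t = torch.zeros(1, requires_grad=True)   # T = exp(log_t) > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    nll = torch.nn.CrossEntropyLoss()
    for _ in range(iters):
        opt.zero_grad()
        loss = nll(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

# Toy usage: overconfident random logits yield a fitted T > 1, which
# softens the predicted probabilities.
logits = 5.0 * torch.randn(512, 10)
labels = torch.randint(0, 10, (512,))
print(fit_temperature(logits, labels))
```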