1 code implementation • 2 May 2024 • Akshay Mehra, Yunbei Zhang, Jihun Hamm
Concretely, our metric characterizes the model's performance on unseen domains using only a small amount of unlabeled data from these domains and data or statistics from the training (source) domain(s).
1 code implementation • 17 Jul 2023 • Akshay Mehra, Yunbei Zhang, Bhavya Kailkhura, Jihun Hamm
To enable risk-averse predictions from a DG classifier, we propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a "style-smoothed" version of the DG classifier for prediction at test time.
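The idea of a "style-smoothed" classifier can be illustrated with a minimal numpy sketch: restyle the test input many times with randomly sampled feature statistics (an AdaIN-style affine restyling), take a majority vote over the predictions, and abstain when the vote is not decisive. Everything here — `toy_classifier`, `stylize`, the style-sampling distribution, and the agreement threshold — is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature template the toy classifier keys on (stands in for learned weights)
TEMPLATE = rng.normal(size=16)

def toy_classifier(x):
    # Stand-in for a trained DG classifier: class 1 if the (centered)
    # features align with the template, else class 0.
    return int(np.dot(x - x.mean(), TEMPLATE) > 0)

def stylize(x, style_mean, style_std, eps=1e-6):
    # AdaIN-style restyling: strip the instance's own statistics,
    # then impose randomly sampled style statistics.
    return (x - x.mean()) / (x.std() + eps) * style_std + style_mean

def tt_nss_predict(x, n_styles=50, agreement=0.7):
    # "Style-smoothed" prediction: majority vote over randomly restyled
    # copies of the input, abstaining (return -1) when the vote is close.
    votes = [toy_classifier(stylize(x, rng.normal(), abs(rng.normal(1.0, 0.3))))
             for _ in range(n_styles)]
    frac = float(np.mean(votes))
    pred, conf = (1, frac) if frac > 0.5 else (0, 1.0 - frac)
    return pred if conf >= agreement else -1
```

Because the toy classifier only uses centered feature directions, restyling (a positive affine map) never flips its vote here; with a real network the votes disagree under style shift, which is exactly what the abstention threshold is meant to catch.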
no code implementations • 6 Jul 2023 • Janet Wang, Yunbei Zhang, Zhengming Ding, Jihun Hamm
The adoption of UDA with multiple sources can simultaneously enrich the training set and bridge the domain gap between different skin lesion datasets, which vary due to distinct acquisition protocols.
no code implementations • 3 Jul 2023 • Akshay Mehra, Yunbei Zhang, Jihun Hamm
We propose a novel Task Transfer Analysis approach that transforms the source distribution (and classifier) by changing its class-prior distribution, label space, and feature space to produce a new source distribution (and classifier). This allows us to relate the loss of the downstream task (i.e., transferability) to that of the source task.
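A toy numpy sketch of the underlying intuition, under assumptions of my own (a two-Gaussian source task, a linear toy classifier, an invertible feature map, and a label swap — none of this is the paper's actual construction): when the source distribution and classifier are transformed jointly by invertible feature- and label-space maps, the transformed classifier's loss on the new distribution matches the original source loss, which is the kind of relation such an analysis can exploit.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_source(n=2000, prior=0.5):
    # Hypothetical source task: two Gaussian classes in 2-D
    y = (rng.random(n) < prior).astype(int)
    x = rng.normal(size=(n, 2)) + np.where(y[:, None] == 1, 2.0, -2.0)
    return x, y

def source_classifier(x):
    # Toy source classifier: positive half-space -> class 1
    return (x.sum(axis=1) > 0).astype(int)

def zero_one_loss(pred, y):
    return float(np.mean(pred != y))

# Illustrative transformations: an invertible feature map and a relabeling
A = np.array([[2.0, 0.5], [0.0, 1.0]])   # feature-space change
label_map = np.array([1, 0])             # label-space change (swap classes)

x, y = sample_source()
x_new, y_new = x @ A.T, label_map[y]     # the "new" source distribution

def new_classifier(x_new_):
    # Transformed classifier: undo the feature map, then relabel outputs
    return label_map[source_classifier(x_new_ @ np.linalg.inv(A).T)]

loss_src = zero_one_loss(source_classifier(x), y)
loss_new = zero_one_loss(new_classifier(x_new), y_new)
# Since both maps are invertible, the two losses coincide (up to
# floating-point noise near the decision boundary).
```

The point of the sketch is only the loss-preserving relationship between the original and transformed (distribution, classifier) pairs; relating this to a genuinely different downstream task is what the proposed analysis addresses.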