1 code implementation • 29 Apr 2024 • Jun Yu, Yutong Dai, Xiaokang Liu, Jin Huang, Yishan Shen, Ke Zhang, Rong Zhou, Eashan Adhikarla, Wenxuan Ye, Yixin Liu, Zhaoming Kong, Kai Zhang, Yilong Yin, Vinod Namboodiri, Brian D. Davison, Jason H. Moore, Yong Chen
Overall, we hope this survey provides the research community with a comprehensive overview of advancements in MTL from its inception in 1997 through 2023.
no code implementations • 3 Dec 2023 • Eashan Adhikarla, Kai Zhang, Jun Yu, Lichao Sun, John Nicholson, Brian D. Davison
As a result, concerns arise about the overall robustness of machine learning techniques used in computer vision applications deployed publicly to consumers.
no code implementations • 24 Oct 2023 • Youshan Zhang, Brian D. Davison
While unsupervised domain adaptation has been explored to leverage knowledge from a labeled source domain for an unlabeled target domain, existing methods focus on aligning the distributions of the two domains.
no code implementations • 17 May 2023 • Dan Luo, Lixin Zou, Qingyao Ai, Zhiyu Chen, Chenliang Li, Dawei Yin, Brian D. Davison
The goal of unbiased learning to rank (ULTR) is to leverage implicit user feedback for optimizing learning-to-rank systems.
1 code implementation • 24 Jul 2022 • Dan Luo, Lixin Zou, Qingyao Ai, Zhiyu Chen, Dawei Yin, Brian D. Davison
Existing methods in unbiased learning to rank typically rely on click modeling or inverse propensity weighting (IPW).
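As a rough illustration of the inverse propensity weighting idea mentioned above (a generic formulation, not this paper's method): clicks observed at positions users rarely examine are up-weighted by the inverse of their estimated examination propensity, correcting for position bias. The hinge-style surrogate loss below is an assumption for illustration.

```python
# Illustrative sketch of inverse propensity weighting (IPW) for unbiased
# learning to rank (generic formulation, not this paper's exact method):
# clicks at rarely examined positions are up-weighted by 1/propensity.
def ipw_loss(clicks, propensities, scores):
    """Propensity-weighted pointwise loss.

    clicks: 0/1 click labels per position
    propensities: estimated probability each position was examined
    scores: model relevance scores (higher = more relevant)
    """
    loss = 0.0
    for c, p, s in zip(clicks, propensities, scores):
        if c:  # only clicked items contribute; the weight corrects position bias
            loss += (1.0 / p) * max(0.0, 1.0 - s)  # hinge-style surrogate
    return loss

# A click at a deep, low-propensity position (p=0.2) counts 5x as much
# as it would at a fully examined position.
print(ipw_loss([1, 0, 1], [0.9, 0.5, 0.2], [0.8, 0.1, 0.3]))
```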
1 code implementation • 27 Mar 2022 • Mohamed Trabelsi, Zhiyu Chen, Shuo Zhang, Brian D. Davison, Jeff Heflin
In this paper, we propose StruBERT, a structure-aware BERT model that fuses the textual and structural information of a data table to produce context-aware representations for both textual and tabular content of a data table.
1 code implementation • 5 Feb 2022 • Eashan Adhikarla, Dan Luo, Brian D. Davison
Ideally, a robust classifier would be immune to small variations in input images, and a number of defensive approaches have consequently been proposed.
1 code implementation • 3 Nov 2021 • Youshan Zhang, Brian D. Davison
Unsupervised domain adaptation leverages rich information from a labeled source domain to model an unlabeled target domain.
no code implementations • 22 Jun 2021 • Youshan Zhang, Brian D. Davison
The reconstructed features are also not sufficiently used during training.
no code implementations • 22 Jun 2021 • Youshan Zhang, Brian D. Davison, Vivien W. Talghader, Zhiyu Chen, Zhiyong Xiao, Gary J. Kunkel
To further improve segmentation results, we are the first to propose a post-processing layer that removes irrelevant portions from the segmentation result.
no code implementations • 18 May 2021 • Youshan Zhang, Brian D. Davison
To address these issues, we propose a novel approach called the correlated adversarial joint discrepancy adaptation network (CAJNet), which minimizes the joint discrepancy between the two domains and achieves competitive performance by tuning parameters with the correlated label.
no code implementations • 5 May 2021 • Youshan Zhang, Brian D. Davison
To align the conditional distributions, we further develop an easy-to-hard pseudo-label refinement process to improve the quality of the pseudo labels, and then minimize a categorical spherical-manifold Gaussian-kernel geodesic loss.
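The easy-to-hard refinement idea can be sketched as follows. This is a common pattern rather than the paper's exact procedure: only target samples above a confidence threshold receive pseudo labels, and the threshold is lowered each round so that harder samples are admitted gradually. The thresholds and schedule here are illustrative assumptions.

```python
# Rough illustration of an easy-to-hard pseudo-labeling scheme (a common
# pattern, not this paper's exact procedure): start with a high confidence
# threshold so only "easy" target samples get pseudo labels, then lower
# it each round to admit harder samples.
def refine_pseudo_labels(probs, start=0.9, end=0.5, rounds=3):
    """probs: per-sample class-probability lists.
    Returns one list of (sample_index, predicted_label) per round."""
    selected = []
    for r in range(rounds):
        # Linearly decay the threshold from `start` down to `end`.
        thr = start - (start - end) * r / max(rounds - 1, 1)
        batch = [(i, p.index(max(p))) for i, p in enumerate(probs)
                 if max(p) >= thr]
        selected.append(batch)
    return selected

probs = [[0.95, 0.05], [0.7, 0.3], [0.55, 0.45]]
out = refine_pseudo_labels(probs)
# Round 0 keeps only the most confident sample; later rounds add more.
print([len(b) for b in out])  # → [1, 2, 3]
```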
1 code implementation • 5 May 2021 • Zhiyu Chen, Shuo Zhang, Brian D. Davison
We describe the development, characteristics and availability of a test collection for the task of Web table retrieval, which uses a large-scale Web Table Corpora extracted from the Common Crawl.
1 code implementation • 27 Apr 2021 • Youshan Zhang, Brian D. Davison
In this paper, we show how to efficiently select the best pre-trained features from seventeen well-known ImageNet models for unsupervised DA problems.
1 code implementation • 10 Mar 2021 • Youshan Zhang, Brian D. Davison
In this paper, we propose an adversarial regression learning network (ARLNet) for bone age estimation.
no code implementations • 23 Feb 2021 • Mohamed Trabelsi, Zhiyu Chen, Brian D. Davison, Jeff Heflin
A variety of deep learning models have been proposed, each introducing a set of neural network components to extract the features used for ranking.
no code implementations • 19 Sep 2020 • Youshan Zhang, Brian D. Davison
Adversarial learning loss can maintain domain-invariant features between the source and target domains.
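The adversarial objective behind domain-invariant features can be sketched in its generic GAN-style form (this is the standard formulation, not necessarily this paper's exact loss): a domain discriminator learns to separate source features (label 1) from target features (label 0), while the feature extractor is trained to fool it.

```python
import math

# Generic GAN-style adversarial objective for domain-invariant features
# (standard formulation, not necessarily this paper's exact loss).
def discriminator_loss(src_preds, tgt_preds):
    """Binary cross-entropy: source features labeled 1, target labeled 0.
    Each entry is the discriminator's predicted probability of 'source'."""
    loss = 0.0
    for p in src_preds:
        loss += -math.log(p)          # want D(source feature) -> 1
    for p in tgt_preds:
        loss += -math.log(1.0 - p)    # want D(target feature) -> 0
    return loss / (len(src_preds) + len(tgt_preds))

def extractor_loss(tgt_preds):
    """The feature extractor tries to make target features look source-like."""
    return -sum(math.log(p) for p in tgt_preds) / len(tgt_preds)
```

Minimizing the extractor loss while the discriminator minimizes its own loss drives the two feature distributions together, which is what "maintaining domain-invariant features" refers to.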
2 code implementations • 5 Jul 2020 • Hui Ye, Zhiyu Chen, Da-Han Wang, Brian D. Davison
Extreme multi-label text classification (XMTC) is a task for tagging a given text with the most relevant labels from an extremely large label set.
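The XMTC prediction step can be illustrated minimally (this sketch is an assumption about the generic task setup, not the paper's model): a classifier emits one logit per label, scores are squashed through a sigmoid, and the top-k labels above a threshold are returned. The toy label set below is made up; real XMTC label sets can exceed 10^5 labels.

```python
import math

# Minimal illustration of XMTC prediction (generic setup, not this
# paper's model): per-label sigmoid scores, keep the top-k above a cutoff.
def top_k_labels(logits, k=3, threshold=0.5):
    """logits: dict mapping label -> raw score."""
    scored = {lab: 1.0 / (1.0 + math.exp(-z)) for lab, z in logits.items()}
    ranked = sorted(scored, key=scored.get, reverse=True)
    return [lab for lab in ranked[:k] if scored[lab] >= threshold]

# Toy example with a tiny, hypothetical label set.
print(top_k_labels({"sports": 2.1, "politics": -1.0, "soccer": 1.3,
                    "finance": 0.2, "tennis": -0.4}, k=3))
# → ['sports', 'soccer', 'finance']
```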
1 code implementation • 19 May 2020 • Zhiyu Chen, Mohamed Trabelsi, Jeff Heflin, Yinan Xu, Brian D. Davison
Pretrained contextualized language models such as BERT have achieved impressive results on various natural language processing benchmarks.
1 code implementation • 6 Feb 2020 • Youshan Zhang, Brian D. Davison
We extract features from sixteen distinct pre-trained ImageNet models and examine the performance of twelve benchmarking methods when using the features.
no code implementations • 27 Jan 2020 • Zhiyu Chen, Haiyan Jia, Jeff Heflin, Brian D. Davison
We incorporate the generated schema labels into a mixed ranking model which not only considers the relevance between the query and dataset metadata but also the similarity between the query and generated schema labels.
2 code implementations • 4 Apr 2019 • Youshan Zhang, Brian D. Davison
Deep neural networks have been widely used in computer vision.
Ranked #14 on Domain Adaptation on Office-31
no code implementations • 10 Nov 2018 • Ovidiu Dan, Vaibhav Parikh, Brian D. Davison
We propose a systematic approach to use publicly accessible reverse DNS hostnames for geolocating IP addresses.
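The core intuition can be shown with a naive sketch (the paper's system is far more systematic): many ISP hostnames embed location hints such as airport codes, which can be matched against a dictionary of known places. The hint dictionary and hostnames below are made-up examples, not the paper's data.

```python
# Naive illustration of reverse-DNS geolocation (the paper's approach is
# far more systematic): scan hostname tokens for embedded location hints
# such as airport codes. HINTS is a made-up example dictionary.
HINTS = {"lax": "Los Angeles, US", "jfk": "New York, US",
         "fra": "Frankfurt, DE"}

def guess_location(hostname):
    """Split the hostname on dots and dashes and look up each token."""
    tokens = hostname.lower().replace("-", ".").split(".")
    for tok in tokens:
        if tok in HINTS:
            return HINTS[tok]
    return None  # no recognizable location hint

print(guess_location("core1-lax-te0.example.net"))  # → Los Angeles, US
print(guess_location("host.example.com"))           # → None
```

In practice one would first obtain the hostname via a PTR (reverse DNS) lookup for the IP address, then apply hint matching like the above.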