Applications of Unsupervised Deep Transfer Learning to Intelligent Fault Diagnosis: A Survey and Comparative Study

28 Dec 2019  ·  Zhibin Zhao, Qiyang Zhang, Xiaolei Yu, Chuang Sun, Shibin Wang, Ruqiang Yan, Xuefeng Chen

Recent progress in intelligent fault diagnosis (IFD) has relied heavily on deep representation learning and abundant labeled data. However, machines often operate under varying working conditions, or the target-task data follow a different distribution from the collected training data (the domain shift problem). Moreover, the newly collected test data in the target domain are usually unlabeled, which leads to the problem of unsupervised deep transfer learning based (UDTL-based) IFD. Although this field has developed rapidly, a standard, open-source code framework and a comparative study for UDTL-based IFD have not yet been established. In this paper, we construct a new taxonomy and perform a comprehensive review of UDTL-based IFD according to different tasks. A comparative analysis of several typical methods and datasets reveals open and essential issues that are rarely studied in UDTL-based IFD, including the transferability of features, the influence of backbones, negative transfer, and physical priors. To emphasize the importance and reproducibility of UDTL-based IFD, the whole test framework will be released to the research community to facilitate future research. In summary, the released framework and comparative study can serve as an extended interface and as baseline results for new studies on UDTL-based IFD. The code framework is available at \url{https://github.com/ZhaoZhibin/UDTL}.
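To make the UDTL-based IFD setting concrete, the sketch below shows one common recipe from this literature: a labeled source domain (one working condition) and an unlabeled target domain (another working condition) are aligned with a maximum mean discrepancy (MMD) penalty on the learned features. This is not the authors' released framework; the network sizes, class count, kernel bandwidth, and function names are illustrative assumptions only.

```python
# Minimal sketch of MMD-based unsupervised domain adaptation for fault diagnosis.
# All dimensions and names are hypothetical, not taken from the UDTL repository.
import torch
import torch.nn as nn

def gaussian_mmd(x, y, sigma=1.0):
    """Squared MMD between two feature batches using a single Gaussian kernel."""
    def kernel(a, b):
        dist = torch.cdist(a, b) ** 2
        return torch.exp(-dist / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Toy backbone and classifier for flattened vibration segments of length 1024.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(1024, 256), nn.ReLU())
classifier = nn.Linear(256, 10)  # e.g. 10 fault classes (assumed)
optimizer = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(classifier.parameters()), lr=1e-3
)
ce = nn.CrossEntropyLoss()

def train_step(source_x, source_y, target_x, lam=1.0):
    """One step: supervised loss on the source batch + MMD alignment with the
    unlabeled target batch. Target labels are never used."""
    f_s = feature_extractor(source_x)
    f_t = feature_extractor(target_x)
    loss = ce(classifier(f_s), source_y) + lam * gaussian_mmd(f_s, f_t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time, the classifier is applied directly to target-domain samples; the trade-off weight `lam` and the kernel bandwidth `sigma` are hyperparameters that typically need tuning per transfer task.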

