A Multitask Deep Learning Model for Parsing Bridge Elements and Segmenting Defect in Bridge Inspection Images

6 Sep 2022 · Chenyu Zhang, Muhammad Monjurul Karim, Ruwen Qin

The vast network of bridges in the United States creates a high demand for maintenance and rehabilitation, and the cost of assessing bridge conditions through manual visual inspection is a substantial burden. Advanced robots have been leveraged to automate inspection data collection. Automating the segmentation of multiclass bridge elements and of surface defects on those elements in the large volume of inspection images would enable efficient and effective assessment of bridge conditions. However, training separate single-task networks for element parsing (i.e., semantic segmentation of multiclass elements) and defect segmentation ignores the close connection between the two tasks: inspection images contain both recognizable structural elements and apparent surface defects. This paper develops a multitask deep learning model that exploits this interdependence between bridge elements and defects to boost the model's task performance and generalization. The study further investigates the effectiveness of the proposed model designs, including feature decomposition, cross-talk sharing, and a multi-objective loss function, in improving task performance. A dataset with pixel-level labels of bridge elements and corrosion was developed for model training and testing. Quantitative and qualitative evaluation shows that the multitask model outperforms its single-task counterparts not only in accuracy (2.59% higher mIoU on bridge element parsing and 1.65% on corrosion segmentation) but also in computation time and ease of implementation.
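
To make the multitask setup concrete, below is a minimal PyTorch sketch of the general pattern the abstract describes: a shared encoder feeding two task-specific segmentation heads, trained with a weighted multi-objective loss. The class and parameter names (MultitaskSegNet, num_elements, w) are illustrative assumptions, the layer sizes are placeholders, and the paper's feature decomposition and cross-talk sharing modules are only noted in comments, not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultitaskSegNet(nn.Module):
    """Illustrative multitask segmentation network: one shared encoder,
    two task-specific heads (bridge-element parsing and defect
    segmentation). Not the paper's exact architecture."""

    def __init__(self, num_elements: int, num_defect_classes: int = 2):
        super().__init__()
        # Shared encoder (stand-in for the paper's backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific heads; the paper additionally decomposes features
        # and exchanges information between tasks (cross-talk sharing).
        self.element_head = nn.Conv2d(128, num_elements, 1)
        self.defect_head = nn.Conv2d(128, num_defect_classes, 1)

    def forward(self, x):
        feats = self.encoder(x)
        size = x.shape[-2:]
        # Upsample both sets of logits back to the input resolution.
        elem = F.interpolate(self.element_head(feats), size=size,
                             mode="bilinear", align_corners=False)
        defect = F.interpolate(self.defect_head(feats), size=size,
                               mode="bilinear", align_corners=False)
        return elem, defect


def multitask_loss(elem_logits, defect_logits, elem_gt, defect_gt, w=0.5):
    """Multi-objective loss as a weighted sum of per-task cross-entropy
    terms; w is a hypothetical balancing weight."""
    return (w * F.cross_entropy(elem_logits, elem_gt)
            + (1 - w) * F.cross_entropy(defect_logits, defect_gt))


if __name__ == "__main__":
    model = MultitaskSegNet(num_elements=5)
    x = torch.randn(2, 3, 256, 256)
    elem_logits, defect_logits = model(x)  # two outputs from one pass
```

Because the two heads share one encoder pass, inference costs roughly one forward pass instead of two, which is consistent with the computation-time advantage the abstract reports for the multitask model.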
