The Worrisome Impact of an Inter-rater Bias on Neural Network Training

12 Jun 2019 · Or Shwartzman, Harel Gazit, Ilan Shelef, Tammy Riklin-Raviv

The problem of inter-rater variability is often discussed in the context of manual labeling of medical images. The emergence of data-driven approaches such as Deep Neural Networks (DNNs) has brought the issue of raters' disagreement to the forefront. In this paper, we highlight the issue of inter-rater bias, as opposed to random inter-observer variability, and demonstrate its influence on DNN training, leading to different segmentation results for the same input images. In particular, a DNN trained on one rater's annotations and evaluated against another rater's annotations yields lower overlap scores. Moreover, we demonstrate that the inter-rater bias present in the training annotations is amplified and becomes more consistent in the DNNs' segmentation predictions on test data. We support our findings by showing that a classifier-DNN trained to distinguish between raters based on their manual annotations performs better on the automatic segmentation predictions than on the raters' actual manual annotations. For this study, we used two different datasets: the ISBI 2015 Multiple Sclerosis (MS) challenge dataset, consisting of MRI scans each annotated by two raters with different levels of expertise; and Intracerebral Hemorrhage (ICH) CT scans with manual and semi-manual segmentations. The results underline a worrisome clinical implication of a DNN bias induced by an inter-rater bias during training. Specifically, we present a consistent underestimation of MS lesion loads computed from the segmentation predictions of a DNN trained on annotations provided by the less experienced rater. Similarly, the differences in ICH volumes computed from the outputs of identical DNNs, each trained on annotations from a different source, are larger and more consistent than the volume differences between the manual and semi-manual annotations used for training.
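To make the two quantities the abstract refers to concrete, below is a minimal sketch of how an overlap score (the Dice coefficient, the standard overlap measure in segmentation evaluation) and a lesion load might be computed from binary masks. This is an illustrative assumption, not the paper's code: the function names, the NumPy-based mask representation, and the voxel-volume parameter are all hypothetical.

```python
import numpy as np


def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks of equal shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Convention: two empty masks are considered a perfect match.
    return 2.0 * intersection / denom if denom > 0 else 1.0


def lesion_load_ml(mask: np.ndarray, voxel_volume_mm3: float) -> float:
    """Total lesion volume in millilitres from a binary mask and voxel size."""
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0


# Hypothetical usage mirroring the paper's comparison: evaluate a model
# trained on rater 1's annotations against rater 2's reference masks.
# dice_score(prediction_from_rater1_model, rater2_annotation)
# lesion_load_ml(prediction_from_rater1_model, voxel_volume_mm3=1.0)
```

Under this reading, the paper's "lower overlap scores" correspond to a drop in Dice when the training and evaluation annotations come from different raters, and the "underestimate of MS-lesion loads" corresponds to systematically smaller `lesion_load_ml` values from the DNN trained on the less experienced rater's labels.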
