Improving land cover segmentation across satellites using domain adaptation

25 Nov 2019 · Nadir Bengana, Janne Heikkilä

Land use and land cover mapping is essential to various fields of study, including forestry, agriculture, and urban management. Using earth observation satellites both facilitates and accelerates the task. Lately, deep learning methods have proven excellent at automating the mapping via semantic image segmentation. However, because deep neural networks require large amounts of labeled data, it is not easy to exploit the full potential of satellite imagery. Additionally, land cover tends to differ in appearance from one region to another, so having labeled data from one location does not necessarily help in mapping others. Furthermore, satellite images come in various multispectral bands, ranging from RGB to over twelve bands. In this paper, we aim to use domain adaptation to solve the aforementioned problems. We applied a well-performing domain adaptation approach to datasets we built using RGB images from the Sentinel-2, WorldView-2, and Pleiades-1 satellites with CORINE Land Cover as ground-truth labels. We also used the DeepGlobe land cover dataset. Experiments show a significant improvement over results obtained without domain adaptation, in some cases by over 20% mIoU. At times, the adapted model even manages to correct errors in the ground-truth labels.
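The abstract does not detail which domain adaptation approach was used, so the following is only an illustrative sketch of one common family of methods for cross-domain segmentation: adversarial adaptation in the output space, where a discriminator pushes the segmentation maps produced on unlabeled target-satellite images to resemble those produced on the labeled source domain. The names `Discriminator`, `train_step`, `seg_net`, and `NUM_CLASSES` are placeholders, not the paper's actual components.

```python
# Minimal sketch of adversarial output-space domain adaptation for semantic
# segmentation (hypothetical; the paper's exact method, networks, and
# hyperparameters are not specified in the abstract).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 5  # assumed number of CORINE-derived land cover classes


class Discriminator(nn.Module):
    """Tells source-domain segmentation outputs apart from target-domain ones."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def train_step(seg_net, disc, opt_seg, opt_disc,
               src_img, src_lbl, tgt_img, lambda_adv=0.001):
    """One training step on a labeled source batch and an unlabeled target batch."""
    bce = nn.BCEWithLogitsLoss()

    # 1) Update the segmentation network: supervised loss on the source domain
    #    plus an adversarial term that makes target outputs look source-like.
    opt_seg.zero_grad()
    src_pred = seg_net(src_img)                        # (B, C, H, W) logits
    loss_seg = F.cross_entropy(src_pred, src_lbl, ignore_index=255)
    tgt_pred = seg_net(tgt_img)
    d_tgt = disc(F.softmax(tgt_pred, dim=1))
    loss_adv = bce(d_tgt, torch.ones_like(d_tgt))      # fool the discriminator
    (loss_seg + lambda_adv * loss_adv).backward()
    opt_seg.step()

    # 2) Update the discriminator to distinguish source from target outputs.
    opt_disc.zero_grad()
    d_src = disc(F.softmax(src_pred.detach(), dim=1))
    d_tgt = disc(F.softmax(tgt_pred.detach(), dim=1))
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    loss_d.backward()
    opt_disc.step()
    return loss_seg.item(), loss_d.item()
```

Adapting in the output space is a common choice for segmentation because class-probability maps share spatial layout across satellites even when the raw imagery differs in resolution and radiometry; this sketch reflects that general idea rather than the authors' specific pipeline.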
