Convolutional Neural Networks Based Remote Sensing Scene Classification under Clear and Cloudy Environments

Remote sensing (RS) scene classification has wide applications in environmental monitoring and geological survey. In real-world applications, RS scene images captured by satellites fall into two scenarios: clear and cloudy environments. However, most existing methods do not consider these two environments simultaneously. In this paper, we assume that global and local features are discriminative in both clear and cloudy environments. Many existing Convolutional Neural Network (CNN) based models have achieved excellent results in image classification, yet they largely overlook global and local features in their network structures. In this paper, we propose a new CNN-based network (named GLNet) with a Global Encoder and a Local Encoder to extract discriminative global and local features for RS scene classification, where constraints on inter-class dispersion and intra-class compactness are embedded in the GLNet training. Experimental results on two public RS scene classification datasets show that the proposed GLNet achieves better performance with many existing CNN backbones under both clear and cloudy environments.
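The abstract describes a dual-encoder head (a Global Encoder and a Local Encoder) on top of a CNN backbone, trained with constraints for inter-class dispersion and intra-class compactness. The PyTorch sketch below is an illustration only: the class names, the attention-based local pooling, and the center-loss-style compactness term are assumptions chosen to make the idea concrete, not the paper's actual GLNet implementation.

```python
# Hypothetical sketch of a global/local feature head with a compactness term.
# Nothing here is taken from the GLNet paper; it only illustrates the idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualEncoderHead(nn.Module):
    """Assumed global/local head applied to a CNN backbone feature map."""

    def __init__(self, in_channels, feat_dim, num_classes):
        super().__init__()
        # "Global encoder" stand-in: global average pooling + projection.
        self.global_proj = nn.Linear(in_channels, feat_dim)
        # "Local encoder" stand-in: 1x1 attention over spatial positions,
        # followed by attention-weighted pooling and a projection.
        self.local_attn = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.local_proj = nn.Linear(in_channels, feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, fmap):
        b, c, h, w = fmap.shape
        g = self.global_proj(fmap.mean(dim=(2, 3)))              # (b, feat_dim)
        attn = torch.softmax(self.local_attn(fmap).view(b, -1), dim=1)
        local = (fmap.view(b, c, -1) * attn.unsqueeze(1)).sum(dim=2)
        l = self.local_proj(local)                                # (b, feat_dim)
        feat = torch.cat([g, l], dim=1)                           # fused descriptor
        return self.classifier(feat), feat


def compactness_loss(feat, labels, centers):
    """One plausible intra-class compactness term (center-loss style):
    pull each fused feature toward its class center."""
    return F.mse_loss(feat, centers[labels])


# Example training step (cross-entropy plus the assumed compactness term);
# an inter-class dispersion term could analogously push centers apart.
if __name__ == "__main__":
    head = DualEncoderHead(in_channels=512, feat_dim=128, num_classes=7)
    centers = torch.zeros(7, 256)                 # learnable in practice
    fmap = torch.randn(4, 512, 7, 7)              # backbone output
    labels = torch.randint(0, 7, (4,))
    logits, feat = head(fmap)
    loss = F.cross_entropy(logits, labels) + 0.1 * compactness_loss(feat, labels, centers)
```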


Results from the Paper


Task            Dataset   Model   Metric Name    Metric Value   Global Rank
Classification  RSSCN7    GLNet   1:1 Accuracy   95.07          #1
