Cross-domain road detection based on global-local adversarial learning framework from very high resolution satellite imagery

Road detection based on convolutional neural networks (CNNs) has achieved remarkable performance for very high resolution (VHR) remote sensing images. However, this approach relies on massive annotated samples, and the problem of limited generalization for unseen images still remains: the manual pixel-level labeling process is extremely time-consuming, and the performance of CNNs degrades significantly when there is a domain gap between the training and test images. In this paper, to address this problem, a global-local adversarial learning (GOAL) framework is proposed for cross-domain road detection. On the one hand, considering the similarity of the spatial information between the source and target domains, feature-space adversarial learning is applied to explore the features shared across domains. On the other hand, the complex background of VHR remote sensing images, such as the occlusions and shadows of trees and buildings, makes some roads easy to recognize while others are much more difficult; traditional global adversarial learning, however, cannot guarantee local semantic consistency. Therefore, a local alignment operation is introduced, which adaptively adjusts the weight of the adversarial loss according to the road recognition difficulty. Extensive experiments were conducted on different road datasets, including two public competition road datasets (SpaceNet and DeepGlobe) and our own large-scale annotated images from four cities: Boston, Birmingham, Shanghai, and Wuhan. The experimental results show that the proposed GOAL framework clearly improves cross-domain road detection performance, without any annotation of the target domain images.
For instance, taking the SpaceNet road dataset as the source domain, the IOU of the GOAL framework is increased, compared with the no-adaptation baseline, by 14.36%, 5.49%, 4.51%, 5.63%, and 15.14% on the DeepGlobe, Boston, Birmingham, Shanghai, and Wuhan images, respectively, which demonstrates its strong generalization capability.
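The local alignment idea described above can be sketched as a difficulty-weighted adversarial loss. The abstract does not give the exact formulation, so the sketch below is a hypothetical illustration: it assumes recognition difficulty is estimated from the entropy of the segmentation network's road probability at each pixel, so that hard (uncertain) pixels contribute more to domain alignment than easy, confident ones. The function name, the entropy-based weight, and the `gamma` exponent are all assumptions for illustration, not the authors' actual method.

```python
import math

def local_adversarial_loss(disc_outputs, road_probs, gamma=2.0, eps=1e-8):
    """Hypothetical sketch of a locally weighted adversarial loss.

    disc_outputs: discriminator's probability, per pixel, that the
                  feature comes from the source domain (floats in (0, 1)).
    road_probs:   segmentation network's road probability per pixel.

    Difficulty is approximated by the binary entropy of road_probs:
    pixels near 0.5 (hard to recognize) get weight close to 1, while
    confident pixels (near 0 or 1) get weight close to 0.
    """
    total = 0.0
    for d, p in zip(disc_outputs, road_probs):
        # binary entropy, normalized by log(2) so the weight lies in [0, 1]
        entropy = -(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))
        weight = (entropy / math.log(2.0)) ** gamma
        # standard GAN-style term pushing target features to "look source"
        total += weight * -math.log(d + eps)
    return total / len(disc_outputs)
```

Under this weighting, a batch of uncertain pixels dominates the alignment signal, while pixels the segmenter already handles confidently are nearly ignored, which is one plausible way to realize "adaptively adjusting the weight of the adversarial loss according to the road recognition difficulty."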

Graphical Abstract of proposed framework

How to cite

Lu X, Zhong Y, Zheng Z, et al. Cross-domain road detection based on global-local adversarial learning framework from very high resolution satellite imagery[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 180: 296-312.
