Open-Source Data-Driven Cross-Domain Road Detection From Very High Resolution Remote Sensing Imagery

High-precision road detection from very high resolution (VHR) remote sensing images has broad application value. However, state-of-the-art deep learning-based methods often fail to identify roads when there is a distribution discrepancy between the training and test samples, due to their limited generalization ability. In this paper, to address this problem, an open-source data-driven domain-specific representation (OSM-DOER) framework is proposed for cross-domain road detection. Since the spatial structure information of the source and target domains is similar while the texture information differs, the domain-specific representation (DOER) framework is proposed, which not only aligns the distributions of the spatial structure information, but also learns the domain-specific texture information. Furthermore, to enhance the representation of the target domain data distribution, open-source and freely available OpenStreetMap (OSM) road centerline data are utilized to generate target domain samples, which are then used during network training as the supervised information for the target domain. Finally, to verify the superiority of the proposed OSM-DOER framework, extensive experiments were conducted with the public SpaceNet and DeepGlobe road datasets, as well as large-scale road datasets from Birmingham in the UK and Shanghai in China. The experimental results demonstrate that the proposed OSM-DOER framework shows clear advantages over mainstream road detection methods, and that OSM road centerline data have great potential for the road detection task.
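The core idea of turning OSM road centerlines into target-domain supervision can be illustrated by rasterizing a vector polyline into a binary road mask. The sketch below is only a minimal illustration of this step, not the paper's actual implementation: the function name, the fixed buffer `width`, and the use of pixel coordinates (rather than georeferenced coordinates reprojected into the image frame) are all simplifying assumptions.

```python
import numpy as np

def rasterize_centerline(points, shape, width=3.0):
    """Rasterize a road centerline polyline into a binary mask.

    A pixel is marked as road if its distance to any polyline segment
    is at most width / 2 (i.e., the centerline is buffered by a fixed
    width). `points` is a list of (x, y) pixel coordinates; `shape`
    is (height, width) of the output mask. This is an illustrative
    sketch, not the OSM-DOER label-generation code.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel-center coordinate grids
    mask = np.zeros(shape, dtype=bool)
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0:
            # Degenerate segment: distance to a single point.
            dist = np.hypot(xs - x0, ys - y0)
        else:
            # Project each pixel onto the segment, clamped to [0, 1].
            t = ((xs - x0) * dx + (ys - y0) * dy) / seg_len2
            t = np.clip(t, 0.0, 1.0)
            dist = np.hypot(xs - (x0 + t * dx), ys - (y0 + t * dy))
        mask |= dist <= width / 2
    return mask.astype(np.uint8)
```

In practice the buffer width would be chosen to match the typical road width at the image's ground sampling distance, and the OSM geometries would first be reprojected into the image coordinate system.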

Graphical abstract of the proposed framework

How to cite

X. Lu, Y. Zhong, and L. Zhang, "Open-Source Data-Driven Cross-Domain Road Detection From Very High Resolution Remote Sensing Imagery," IEEE Transactions on Image Processing, vol. 31, pp. 6847-6862, 2022.