TY - JOUR
T1 - A Novel Boundary Loss Function in Deep Convolutional Networks to Improve the Buildings Extraction From High-Resolution Remote Sensing Images
AU - Hosseinpour, Hamidreza
AU - Samadzadegan, Farhad
AU - Dadrass Javan, Farzaneh
PY - 2022/6/6
Y1 - 2022/6/6
N2 - In recent years, there has been a significant increase in the production of high-resolution aerial and satellite imagery. Analyzing these images and extracting urban features from them, especially building components, is a significant challenge in photogrammetry and remote sensing. Deep convolutional neural networks have become a powerful model for the semantic segmentation of buildings. However, owing to the structure of these networks, accurately recovering building boundary information during training is difficult, which leads to ambiguous boundary areas. Moreover, distribution-based loss functions, such as the binary cross-entropy loss, cannot by themselves improve segmentation accuracy in the boundary areas of buildings. Accordingly, in this article, a derivative boundary loss function is introduced to optimize and enhance the extraction of buildings in boundary areas. To calculate the proposed boundary loss, a distance transform image (DTI) is obtained from both the ground-truth image and the predicted image produced by the segmentation network, and a derivative method for calculating the DTI is presented. Another advantage of the proposed loss function is that it can be used with a wide range of deep convolutional segmentation networks. The proposed method was evaluated on the ISPRS Potsdam and Vaihingen datasets. The results show improved evaluation metrics and better building extraction when the proposed boundary loss function is used alongside a distribution-based loss function.
AB - In recent years, there has been a significant increase in the production of high-resolution aerial and satellite imagery. Analyzing these images and extracting urban features from them, especially building components, is a significant challenge in photogrammetry and remote sensing. Deep convolutional neural networks have become a powerful model for the semantic segmentation of buildings. However, owing to the structure of these networks, accurately recovering building boundary information during training is difficult, which leads to ambiguous boundary areas. Moreover, distribution-based loss functions, such as the binary cross-entropy loss, cannot by themselves improve segmentation accuracy in the boundary areas of buildings. Accordingly, in this article, a derivative boundary loss function is introduced to optimize and enhance the extraction of buildings in boundary areas. To calculate the proposed boundary loss, a distance transform image (DTI) is obtained from both the ground-truth image and the predicted image produced by the segmentation network, and a derivative method for calculating the DTI is presented. Another advantage of the proposed loss function is that it can be used with a wide range of deep convolutional segmentation networks. The proposed method was evaluated on the ISPRS Potsdam and Vaihingen datasets. The results show improved evaluation metrics and better building extraction when the proposed boundary loss function is used alongside a distribution-based loss function.
U2 - 10.1109/jstars.2022.3178470
DO - 10.1109/jstars.2022.3178470
M3 - Article
SN - 1939-1404
VL - 15
SP - 4437
EP - 4454
JO - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
JF - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
ER -