Description
After a disaster, first responders and recovery planners need accurate, rapidly derived information on the status of existing buildings, as well as on newly built ones, both to help victims and to make decisions for the reconstruction process. Deep learning, and in particular convolutional neural network (CNN)-based approaches, have recently become the state of the art for extracting information from remote sensing images, including image-based structural damage assessment. However, these methods predominantly rely on manually extracted training samples. In the present study, we use pre-disaster OpenStreetMap building data to automatically generate training samples for the proposed deep learning approach, after co-registering the map with the satellite images. The proposed framework is based on the U-Net design with residual connections, which have been shown to be an effective means of increasing the efficiency of CNN-based models. The ResUnet is followed by a Conditional Random Field (CRF) implementation to further refine the results. Experimental analysis was carried out on selected very high resolution (VHR) satellite images covering various scenarios following Super Typhoon Haiyan (2013), in both the damage and the recovery phases, in Tacloban, the Philippines. The results show the robustness of the proposed ResUnet-CRF framework in updating the building map after a disaster for both damage and recovery situations, with an overall F1-score of 84.2%.
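The record describes the method but does not include an implementation. For orientation only, below is a minimal sketch in PyTorch of a U-Net-style encoder-decoder with residual blocks, the general architecture the abstract calls "ResUnet". The depth, layer widths, input bands, and the two-class building/background head are illustrative assumptions, and the CRF refinement step is omitted.

```python
# Minimal sketch (not the authors' code) of a U-Net with residual
# connections. All layer sizes and the two-class output are assumptions.
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """Two 3x3 convolutions with a residual (shortcut) connection."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the shortcut matches the output channels.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))


class ResUnet(nn.Module):
    """Small U-Net built from residual blocks; width/depth are placeholders."""

    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc1 = ResBlock(in_ch, 64)
        self.enc2 = ResBlock(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = ResBlock(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = ResBlock(256, 128)   # 128 (upsampled) + 128 (skip)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = ResBlock(128, 64)    # 64 (upsampled) + 64 (skip)
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # per-pixel class logits


if __name__ == "__main__":
    logits = ResUnet()(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

In the pipeline described above, the per-pixel logits would then be post-processed by a CRF to refine building boundaries; the record does not specify how that step was implemented.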
| Date made available | 7 Jan 2020 |
|---|---|
| Publisher | Data Archiving and Networked Services (DANS) |
| Date of data production | 8 Jul 2019 |