Multi-Resolution Feature Fusion for Image Classification of Building Damages with Convolutional Neural Networks

D. Duarte (Corresponding Author), F. Nex, N. Kerle, G. Vosselman

Research output: Contribution to journal › Article › Academic › peer-review

77 Citations (Scopus)
190 Downloads (Pure)


Remote sensing images have long been preferred for building damage assessment. Recently proposed methods for extracting damaged regions from remote sensing imagery rely on convolutional neural networks (CNN). The common approach is to train a CNN independently for each resolution level (satellite, aerial, and terrestrial) in a binary classification setting. Meanwhile, an ever-growing amount of multi-resolution imagery is being collected, yet current approaches use a single resolution as their input. Training with up/down-sampled images has been reported to benefit image classification accuracy in both the computer vision and remote sensing domains. However, it remains unclear whether such multi-resolution information can also be captured from images with genuinely different spatial resolutions, such as satellite and airborne (from both manned and unmanned platforms) imagery. In this paper, three multi-resolution CNN feature fusion approaches are proposed and tested against two baseline (mono-resolution) methods for the image classification of building damages. Overall, the results show better accuracy and localization capabilities when fusing multi-resolution feature maps, specifically when these feature maps are merged and include feature information from the intermediate layers of each resolution-level network. Nonetheless, the multi-resolution feature fusion approaches behaved differently at each resolution level. In the satellite and aerial (unmanned) cases, the accuracy improvements reached 2%, while the improvement for the airborne (manned) case was marginal. The results were further confirmed by testing the approach for geographical transferability, in which the improvements between the baseline and multi-resolution experiments were largely maintained.
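The core idea of fusing intermediate feature maps from networks trained at different resolution levels can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the branch names, channel counts, and nearest-neighbour resampling are illustrative assumptions; the paper's networks would produce the feature maps that are randomly generated here.

```python
import numpy as np

def upsample_nearest(fmap, target_hw):
    """Nearest-neighbour upsampling of a (C, H, W) feature map to target_hw."""
    c, h, w = fmap.shape
    th, tw = target_hw
    rows = np.arange(th) * h // th  # map each target row to a source row
    cols = np.arange(tw) * w // tw  # map each target column to a source column
    return fmap[:, rows][:, :, cols]

def fuse_multires(feature_maps):
    """Resample all maps to the finest spatial grid, then concatenate channels.

    Channel-wise concatenation of spatially aligned maps is one common
    feature-fusion scheme; the paper compares several fusion variants.
    """
    target = max((f.shape[1], f.shape[2]) for f in feature_maps)
    aligned = [upsample_nearest(f, target) for f in feature_maps]
    return np.concatenate(aligned, axis=0)

# Hypothetical intermediate features from satellite-, manned-, and
# unmanned-platform branches (random stand-ins for real CNN activations).
sat = np.random.rand(64, 8, 8)
air = np.random.rand(64, 16, 16)
uav = np.random.rand(64, 32, 32)
fused = fuse_multires([sat, air, uav])
print(fused.shape)  # (192, 32, 32)
```

The fused tensor would then feed the remaining classification layers, letting them draw on feature information from every resolution branch at once.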
Original language: English
Article number: 1636
Pages (from-to): 1-26
Number of pages: 26
Journal: Remote Sensing
Issue number: 10
Publication status: Published - 14 Oct 2018




