A deep 2D/3D Feature-Level fusion for classification of UAV multispectral imagery in urban areas

Hossein Pourazar, Farhad Samadzadegan*, F. Dadrass Javan

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

In this paper, a deep convolutional neural network (CNN) is developed to classify Unmanned Aerial Vehicle (UAV)-derived multispectral imagery and normalized digital surface model (DSM) data in urban areas. For this purpose, a multi-input deep CNN (MIDCNN) architecture is designed with 11 parallel CNNs: 10 deep CNNs that extract features from all possible triple combinations of the spectral bands, and one deep CNN dedicated to the normalized DSM data. The proposed method is compared with traditional single-input (SI) and double-input (DI) deep CNN designs and a random forest (RF) classifier, and is evaluated on two independent test datasets. The results indicate that adding CNN branches in parallel improves the classifier's generalization and reduces the risk of overfitting. The overall accuracy and kappa value of the proposed method are 95% and 0.93 for the first test dataset, and 96% and 0.94 for the second test dataset.
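
The multi-input, feature-level fusion idea described in the abstract can be illustrated with a minimal TensorFlow/Keras sketch. This is not the authors' implementation: the patch size, branch depth, filter counts, and number of classes below are illustrative assumptions, and only the overall structure (10 three-band branches plus one normalized-DSM branch, fused before classification) follows the abstract.

from itertools import combinations
from tensorflow.keras import layers, Model

PATCH = 32       # assumed spatial size of the input patches
N_BANDS = 5      # assumed number of multispectral bands (gives C(5,3) = 10 triples)
N_CLASSES = 6    # assumed number of urban land-cover classes


def cnn_branch(inp, name):
    # Small 2D feature extractor; every branch uses the same layout here.
    x = layers.Conv2D(32, 3, padding="same", activation="relu", name=f"{name}_conv1")(inp)
    x = layers.MaxPooling2D(name=f"{name}_pool1")(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu", name=f"{name}_conv2")(x)
    return layers.GlobalAveragePooling2D(name=f"{name}_gap")(x)


# 10 branches: one per triple combination of the spectral bands.
spectral_inputs, features = [], []
for i, combo in enumerate(combinations(range(N_BANDS), 3)):
    inp = layers.Input(shape=(PATCH, PATCH, 3),
                       name="bands_" + "_".join(map(str, combo)))
    spectral_inputs.append(inp)
    features.append(cnn_branch(inp, name=f"spec{i}"))

# 11th branch: normalized DSM (single channel).
ndsm_input = layers.Input(shape=(PATCH, PATCH, 1), name="ndsm")
features.append(cnn_branch(ndsm_input, name="ndsm"))

# Feature-level fusion: concatenate branch features, then classify.
fused = layers.Concatenate(name="fusion")(features)
x = layers.Dense(128, activation="relu")(fused)
output = layers.Dense(N_CLASSES, activation="softmax")(x)

model = Model(inputs=spectral_inputs + [ndsm_input], outputs=output)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Training such a model would require each sample to be supplied as a list of 11 co-registered patches (one per branch); those data-preparation details are not specified in the abstract.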
Original language: English
Pages (from-to): 1-18
Number of pages: 18
Journal: Geocarto International
DOIs
Publication status: E-pub ahead of print/First online - 4 Aug 2021

Keywords

  • ITC-ISI-JOURNAL-ARTICLE
  • UT-Hybrid-D
