Learning general representations for semantic segmentation and height estimation from remote sensing images

Wufan Zhao*, C. Persello, Hu Ding, A. Stein

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Deep-learning methods have been applied to remote-sensing imagery for height estimation and semantic segmentation. Recent research has demonstrated that multi-task learning can complement task-specific features and improve prediction accuracy across a range of tasks. However, effectively learning representations that perform well on multiple tasks remains challenging. We propose a unified network that jointly learns general representations for multiple vision tasks, aligning them via small task-specific adapters. Experimental results on the Vaihingen dataset demonstrate that general representation learning improves the performance of state-of-the-art methods on height estimation and semantic segmentation.
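To make the idea concrete, the sketch below shows the kind of architecture the abstract describes: a shared encoder learns a general representation, and small task-specific adapters align it for a semantic-segmentation head and a height-regression head trained jointly. This is a minimal illustration, not the authors' implementation; all layer sizes, module names (Adapter, MultiTaskNet), and the loss choices are assumptions.

    # Minimal sketch (illustrative assumptions, not the paper's code):
    # shared backbone + small task-specific adapters + two task heads.
    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        """Small task-specific adapter: a bottleneck 1x1-conv residual block."""
        def __init__(self, channels: int, bottleneck: int = 64):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(channels, bottleneck, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(bottleneck, channels, kernel_size=1),
            )

        def forward(self, x):
            # Residual connection keeps the shared representation intact
            # while letting each task apply a lightweight alignment.
            return x + self.block(x)

    class MultiTaskNet(nn.Module):
        """Shared encoder with per-task adapters and heads (hypothetical layout)."""
        def __init__(self, in_channels: int = 3, num_classes: int = 6, feat: int = 256):
            super().__init__()
            # Shared encoder producing the general representation.
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, feat, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            self.seg_adapter = Adapter(feat)
            self.height_adapter = Adapter(feat)
            self.seg_head = nn.Conv2d(feat, num_classes, kernel_size=1)  # class logits
            self.height_head = nn.Conv2d(feat, 1, kernel_size=1)         # height map

        def forward(self, x):
            shared = self.encoder(x)
            seg_logits = self.seg_head(self.seg_adapter(shared))
            height = self.height_head(self.height_adapter(shared))
            return seg_logits, height

    if __name__ == "__main__":
        model = MultiTaskNet()
        image = torch.randn(1, 3, 256, 256)  # dummy aerial image tile
        seg_logits, height = model(image)
        # Joint objective: cross-entropy for segmentation + L1 for height
        # regression (an assumed combination for illustration).
        seg_target = torch.randint(0, 6, (1, 256, 256))
        height_target = torch.randn(1, 1, 256, 256)
        loss = nn.functional.cross_entropy(seg_logits, seg_target) \
             + nn.functional.l1_loss(height, height_target)
        loss.backward()
        print(seg_logits.shape, height.shape)

Because the adapters are residual and small, gradients from both tasks flow through the same encoder, which is what drives it toward a general representation rather than task-specific features.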

Original language: English
Title of host publication: 2023 Joint Urban Remote Sensing Event
Publisher: IEEE
Number of pages: 4
ISBN (Electronic): 9781665493734
DOIs
Publication status: Published - 8 Jun 2023
Event: Joint Urban Remote Sensing Event, JURSE 2023 - Heraklion, Greece
Duration: 17 May 2023 – 19 May 2023
http://jurse2023.org/

Publication series

Name: 2023 Joint Urban Remote Sensing Event, JURSE 2023

Conference

Conference: Joint Urban Remote Sensing Event, JURSE 2023
Abbreviated title: JURSE 2023
Country/Territory: Greece
City: Heraklion
Period: 17/05/23 – 19/05/23
Internet address: http://jurse2023.org/

Keywords

  • height estimation
  • knowledge distillation
  • multi-task learning
  • remote sensing image
  • semantic segmentation
