Abstract
Deep-learning methods have been applied to remote-sensing imagery for height estimation and semantic segmentation. Recent research has demonstrated that multi-task learning can complement task-specific features and improve prediction accuracy across a range of tasks. However, effectively learning representations that perform well on multiple tasks remains challenging. We propose a unified network that jointly learns general representations for multiple vision tasks and aligns them to each task through small task-specific adapters. Experimental results on the Vaihingen dataset demonstrate that this general representation learning improves the performance of state-of-the-art methods on height estimation and semantic segmentation.
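To make the abstract's architecture concrete, the sketch below illustrates one way a shared ("general") representation can be aligned to two tasks, semantic segmentation and height estimation, through small task-specific adapters. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the module names, adapter design, channel sizes, loss weighting, and the assumption of six segmentation classes (as in the ISPRS Vaihingen benchmark) are all illustrative.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Generic convolutional backbone producing task-agnostic features."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class TaskAdapter(nn.Module):
    """Small residual adapter that aligns shared features to one task."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.adapt = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 1),
        )

    def forward(self, f):
        # Residual connection preserves the shared representation.
        return f + self.adapt(f)

class MultiTaskNet(nn.Module):
    """Shared encoder + per-task adapters and lightweight prediction heads."""
    def __init__(self, num_classes=6, feat_ch=64):
        super().__init__()
        self.encoder = SharedEncoder(feat_ch=feat_ch)
        self.seg_adapter = TaskAdapter(feat_ch)
        self.height_adapter = TaskAdapter(feat_ch)
        self.seg_head = nn.Conv2d(feat_ch, num_classes, 1)  # per-pixel classes
        self.height_head = nn.Conv2d(feat_ch, 1, 1)         # per-pixel height

    def forward(self, x):
        shared = self.encoder(x)
        seg_logits = self.seg_head(self.seg_adapter(shared))
        height = self.height_head(self.height_adapter(shared))
        return seg_logits, height

# Joint training step with a simple weighted sum of the two task losses
# (the 0.5 weight is an arbitrary illustrative choice).
model = MultiTaskNet()
images = torch.randn(2, 3, 128, 128)
seg_target = torch.randint(0, 6, (2, 128, 128))
height_target = torch.randn(2, 1, 128, 128)
seg_logits, height_pred = model(images)
loss = nn.CrossEntropyLoss()(seg_logits, seg_target) \
     + 0.5 * nn.L1Loss()(height_pred, height_target)
loss.backward()
```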
Original language | English |
---|---|
Title of host publication | 2023 Joint Urban Remote Sensing Event |
Publisher | IEEE |
Number of pages | 4 |
ISBN (Electronic) | 9781665493734 |
DOIs | |
Publication status | Published - 8 Jun 2023 |
Event | Joint Urban Remote Sensing Event, JURSE 2023 - Heraklion, Greece. Duration: 17 May 2023 → 19 May 2023. http://jurse2023.org/ |
Publication series
Name | 2023 Joint Urban Remote Sensing Event, JURSE 2023 |
---|---|
Conference
Conference | Joint Urban Remote Sensing Event, JURSE 2023 |
---|---|
Abbreviated title | JURSE 2023 |
Country/Territory | Greece |
City | Heraklion |
Period | 17/05/23 → 19/05/23 |
Internet address | http://jurse2023.org/ |
Keywords
- height estimation
- knowledge distillation
- multi-task learning
- remote sensing image
- semantic segmentation