Abstract
Assigning specific categories to geospatial objects at the pixel level is a fundamental task in remote sensing image analysis. With the rapid development of sensor technologies, remotely sensed images can be captured at multiple spatial resolutions (MSR), with information content manifested at different scales. Extracting information from these MSR images offers substantial opportunities for enhanced feature representation and characterisation. However, MSR images suffer from two critical issues: (1) increased scale variation of geo-objects and (2) loss of detailed information at coarse spatial resolutions. To address these issues, we propose a novel scale-aware neural network (SaNet) for the semantic segmentation of MSR remotely sensed imagery. SaNet deploys a densely connected feature fusion module (DCFFM) to capture high-quality multi-scale context, so that scale variation is handled properly and segmentation quality improves for both large and small geo-objects. A spatial feature recalibration module (SFRM) is further incorporated into the network to learn intact semantic content with enhanced spatial relationships, mitigating the negative effects of information loss. The combination of DCFFM and SFRM allows SaNet to learn a scale-aware feature representation that outperforms existing multi-scale feature representations. Extensive experiments on three semantic segmentation datasets demonstrate the effectiveness of the proposed SaNet in cross-resolution segmentation.
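The abstract gives only a high-level view of the architecture; the layer-level design of DCFFM and SFRM is not specified here. The PyTorch sketch below is therefore an illustrative assumption of how the two modules could be realised: DCFFM as densely connected dilated convolutions that fuse multi-scale context, and SFRM as a residual spatial-attention recalibration. All class names (`DCFFM`, `SFRM`, `SaNetSketch`), kernel sizes, dilation rates, and the stand-in encoder stem are hypothetical choices, not the authors' exact implementation.

```python
# Minimal sketch of the scale-aware design described in the abstract.
# Module internals (dilation rates, the attention gating in SFRM, and
# the tiny stand-in encoder) are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DCFFM(nn.Module):
    """Densely connected feature fusion (assumed realisation): each
    dilated branch receives the concatenation of the input and all
    previous branch outputs, accumulating multi-scale context."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList()
        for i, d in enumerate(dilations):
            in_ch = channels * (i + 1)  # dense connections widen the input
            self.branches.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ))
        self.fuse = nn.Conv2d(channels * (len(dilations) + 1), channels, 1)

    def forward(self, x):
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))


class SFRM(nn.Module):
    """Spatial feature recalibration (one plausible realisation):
    reweight each location with a learned spatial attention map so
    content degraded at coarse resolutions is emphasised."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, 7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Residual recalibration: keep the original features and add a
        # spatially reweighted copy.
        return x * self.attn(x) + x


class SaNetSketch(nn.Module):
    """Encoder features -> DCFFM (multi-scale context) -> SFRM
    (spatial recalibration) -> per-pixel classifier."""
    def __init__(self, num_classes, channels=64):
        super().__init__()
        self.stem = nn.Sequential(  # stand-in for a real backbone encoder
            nn.Conv2d(3, channels, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.dcffm = DCFFM(channels)
        self.sfrm = SFRM(channels)
        self.classifier = nn.Conv2d(channels, num_classes, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        f = self.sfrm(self.dcffm(self.stem(x)))
        logits = self.classifier(f)
        return F.interpolate(logits, size=(h, w), mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    model = SaNetSketch(num_classes=6)
    out = model(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 6, 256, 256])
```

In this sketch, the dense connections let each dilated branch see all shallower-scale outputs, one common way to stabilise multi-scale fusion, while the residual form of the recalibration keeps the original features intact when the attention map is uninformative.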
| Original language | English |
|---|---|
| Article number | 5015 |
| Pages (from-to) | 1-19 |
| Number of pages | 19 |
| Journal | Remote Sensing |
| Volume | 13 |
| Issue number | 24 |
| DOIs | |
| Publication status | Published - 10 Dec 2021 |
Keywords
- Deep convolutional neural network
- Multiple spatial resolutions
- Remote sensing
- Scale-aware feature representation
- Semantic segmentation