Abstract
Clouds in optical remote sensing images often obscure essential information: they can occlude or distort ground features, hindering the subsequent analysis and extraction of target information. The removal of clouds from optical images is therefore a critical task in many applications. Synthetic aperture radar (SAR)-optical image fusion has achieved encouraging performance in reconstructing cloud-covered information, but such methods are typically time-consuming and computationally intensive, which makes them difficult to apply in practice. This letter proposes a novel feature pyramid network (FPNet) that effectively reconstructs the missing optical information. FPNet extracts and fuses multiscale features from the SAR image and the cloudy optical image, leveraging convolutional neural networks to merge feature maps across scales. Because it downsamples the input images while preserving important information, it learns useful features efficiently and reduces the computational workload. Experiments are conducted on the global SEN12MS-CR benchmark dataset and a regional South Sudan dataset, and the results are compared with those of state-of-the-art methods such as DSen2-CR and GLF-CR. The experimental results demonstrate that FPNet achieves superior performance in terms of both accuracy and visual quality. FPNet is also fast in both inference and training: it runs at 96 FPS and requires less than 4 h to train a single epoch on SEN12MS-CR using two NVIDIA RTX 2080 Ti GPUs, making it suitable for application to various study areas.
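The record does not include the architecture details, but the abstract's core idea (concatenate SAR and cloudy optical inputs, build a downsampled feature pyramid, and merge the scales top-down to reconstruct the cloud-free image) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration, not the authors' FPNet: the class names (`FPNFusion`, `ConvBlock`), the channel counts (13 Sentinel-2 bands, 2 Sentinel-1 polarizations), the layer widths, the nearest-neighbor upsampling, and the residual output head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    """Two 3x3 convolutions with ReLU; the basic unit of the pyramid. (Illustrative.)"""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class FPNFusion(nn.Module):
    """FPN-style SAR-optical fusion sketch; NOT the paper's exact FPNet.

    SAR and cloudy optical inputs are concatenated, encoded at
    progressively coarser scales, then merged top-down with lateral
    connections so every scale contributes to the reconstruction.
    """

    def __init__(self, opt_ch=13, sar_ch=2, width=64, depth=4):
        super().__init__()
        self.stem = ConvBlock(opt_ch + sar_ch, width)
        self.down = nn.ModuleList(ConvBlock(width, width) for _ in range(depth))
        self.lateral = nn.ModuleList(nn.Conv2d(width, width, 1) for _ in range(depth))
        self.head = nn.Conv2d(width, opt_ch, 3, padding=1)

    def forward(self, optical, sar):
        x = self.stem(torch.cat([optical, sar], dim=1))
        feats = [x]
        for block in self.down:
            x = block(F.max_pool2d(x, 2))  # downsample, then refine
            feats.append(x)
        # Top-down pathway: upsample the coarser map and add the lateral feature.
        y = feats[-1]
        for lat, skip in zip(self.lateral, reversed(feats[:-1])):
            y = F.interpolate(y, scale_factor=2, mode="nearest") + lat(skip)
        # Residual prediction: refine the cloudy input into cloud-free bands.
        return optical + self.head(y)


if __name__ == "__main__":
    net = FPNFusion()
    opt = torch.randn(1, 13, 256, 256)  # cloudy Sentinel-2 patch (assumed bands)
    sar = torch.randn(1, 2, 256, 256)   # co-registered Sentinel-1 patch
    print(net(opt, sar).shape)          # torch.Size([1, 13, 256, 256])
```

Pooling early in the encoder is what keeps such a design cheap: most convolutions run on downsampled maps, consistent with the abstract's claim that FPNet reduces the computational workload by downsampling while preserving important information.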
| Original language | English |
|---|---|
| Article number | 6008605 |
| Pages (from-to) | 1-5 |
| Number of pages | 5 |
| Journal | IEEE Geoscience and Remote Sensing Letters |
| Volume | 21 |
| DOIs | |
| Publication status | Published - 7 May 2024 |
Keywords
- Cloud removal
- Data fusion
- Deep Learning (DL)
- Remote sensing
- Synthetic aperture radar (SAR)-optical