Remote Sensing of Environment | 2019

Deep learning-based fusion of Landsat-8 and Sentinel-2 images for a harmonized surface reflectance product


Abstract


Landsat and Sentinel-2 sensors together provide the most widely accessible medium-to-high spatial resolution multispectral data for a wide range of applications, such as vegetation phenology identification, crop yield estimation, and forest disturbance detection. More timely and accurate observations of the Earth's surface and its dynamics are expected from the synergistic use of Landsat and Sentinel-2 data, which entails bridging the spatial resolution gap between Landsat (30 m) and Sentinel-2 (10 m or 20 m) images. However, widely used data fusion techniques may not fulfil the community's need for a temporally dense reflectance product at 10 m spatial resolution generated from combined Landsat and Sentinel-2 images, because of their inherent algorithmic weaknesses. Inspired by recent advances in deep learning, this study developed an extended super-resolution convolutional neural network (ESRCNN) within a data fusion framework, specifically for blending Landsat-8 Operational Land Imager (OLI) and Sentinel-2 Multispectral Imager (MSI) data. Results demonstrated the effectiveness of the deep learning-based fusion algorithm in yielding a consistent and comparable dataset at 10 m from Landsat-8 and Sentinel-2. Further accuracy assessments revealed that the performance of the fusion network was influenced by both the number of input auxiliary Sentinel-2 images and the temporal interval (i.e., the difference in image acquisition dates) between the auxiliary Sentinel-2 images and the target Landsat-8 image. Compared with the benchmark algorithm, area-to-point regression kriging (ATPRK), the deep learning-based fusion framework performed better in the quantitative assessment in terms of root mean square error (RMSE), correlation coefficient (CC), universal image quality index (UIQI), relative global-dimensional synthesis error (ERGAS), and spectral angle mapper (SAM). ESRCNN also better preserved the reflectance distribution of the original image than ATPRK, resulting in improved image quality. Overall, the developed data fusion network that blends Landsat-8 and Sentinel-2 images has the potential to help generate continuous reflectance observations at a higher temporal frequency than can be obtained from a single Landsat-like sensor.
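
The abstract names the method (an extended SRCNN used as a fusion network) but not its exact architecture. The following is a minimal, illustrative PyTorch sketch of an SRCNN-style fusion network in that spirit: the 30 m Landsat-8 bands are upsampled to the 10 m Sentinel-2 grid, concatenated with temporally close auxiliary Sentinel-2 bands, and mapped to 10 m Landsat-8 reflectance by three convolutional layers. The class name FusionSRCNN, the layer widths, kernel sizes, and band counts are assumptions for illustration only, not the published ESRCNN configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionSRCNN(nn.Module):
    """Illustrative SRCNN-style fusion network (not the paper's exact ESRCNN).

    Inputs:
        landsat_30m  - (N, n_landsat_bands, H, W) Landsat-8 reflectance at 30 m
        sentinel_10m - (N, n_sentinel_bands, 3H, 3W) auxiliary Sentinel-2 bands at 10 m
    Output:
        (N, n_landsat_bands, 3H, 3W) fused Landsat-8-like reflectance at 10 m
    """

    def __init__(self, n_landsat_bands=6, n_sentinel_bands=10):
        super().__init__()
        in_ch = n_landsat_bands + n_sentinel_bands
        # Three-layer SRCNN-style feature extraction, non-linear mapping, reconstruction
        self.feature = nn.Conv2d(in_ch, 64, kernel_size=9, padding=4)
        self.mapping = nn.Conv2d(64, 32, kernel_size=5, padding=2)
        self.reconstruct = nn.Conv2d(32, n_landsat_bands, kernel_size=5, padding=2)

    def forward(self, landsat_30m, sentinel_10m):
        # Resample the 30 m Landsat bands to the 10 m Sentinel-2 grid
        landsat_up = F.interpolate(landsat_30m, size=sentinel_10m.shape[-2:],
                                   mode='bicubic', align_corners=False)
        x = torch.cat([landsat_up, sentinel_10m], dim=1)
        x = F.relu(self.feature(x))
        x = F.relu(self.mapping(x))
        return self.reconstruct(x)

Example usage with random patches (shapes only, no real reflectance data):

landsat = torch.rand(1, 6, 100, 100)     # 30 m patch
sentinel = torch.rand(1, 10, 300, 300)   # co-registered 10 m patch
fused = FusionSRCNN()(landsat, sentinel) # -> (1, 6, 300, 300)

The quantitative comparison with ATPRK is reported in terms of RMSE, CC, UIQI, ERGAS, and SAM. As a pointer to what three of these metrics measure, below are standard NumPy implementations of RMSE, SAM, and ERGAS; the exact conventions (e.g., the resolution ratio used in ERGAS, per-band versus global averaging) may differ from those used in the paper.

import numpy as np

def rmse(ref, est):
    """Root mean square error between a reference and an estimated band."""
    return np.sqrt(np.mean((ref - est) ** 2))

def sam(ref, est, eps=1e-12):
    """Mean spectral angle mapper in radians; ref and est have shape (bands, H, W)."""
    dot = np.sum(ref * est, axis=0)
    norms = np.linalg.norm(ref, axis=0) * np.linalg.norm(est, axis=0)
    return np.mean(np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0)))

def ergas(ref, est, ratio=3.0):
    """Relative global-dimensional synthesis error.

    `ratio` is the low-to-high resolution pixel size ratio (30 m / 10 m = 3 here).
    """
    band_terms = [(rmse(ref[b], est[b]) / (np.mean(ref[b]) + 1e-12)) ** 2
                  for b in range(ref.shape[0])]
    return (100.0 / ratio) * np.sqrt(np.mean(band_terms))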

Volume 235
Pages 111425
DOI 10.1016/j.rse.2019.111425
Language English
Journal Remote Sensing of Environment
