
Publications


Featured research published by Jaewan Choi.


IEEE Transactions on Geoscience and Remote Sensing | 2011

A New Adaptive Component-Substitution-Based Satellite Image Fusion by Using Partial Replacement

Jaewan Choi; Kiyun Yu; Yong-Il Kim

Preservation of spectral information and enhancement of spatial resolution are important issues in remote sensing satellite image fusion. Various algorithms have been proposed in previous research, and although they have been successful, there is still room for improvement in spatial and spectral quality; in addition, a method that can be used with various types of sensors is needed. In this paper, a new adaptive fusion method based on component substitution is proposed to merge a high-spatial-resolution panchromatic (PAN) image with a multispectral image. The method generates high- and low-resolution synthetic component images by partial replacement and uses statistical ratio-based high-frequency injection. Various remote sensing satellite images, such as IKONOS-2, QuickBird, LANDSAT ETM+, and SPOT-5, were employed in the evaluation. The experiments showed that this approach resolves spectral distortion problems and successfully conserves the spatial information of the PAN image; the fused images obtained from the proposed method therefore have higher fusion quality than those from several other methods. In addition, the proposed method worked efficiently with the different sensors considered in the evaluation.
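
As a rough illustration of the component-substitution idea described above, the following numpy sketch injects the PAN-minus-intensity detail into each band with a statistical-ratio gain. It is not the paper's adaptive partial-replacement scheme; the mean-based intensity, the gain definition, and the function name are assumptions made for illustration only.

import numpy as np

def cs_pansharpen(ms, pan):
    """Generic component-substitution pansharpening sketch.

    ms  : (H, W, B) multispectral image upsampled to the PAN grid
    pan : (H, W)    panchromatic image
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)

    # Intensity component taken as the mean of the MS bands (a common, simple choice).
    intensity = ms.mean(axis=2)

    # High-frequency detail to inject: difference between PAN and the intensity.
    detail = pan - intensity

    # Per-band injection gain from a statistical ratio (band std / intensity std).
    fused = np.empty_like(ms)
    for b in range(ms.shape[2]):
        gain = ms[..., b].std() / (intensity.std() + 1e-12)
        fused[..., b] = ms[..., b] + gain * detail
    return fused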


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2013

An Area-Based Image Fusion Scheme for the Integration of SAR and Optical Satellite Imagery

Young-Gi Byun; Jaewan Choi; Youkyung Han

The task of enhancing the perception of a scene by combining information captured by different image sensors is usually known as multisensor image fusion. This paper presents an area-based image fusion algorithm to merge synthetic aperture radar (SAR) and optical images. The two images are first co-registered using the proposed registration method. The SAR texture image is then segmented into active and inactive areas for selective injection of the SAR image into the panchromatic (PAN) image. An integrated image is generated by the novel area-based fusion scheme, which applies different fusion rules to each segmented area. Finally, this image is fused with a multispectral (MS) image through the hybrid pansharpening method proposed in previous research. Experimental results demonstrate that the proposed method performs better than other fusion algorithms and has the potential to be applied to the multisensor fusion of SAR and optical images.
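
A minimal sketch of the selective-injection idea is given below, assuming a precomputed boolean mask of "active" (texture-rich) SAR areas. The blending weight, the statistics matching, and the function name are illustrative assumptions, not the paper's actual area-based fusion rules.

import numpy as np

def area_based_fusion(pan, sar, active_mask, alpha=0.5):
    """Selective, area-based SAR injection sketch.

    pan         : (H, W) co-registered panchromatic image
    sar         : (H, W) co-registered SAR intensity image
    active_mask : (H, W) boolean mask of 'active' (texture-rich) SAR areas
    alpha       : injection weight used only inside active areas (hand-set here)
    """
    pan = pan.astype(np.float64)
    sar = sar.astype(np.float64)

    # Match SAR statistics to PAN so the injected detail is radiometrically comparable.
    sar_norm = (sar - sar.mean()) / (sar.std() + 1e-12) * pan.std() + pan.mean()

    # Different rule per segmented area: blend inside active areas, keep PAN elsewhere.
    return np.where(active_mask, (1 - alpha) * pan + alpha * sar_norm, pan)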


IEEE Transactions on Geoscience and Remote Sensing | 2014

Parameter Optimization for the Extraction of Matching Points Between High-Resolution Multisensor Images in Urban Areas

Youkyung Han; Jaewan Choi; Young-Gi Byun; Yong-Il Kim

The objective of this paper is to extract a suitable number of evenly distributed matched points, given the characteristics of the site and the sensors involved. The intent is to increase the accuracy of automatic image-to-image registration for high-resolution multisensor data. The initial set of matching points is extracted using a scale-invariant feature transform (SIFT)-based method, which is further used to evaluate the initial geometric relationship between the features of the reference and sensed images. The precise matching points are extracted considering location differences and local properties of features. The values of the parameters used in the precise matching are optimized using an objective function that considers both the distribution of the matching points and the reliability of the transformation model. In case studies, the proposed algorithm extracts an appropriate number of well-distributed matching points and achieves a higher correct-match rate than the SIFT method. The registration results for all sensors are acceptably accurate, with a root-mean-square error of less than 1.5 m.
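
The sketch below shows a standard SIFT-based initial matching with a RANSAC-refined transformation using OpenCV. The paper's objective-function-based parameter optimization is not reproduced; the ratio-test threshold and the RANSAC reprojection threshold are ordinary hand-set values.

import cv2
import numpy as np

def sift_matches(ref_img, sen_img, ratio=0.75, ransac_thresh=3.0):
    """SIFT keypoint matching followed by robust model estimation.

    ref_img, sen_img : grayscale reference and sensed images (uint8 arrays)
    Returns the estimated homography and the inlier matching points.
    """
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref_img, None)
    k2, d2 = sift.detectAndCompute(sen_img, None)

    # Lowe's ratio test keeps only distinctive matches.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < ratio * n.distance]

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC-based estimation; the inlier mask gives the refined matching points.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    keep = inliers.ravel() == 1
    return H, src[keep], dst[keep]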


IEEE Geoscience and Remote Sensing Letters | 2013

Hybrid Pansharpening Algorithm for High Spatial Resolution Satellite Imagery to Improve Spatial Quality

Jaewan Choi; Junho Yeom; Anjin Chang; Young-Gi Byun; Yong-Il Kim

Most pansharpened images from existing algorithms exhibit a tradeoff between spectral preservation and spatial enhancement. In this letter, we develop a hybrid pansharpening algorithm based on primary and secondary high-frequency information injection to efficiently improve the spatial quality of the pansharpened image. The injected high-frequency information consists of two components: the difference between the panchromatic and intensity images, and a Laplacian-filtered image of that high-frequency information. The extracted high frequencies are injected into the multispectral image using a locally adaptive fusion parameter and postprocessing of the fusion parameter. In experiments using various satellite images, our results show better spatial quality than those of other fusion algorithms while maintaining as much spectral information as possible.
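
As a rough illustration of the primary and secondary detail idea, the following sketch builds the PAN-minus-intensity difference and its Laplacian-filtered version, then injects their sum with a simple global covariance-based gain. The letter's locally adaptive fusion parameter and its postprocessing are not reproduced; beta and the gain definition are assumptions.

import numpy as np
from scipy.ndimage import laplace

def hybrid_pansharpen(ms, pan, beta=0.25):
    """Two-stage high-frequency injection sketch.

    ms   : (H, W, B) multispectral image upsampled to the PAN grid
    pan  : (H, W)    panchromatic image
    beta : weight of the secondary (Laplacian) detail, hand-set here
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    intensity = ms.mean(axis=2)

    # Primary detail: PAN minus intensity. Secondary detail: Laplacian of that detail.
    primary = pan - intensity
    secondary = -laplace(primary)          # sign chosen so edges are enhanced
    detail = primary + beta * secondary

    fused = np.empty_like(ms)
    for b in range(ms.shape[2]):
        # Global covariance-to-variance ratio as a per-band injection gain.
        gain = np.cov(ms[..., b].ravel(), intensity.ravel())[0, 1] / (intensity.var() + 1e-12)
        fused[..., b] = ms[..., b] + gain * detail
    return fused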


IEEE Geoscience and Remote Sensing Letters | 2015

Object-Based Change Detection of Very High Resolution Satellite Imagery Using the Cross-Sharpening of Multitemporal Data

Biao Wang; Seok-Keun Choi; Young-Gi Byun; Soungki Lee; Jaewan Choi

In this letter, we present a method for unsupervised change detection based on the cross-sharpening of multitemporal images and image segmentation. Our method effectively reduces the change detection errors caused by relief or spatial displacement between multitemporal images with different acquisition angles. A total of four cross-sharpened images, including two general pansharpened images, were generated. Then, two pairs of cross-sharpened images were analyzed using change detection indexes. The effectiveness of the proposed method compared with other unsupervised change detection methods is demonstrated through experimentation.
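
A minimal pixel-wise stand-in for analyzing one pair of cross-sharpened images is sketched below, using a Euclidean spectral difference and an Otsu threshold. The letter's image segmentation (object-based analysis) and its specific change detection indexes are not reproduced here.

import numpy as np
from skimage.filters import threshold_otsu

def change_map(fused_t1, fused_t2):
    """Per-pixel spectral change magnitude and an Otsu-thresholded change mask.

    fused_t1, fused_t2 : (H, W, B) co-registered cross-sharpened images of one pair
    """
    diff = fused_t1.astype(np.float64) - fused_t2.astype(np.float64)
    magnitude = np.sqrt((diff ** 2).sum(axis=2))     # Euclidean spectral distance
    return magnitude, magnitude > threshold_otsu(magnitude)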


International Journal of Remote Sensing | 2012

A shape–size index extraction for classification of high resolution multispectral satellite images

Youkyung Han; Hye-Jin Kim; Jaewan Choi; Yong-Il Kim

We propose a new spatial feature extraction method for the supervised classification of satellite images with high spatial resolution. The proposed shape–size index (SSI) feature combines homogeneous areas using the spectral similarity between a central pixel and its neighbouring pixels. A spatial index considers the shape and size of the homogeneous area, and suitable spatial features are parametrically selected. The generated SSI feature is integrated with the original high resolution multispectral bands to improve the overall classification accuracy. A support vector machine (SVM) is employed as the classifier. To evaluate the proposed feature extraction method, KOMPSAT-2 (Korea Multipurpose Satellite 2), QuickBird-2, and IKONOS-2 high resolution satellite images are used. The experiments show that the SSI algorithm yields a notable increase in classification accuracy over the grey level co-occurrence matrix (GLCM) and pixel shape index (PSI) algorithms, as well as an increase compared with using the multispectral bands alone.
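
The sketch below shows only the final step of such a workflow: stacking a precomputed spatial feature with the spectral bands and classifying with an SVM in scikit-learn. The SSI computation itself is not implemented, and the SVM hyperparameters are illustrative choices rather than the paper's settings.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def classify_with_spatial_feature(ms_bands, spatial_feature, labels, train_mask):
    """Stack spectral bands with a spatial feature and classify with an SVM.

    ms_bands        : (H, W, B) multispectral bands
    spatial_feature : (H, W)    e.g. a shape-size-like index computed elsewhere
    labels          : (H, W)    integer class labels, used only where train_mask is True
    train_mask      : (H, W)    boolean mask of training pixels
    """
    features = np.dstack([ms_bands, spatial_feature]).reshape(-1, ms_bands.shape[2] + 1)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(features[train_mask.ravel()], labels.ravel()[train_mask.ravel()])

    # Predict every pixel and reshape back to the image grid.
    return clf.predict(features).reshape(labels.shape)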


Remote Sensing | 2017

Sharpening the VNIR and SWIR Bands of Sentinel-2A Imagery through Modified Selected and Synthesized Band Schemes

Honglyun Park; Jaewan Choi; Nyunghee Park; Seok-Keun Choi

In this work, the 20 m and 60 m bands of a Sentinel-2A image are sharpened to a spatial resolution of 10 m, yielding visible and near-infrared (VNIR) and shortwave infrared (SWIR) spectral bands at 10 m. In particular, we propose a two-step sharpening algorithm for Sentinel-2A imagery based on modified selected and synthesized band schemes using layer-stacked bands. These modified schemes extend the existing band schemes for sharpening the 20 m and 60 m Sentinel-2A bands, improving the pan-sharpening accuracy by changing the combinations of bands used for multiple linear regression analysis through band-layer stacking. The proposed schemes are applied to pan-sharpening algorithms based on component substitution (CS) and multiresolution analysis (MRA), and the results are compared with those obtained using the existing band schemes. The experimental results show that the proposed algorithm improves both the spatial and spectral properties relative to existing methods. However, the two modified band schemes show differing tendencies: with the modified selected band scheme, the CS-based algorithm produces better sharpening results than the MRA-based algorithm, whereas with the modified synthesized band scheme, the MRA-based algorithm produces better results than the CS-based algorithm.
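
One simple way to read the "synthesized band" idea is a multiple linear regression fitted at the coarse resolution and then applied at 10 m, as sketched below. The paper's modified selected and synthesized schemes and the subsequent CS/MRA sharpening steps are not reproduced; the argument names and the regression setup are assumptions.

import numpy as np
from sklearn.linear_model import LinearRegression

def synthesize_band(hires_stack_degraded, target_coarse, hires_stack):
    """Regression-based synthetic band sketch.

    hires_stack_degraded : (h, w, B) 10 m bands degraded to the coarse grid
    target_coarse        : (h, w)    the 20 m or 60 m band to be sharpened
    hires_stack          : (H, W, B) the same 10 m bands at native resolution
    """
    B = hires_stack.shape[2]

    # Fit the regression at the coarse scale where the target band is observed.
    model = LinearRegression().fit(
        hires_stack_degraded.reshape(-1, B), target_coarse.ravel()
    )

    # Apply the coefficients at 10 m to obtain a synthetic high-resolution band.
    H, W, _ = hires_stack.shape
    return model.predict(hires_stack.reshape(-1, B)).reshape(H, W)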


Canadian Journal of Remote Sensing | 2012

Context-adaptive pansharpening algorithm for high-resolution satellite imagery

Jaewan Choi; Dongyeob Han; Yong-Il Kim

Pansharpening algorithms are important methods for overcoming the technical limitations of satellite sensors. However, most approaches to pansharpening tend either to distort the spectral characteristics of the original multispectral image or to reduce the visual sharpness of the panchromatic image. In this paper, we propose a pansharpening algorithm that uses both a global and a local context-adaptive parameter based on component substitution. The purpose of this algorithm is to produce fused images with spectral information similar to that of the multispectral data while preserving spatial sharpness better than other algorithms such as the Gram–Schmidt algorithm, the Additive Wavelet Luminance Proportional algorithm, and algorithms developed in our previous work. The proposed parameter is calculated from the spatial and spectral characteristics of an image. It is derived from the spatial correlation between each multispectral band and an adjusted intensity image based on Laplacian filtering and statistical ratios, and it is then adaptively adjusted using image entropy to optimize the fusion quality in terms of the sensor and image characteristics. An experiment using IKONOS-2, QuickBird-2, and GeoEye-1 imagery demonstrated that the global context-adaptive parameter model is effective for spatial enhancement and that the local context-adaptive model is useful for image visualization and spectral information preservation.
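
The following sketch computes a window-based covariance-to-variance ratio as a locally adaptive injection weight, which is only a simplified stand-in for the context-adaptive parameter described above; the entropy-based adjustment and the global parameter are not reproduced, and the window size is a hand-set value.

import numpy as np
from scipy.ndimage import uniform_filter

def local_injection_gain(band, intensity, win=31):
    """Local, window-based injection gain sketch.

    band, intensity : (H, W) one MS band and an (adjusted) intensity image
    win             : local window size in pixels
    """
    b = band.astype(np.float64)
    i = intensity.astype(np.float64)

    # Local means, covariance, and variance over a sliding window.
    mb, mi = uniform_filter(b, win), uniform_filter(i, win)
    cov = uniform_filter(b * i, win) - mb * mi
    var_i = uniform_filter(i * i, win) - mi * mi

    # Covariance-to-variance ratio as a locally adaptive injection weight.
    return cov / (var_i + 1e-12)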


Remote Sensing | 2017

Image Fusion-Based Land Cover Change Detection Using Multi-Temporal High-Resolution Satellite Images

Biao Wang; Jaewan Choi; Seok-Keun Choi; Soungki Lee; Penghai Wu; Yan Gao

Change detection is usually treated as the problem of explicitly detecting land cover transitions in satellite images obtained at different times, and it supports emergency response and government management. This study presents an unsupervised change detection method based on the image fusion of multi-temporal images. The main objective is to improve the accuracy of unsupervised change detection from high-resolution multi-temporal images. Our method effectively reduces change detection errors because spatial displacement and spectral differences between the multi-temporal images are taken into account. To this end, a total of four cross-fused images are generated from the multi-temporal images, and the iteratively reweighted multivariate alteration detection (IR-MAD) method, used here as a measure of the spectral distortion associated with change information, is applied to the fused images. In the experiments, land cover change maps were extracted using multi-temporal IKONOS-2, WorldView-3, and GF-1 satellite images. The effectiveness of the proposed method compared with other unsupervised change detection methods is demonstrated through experimentation. The proposed method achieved an overall accuracy of 80.51% and 97.87% for cases 1 and 2, respectively, and it performed better at differentiating the water area from the vegetation area than the existing change detection methods. Although the water area beneath moderate and sparse vegetation canopy was captured, vegetation cover and paved regions of the water body were the main sources of omission error, and commission errors occurred primarily in pixels of mixed land use and along the water body edge. Nevertheless, the proposed method, in conjunction with high-resolution satellite imagery, offers a robust and flexible approach to land cover change mapping that requires no ancillary data for rapid implementation.
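
For orientation, the sketch below computes MAD variates with a single canonical correlation analysis pass (scikit-learn's CCA) and a chi-square-like change magnitude. The iterative reweighting that defines IR-MAD is deliberately omitted, so this is only an unweighted approximation of the method named above.

import numpy as np
from sklearn.cross_decomposition import CCA

def mad_variates(img_t1, img_t2, n_components=3):
    """Single-pass MAD sketch (no iterative reweighting).

    img_t1, img_t2 : (H, W, B) co-registered (here, cross-fused) images
    Returns per-pixel MAD variates and a chi-square-like change magnitude.
    """
    H, W, B = img_t1.shape
    X = img_t1.reshape(-1, B).astype(np.float64)
    Y = img_t2.reshape(-1, B).astype(np.float64)

    # CCA pairs up maximally correlated projections of the two dates.
    cca = CCA(n_components=n_components)
    U, V = cca.fit_transform(X, Y)

    # MAD variates are the differences of the canonical variates.
    mad = U - V
    z = (mad / (mad.std(axis=0) + 1e-12)) ** 2
    return mad.reshape(H, W, n_components), z.sum(axis=1).reshape(H, W)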


Canadian Journal of Remote Sensing | 2008

Stereo-mate generation of high-resolution satellite imagery using a parallel projection model

Howook Chang; Kiyun Yu; Hyunseung Joo; Yong-Il Kim; Hye-Jin Kim; Jaewan Choi; Dong Yeob Han; Yang Dam Eo

Synthesis methods to create a stereo-mate of satellite imagery from an orthophoto have been developed in many previous studies. When these methods are applied in urban areas with many adjacent tall buildings, stereo viewing is inhibited by occlusion in the orthophoto and its stereo-mate. In high-resolution satellite imagery, the in-track view angle is usually far from vertical; consequently, the occluded regions near tall structures cover a large area, which severely affects stereo viewing. This study proposes a different approach to creating stereo-mates for high-resolution satellite imagery by projecting the digital surface model (DSM), draped with the original single image, onto a fictitious satellite sensor model. The main benefit of this method is enhanced stereo viewing, achieved by arranging the fictitious sensor model to reduce the occluded area. The physical sensor model of the original image is first derived using a parallel projection model, and the fictitious sensor model of the stereo-mate is then determined from the physical sensor model.
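
The core of the approach can be pictured as a height-dependent shift of each orthophoto pixel along a fictitious parallel viewing direction. The sketch below applies such a shift with nearest-neighbour resampling, ignoring occlusion handling and gap filling; all parameter names are assumptions, not the paper's sensor model.

import numpy as np

def parallel_project(dsm, ortho, look_dx, look_dy, gsd):
    """Parallel-projection stereo-mate sketch.

    dsm              : (H, W) heights in metres
    ortho            : (H, W) single-band orthophoto draped on the DSM
    look_dx, look_dy : horizontal displacement per metre of height
                       (tangents of the fictitious view angles)
    gsd              : ground sample distance in metres per pixel
    """
    H, W = dsm.shape
    rows, cols = np.indices((H, W))

    # Height-dependent shift in pixel units along the fictitious viewing direction.
    new_cols = np.clip(np.round(cols + dsm * look_dx / gsd).astype(int), 0, W - 1)
    new_rows = np.clip(np.round(rows + dsm * look_dy / gsd).astype(int), 0, H - 1)

    # Nearest-neighbour splat; occluded/unfilled pixels remain zero in this sketch.
    mate = np.zeros_like(ortho)
    mate[new_rows, new_cols] = ortho[rows, cols]
    return mate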

Collaboration


Dive into Jaewan Choi's collaborations.

Top Co-Authors

Yong-Il Kim, Seoul National University
Youkyung Han, Seoul National University
Kiyun Yu, Seoul National University
Young-Gi Byun, Seoul National University
Anjin Chang, Seoul National University
Dongyeob Han, Chonnam National University
Hye-Jin Kim, Seoul National University
Biao Wang, Chungbuk National University
Seok-Keun Choi, Chungbuk National University
Yong-Min Kim, Seoul National University