Youkyung Han
Seoul National University
Publication
Featured research published by Youkyung Han.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2013
Young-Gi Byun; Jaewan Choi; Youkyung Han
The task of enhancing the perception of a scene by combining information captured from different image sensors is usually known as multisensor image fusion. This paper presents an area-based image fusion algorithm to merge SAR (Synthetic Aperture Radar) and optical images. The co-registration of the two images is first conducted using the proposed registration method prior to image fusion. Segmentation into active and inactive areas is then performed on the SAR texture image for selective injection of the SAR image into the panchromatic (PAN) image. An integrated image based on these two images is generated by the novel area-based fusion scheme, which imposes different fusion rules for each segmented area. Finally, this image is fused into a multispectral (MS) image through the hybrid pansharpening method proposed in previous research. Experimental results demonstrate that the proposed method shows better performance than other fusion algorithms and has the potential to be applied to the multisensor fusion of SAR and optical images.
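The selective-injection idea in this abstract can be sketched as follows. This is a minimal illustration only: the threshold, the gain, and the crude "detail" term are assumptions for demonstration, not the paper's actual area-based fusion rules.

```python
import numpy as np

def selective_injection_fusion(pan, sar, texture, t=0.5, gain=0.4):
    """Area-based fusion sketch: SAR detail is injected into the PAN
    image only inside the 'active' areas marked by the SAR texture
    image; inactive areas keep the original PAN values. The threshold
    t, the gain, and the detail term are illustrative."""
    active = texture > t                    # segment active vs. inactive areas
    detail = sar - sar.mean()               # crude high-frequency SAR component
    fused = pan.astype(float).copy()
    fused[active] += gain * detail[active]  # selective injection per area
    return fused

pan = np.full((4, 4), 100.0)
sar = np.arange(16, dtype=float).reshape(4, 4)
texture = np.zeros((4, 4))
texture[2:, :] = 1.0                        # bottom half is 'active'
fused = selective_injection_fusion(pan, sar, texture)
```

Inactive areas are left untouched, which is the essence of imposing different fusion rules per segmented area.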
Photogrammetric Engineering and Remote Sensing | 2012
Youkyung Han; Young Gi Byun; Jae Wan Choi; Dong Yeob Han; Yong-Il Kim
We propose an automatic image-to-image registration of high-resolution satellite images using local properties and geometric locations of control points to improve the registration accuracy. First, coefficients of global affine transformation between images are extracted using a scale-invariant feature transform (SIFT)-based method, and features of the sensed image are transformed to the reference coordinate system using these coefficients. Then, a spatial distance and orientation difference between features of the reference and sensed images are additionally used to extract a large number of evenly distributed control points. The specified spatial distance is calculated between features of the sensed image that have been transformed to the reference coordinates and features of the reference image. Finally, the spatial distance integrated with Euclidean distances of invariant vectors is employed for local matching. The average orientation differences between control points of the two images are used for outlier elimination. A mapping function model consisting of an affine transformation and piecewise linear functions is applied to the control points for automatic registration of high-resolution images. The proposed method can extract a larger number of spatially well-distributed control points than SIFT-based methods. The registration accuracy for all sites calculated from manually selected checkpoints has acceptable geometric accuracy at the pixel level.
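The control-point filtering step described above — mapping sensed features with a global affine estimate, then keeping pairs by spatial distance and by deviation from the mean orientation difference — can be sketched like this. The thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def filter_control_points(ref_pts, sen_pts, ref_ori, sen_ori, affine,
                          max_dist=3.0, max_ori_dev=10.0):
    """Sketch of the spatial-distance / orientation-difference check:
    sensed features are mapped to the reference frame with a 2x3 affine
    estimate, and a candidate pair is kept only if it lands near its
    reference feature and its orientation difference stays close to
    the mean difference (outlier elimination). Thresholds are
    illustrative."""
    sen_h = np.hstack([sen_pts, np.ones((len(sen_pts), 1))])
    mapped = sen_h @ affine.T                        # into reference coordinates
    dist = np.linalg.norm(mapped - ref_pts, axis=1)  # spatial distance
    ori_diff = sen_ori - ref_ori
    dev = np.abs(ori_diff - ori_diff.mean())         # deviation from mean rotation
    return (dist < max_dist) & (dev < max_ori_dev)

affine = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # identity for the demo
ref_pts = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 20.0]])
sen_pts = np.array([[0.5, 0.0], [10.0, 10.0], [40.0, 0.0]])
ref_ori = np.array([0.0, 0.0, 0.0])
sen_ori = np.array([1.0, 2.0, 1.5])
keep = filter_control_points(ref_pts, sen_pts, ref_ori, sen_ori, affine)
```

The third pair is rejected purely on spatial distance, mimicking how the method prunes SIFT matches that the descriptor distance alone would accept.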
IEEE Transactions on Geoscience and Remote Sensing | 2014
Youkyung Han; Jaewan Choi; Young-Gi Byun; Yong-Il Kim
The objective of this paper is to extract a suitable number of evenly distributed matched points, given the characteristics of the site and the sensors involved. The intent is to increase the accuracy of automatic image-to-image registration for high-resolution multisensor data. The initial set of matching points is extracted using a scale-invariant feature transform (SIFT)-based method, which is further used to evaluate the initial geometric relationship between the features of the reference and sensed images. The precise matching points are extracted considering location differences and local properties of features. The values of the parameters used in the precise matching are optimized using an objective function that considers both the distribution of the matching points and the reliability of the transformation model. In case studies, the proposed algorithm extracts an appropriate number of well-distributed matching points and achieves a higher correct-match rate than the SIFT method. The registration results for all sensors are acceptably accurate, with a root-mean-square error of less than 1.5 m.
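The parameter optimization described here — trading off point distribution against model reliability — can be caricatured with a tiny grid search. The spread and residual terms and the weight `lam` are illustrative stand-ins, not the paper's objective function.

```python
import numpy as np

def choose_threshold(dists, pts, thresholds, lam=5.0):
    """Toy stand-in for the parameter optimization: for each candidate
    matching threshold, the retained points are scored by how widely
    they spread over the scene (distribution term) minus a weighted
    mean residual (reliability term). Scoring and weight lam are
    illustrative assumptions."""
    best_t, best_score = None, -np.inf
    for t in thresholds:
        keep = dists < t
        if keep.sum() < 3:                      # too few points for a model
            continue
        spread = pts[keep].std(axis=0).mean()   # distribution of kept points
        residual = dists[keep].mean()           # reliability proxy
        score = spread - lam * residual
        if score > best_score:
            best_t, best_score = t, score
    return best_t

dists = np.array([0.1, 0.2, 0.3, 50.0])         # match residuals
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [100.0, 100.0]])
best = choose_threshold(dists, pts, [1.0, 100.0])
```

The loose threshold adds a far-away point (better spread) but drags in a huge residual, so the tight threshold wins — the same tension the objective function balances.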
Remote Sensing | 2015
Young-Gi Byun; Youkyung Han; Tae-Byeong Chae
Change detection based on satellite images acquired over the same area at different dates is of widespread interest, owing to the increasing number of flood-related disasters. The images help to generate products that support emergency response and flood management at a global scale. In this paper, a novel unsupervised change detection approach based on image fusion is introduced. The approach aims to extract the reliable flood extent from very high-resolution (VHR) bi-temporal images. The method takes advantage of the spectral distortion that occurs during the image fusion process to detect the areas changed by flooding. To this end, a change candidate image is extracted from the fused image generated from the bi-temporal images by considering local spectral distortion. This can be done by employing the universal image quality index (UIQI), a measure for local evaluation of spectral distortion. The decision threshold for the determination of changed pixels is set by applying a probability mixture model to the change candidate image based on the expectation-maximization (EM) algorithm. We used bi-temporal KOMPSAT-2 satellite images to detect the flooded area in the city of N'Djamena in Chad. The performance of the proposed method was visually and quantitatively compared with existing change detection methods. The results showed that the proposed method achieved an overall accuracy (OA = 75.04) close to that of the support vector machine (SVM)-based supervised change detection method. Moreover, the proposed method showed better performance in differentiating the flooded area from the permanent water body compared to the existing change detection methods.
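The UIQI used in this approach is the standard Wang–Bovik universal image quality index; a straightforward (global, single-window) implementation looks like this. In the paper it is evaluated locally over sliding windows; the global form below is a simplification.

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index: combines correlation, luminance
    and contrast distortion in one score. Q = 1 for identical signals;
    spectral distortion introduced by fusion lowers Q."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

x = np.arange(1.0, 10.0)
q_same = uiqi(x, x)          # identical signals
q_scaled = uiqi(x, 2 * x)    # perfectly correlated but distorted
```

Even a perfectly correlated but rescaled signal scores below 1, which is why low local UIQI values flag the spectrally distorted (i.e., changed) pixels.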
Remote Sensing Letters | 2014
Youkyung Han; Byeonghee Kim; Yong-Il Kim; Won Hee Lee
In this article, we propose an automatic cloud detection process for images with high spatial resolution. First, thick cloud regions are detected by applying a simple threshold method to the target image (an image that includes a cloud-covered region). Next, a reference image (another image that was acquired at a different time and includes the region with relatively little or no cloud-cover) is transformed to the coordinates of the target image by a modified scale-invariant feature transform (SIFT) method. The difference between the target image and transformed reference image is used to extract the peripheral cloud regions. The thick and peripheral cloud regions are then merged based on their relative locations and areas to detect the final cloud regions. Multi-temporal Korea Multi-Purpose Satellite-2 (KOMPSAT-2) images are used to construct study sites to evaluate the proposed method for a range of cloud-cover cases. With the proposed method, a large number of correctly matched points were extracted for the generation of the transformation model, and cloud-covered regions were effectively detected for all sites without manual intervention.
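The two-stage detection idea can be sketched as below, assuming the reference image has already been transformed into the target's coordinates. The brightness and difference thresholds are illustrative assumptions, and the area/location-based merging step is reduced here to a simple union.

```python
import numpy as np

def detect_clouds(target, reference, thick_t=200.0, diff_t=60.0):
    """Sketch of the two-stage idea: thick clouds come from a simple
    brightness threshold on the target image; peripheral cloud comes
    from the target-minus-reference difference (reference assumed
    already co-registered). Thresholds are illustrative; the paper's
    merge step also weighs relative locations and areas."""
    thick = target > thick_t
    peripheral = (target.astype(float) - reference) > diff_t
    return thick | peripheral               # merged cloud mask

target = np.array([[250.0, 120.0], [180.0, 90.0]])
reference = np.array([[80.0, 100.0], [100.0, 95.0]])
mask = detect_clouds(target, reference)
```

The pixel at (1, 0) is missed by the brightness threshold alone but caught by the temporal difference — the role the peripheral-cloud stage plays around thick cloud cores.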
Remote Sensing | 2016
A. Habib; Youkyung Han; Weifeng Xiong; Fangning He; Zhou Zhang; Melba M. Crawford
Low-cost Unmanned Airborne Vehicles (UAVs) equipped with consumer-grade imaging systems have emerged as a potential remote sensing platform that could satisfy the needs of a wide range of civilian applications. Among these applications, UAV-based agricultural mapping and monitoring have attracted significant attention from both the research and professional communities. The interest in UAV-based remote sensing for agricultural management is motivated by the need to maximize crop yield. Remote sensing-based crop yield prediction and estimation are primarily based on imaging systems with different spectral coverage and resolution (e.g., RGB and hyperspectral imaging systems). Due to the data volume, RGB imaging is based on frame cameras, while hyperspectral sensors are primarily push-broom scanners. To cope with the limited endurance and payload constraints of low-cost UAVs, the agricultural research and professional communities have to rely on consumer-grade and light-weight sensors. However, the geometric fidelity of derived information from push-broom hyperspectral scanners is quite sensitive to the available position and orientation established through a direct geo-referencing unit onboard the imaging platform (i.e., an integrated Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS)). This paper presents an automated framework for the integration of frame RGB images, push-broom hyperspectral scanner data and consumer-grade GNSS/INS navigation data for accurate geometric rectification of the hyperspectral scenes. The approach relies on utilizing the navigation data, together with a modified Speeded-Up Robust Feature (SURF) detector and descriptor, for automating the identification of conjugate features in the RGB and hyperspectral imagery.
The SURF modification takes into consideration the available direct geo-referencing information to improve the reliability of the matching procedure in the presence of repetitive texture within a mechanized agricultural field. Identified features are then used to improve the geometric fidelity of the previously ortho-rectified hyperspectral data. Experimental results from two real datasets show that the geometric rectification of the hyperspectral data was improved by almost one order of magnitude.
International Journal of Remote Sensing | 2012
Youkyung Han; Hye-Jin Kim; Jaewan Choi; Yong-Il Kim
We propose a new spatial feature extraction method for supervised classification of satellite images with high spatial resolution. The proposed shape–size index (SSI) feature combines homogeneous areas using spectral similarity between one central pixel and its neighbouring pixels. A spatial index considers the shape and size of the homogeneous area, and suitable spatial features are parametrically selected. The generated SSI feature is integrated with the original high resolution multispectral bands to improve the overall classification accuracy. A support vector machine (SVM) is employed as a classifier. In order to evaluate the proposed feature extraction method, KOMPSAT-2 (Korea Multipurpose Satellite 2), QuickBird-2 and IKONOS-2 high resolution satellite images are used. The experiments show that the SSI algorithm leads to a notable increase in classification accuracy over the grey level co-occurrence matrix (GLCM) and pixel shape index (PSI) algorithms, and an increase when compared with using multispectral bands only.
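The homogeneous-area idea underlying the SSI feature can be illustrated with a minimal region-growing routine. This sketch only measures the region's size from a seed pixel; the actual SSI also parameterizes the region's shape, and the similarity tolerance here is an illustrative assumption.

```python
import numpy as np
from collections import deque

def homogeneous_region_size(img, seed, tol=10.0):
    """Illustrative region growing behind a shape-size feature:
    starting from a central pixel, 4-connected neighbours are merged
    while they stay spectrally similar to the seed; the region size is
    one simple spatial feature (the SSI itself also encodes shape)."""
    h, w = img.shape
    seen = np.zeros((h, w), bool)
    seen[seed] = True
    q = deque([seed])
    size = 0
    ref = float(img[seed])
    while q:
        r, c = q.popleft()
        size += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < h and 0 <= cc < w and not seen[rr, cc]
                    and abs(float(img[rr, cc]) - ref) < tol):
                seen[rr, cc] = True
                q.append((rr, cc))
    return size

img = np.array([[10, 10, 50],
                [10, 10, 50],
                [50, 50, 50]])
```

Computed per pixel, such a value (together with shape descriptors) forms an extra band that is stacked with the multispectral bands before SVM classification.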
IEEE Transactions on Geoscience and Remote Sensing | 2015
Youkyung Han; Francesca Bovolo; Lorenzo Bruzzone
Even after applying effective coregistration methods, multitemporal images are likely to show a residual misalignment, which is referred to as registration noise (RN). This is because coregistration methods from the literature cannot fully handle the local dissimilarities induced by differences in the acquisition conditions (e.g., the stability of the acquisition platform, the off-nadir angle of the sensor, the structure of the considered scene, etc.). This paper addresses the problem of reducing such a residual misalignment by proposing a fine automatic coregistration approach for very high resolution (VHR) multispectral images. The proposed method takes advantage of the properties of the residual misalignment itself. To this end, RN is first extracted in the change vector analysis (CVA) polar domain according to the behaviors of the specific multitemporal images considered. Then, a local analysis of RN pixels (i.e., those showing residual misalignment) is conducted for automatically extracting control points (CPs) and matching them according to their estimated displacement. Matched CPs are used for generating a deformation map by interpolation. Finally, one VHR image is warped to the coordinates of the other through a deformation map. Experiments carried out on simulated and real multitemporal VHR images confirm the effectiveness of the proposed approach.
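The CVA polar domain in which RN is extracted is simply the magnitude/direction representation of the per-pixel spectral change vector. A two-band version can be computed as follows; the RN extraction itself (isolating low-magnitude, boundary-concentrated pixels in this domain) is not reproduced here.

```python
import numpy as np

def cva_polar(img1, img2):
    """Change vector analysis in polar form for two-band images:
    per-pixel magnitude and direction of the spectral change vector.
    The paper isolates RN pixels from this polar representation; here
    only the domain itself is computed."""
    d = img2.astype(float) - img1.astype(float)   # (H, W, 2) difference
    rho = np.sqrt((d ** 2).sum(axis=-1))          # change magnitude
    theta = np.arctan2(d[..., 1], d[..., 0])      # change direction in [-pi, pi]
    return rho, theta

img1 = np.zeros((1, 1, 2))
img2 = np.array([[[3.0, 4.0]]])
rho, theta = cva_polar(img1, img2)
```

Misaligned but unchanged pixels cluster in characteristic magnitude/direction regions of this domain, which is what lets the method tell residual misregistration apart from genuine change.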
IEEE Transactions on Geoscience and Remote Sensing | 2017
Youkyung Han; Francesca Bovolo; Lorenzo Bruzzone
In this paper, a segmentation-based approach to fine registration of multispectral and multitemporal very high resolution (VHR) images is proposed. The proposed approach aims at estimating and correcting the residual local misalignment [also referred to as registration noise (RN)] that often affects multitemporal VHR images even after standard registration. The method automatically extracts a set of object representative points associated with regions with homogeneous spectral properties (i.e., objects in the scene). Such points turn out to be distributed across the considered scene and account for the high spatial correlation of pixels in VHR images. Then, it estimates the amount and direction of residual local misalignment for each object representative point by exploiting residual local misalignment properties in a multiple displacement analysis framework. To this end, a multiscale differential analysis of the multispectral difference image is employed to model the statistical distribution of pixels affected by residual misalignment (i.e., RN pixels) and to detect them. The RN is used to perform a segmentation-based fine registration based on both temporal and spatial correlation. Accordingly, the method is particularly suitable for images with a large number of border regions, such as VHR images of urban scenes. Experimental results obtained on both simulated and real multitemporal VHR images confirm the effectiveness of the proposed method.
IEEE Geoscience and Remote Sensing Letters | 2016
Youkyung Han; Francesca Bovolo; Lorenzo Bruzzone
Even after coregistration, very high resolution (VHR) multitemporal images acquired by different multispectral sensors (e.g., QuickBird and WorldView) show a residual misregistration due to dissimilarities in acquisition conditions and in sensor properties. Residual misregistration can be considered as a source of noise and is referred to as registration noise (RN). Since RN is likely to have a negative impact on multitemporal information extraction, detecting and reducing it can increase multitemporal image processing accuracy. In this letter, we propose an approach to identify RN between VHR multitemporal and multisensor images. Under the assumption that dominant RN mainly exists along boundaries of objects, we propose to use edge information in high-frequency regions to estimate it. This choice makes RN detection less dependent on radiometric differences and thus more effective in VHR multisensor image processing. In order to validate the effectiveness of the proposed approach, multitemporal multisensor data sets are built including QuickBird and WorldView VHR images. Both qualitative and quantitative assessments demonstrate the effectiveness of the proposed RN identification approach compared to the state-of-the-art one.
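The boundary assumption in this letter can be sketched as: restrict attention to pixels that are edges in both acquisitions (high-frequency regions), then flag those with a large temporal difference as RN candidates. The gradient operator and both thresholds below are illustrative assumptions, not the letter's exact procedure.

```python
import numpy as np

def rn_candidates(img1, img2, edge_t=30.0, diff_t=20.0):
    """Sketch of edge-driven RN detection: residual misregistration is
    assumed to concentrate along object boundaries, so only pixels
    with strong gradients in both images are examined, and among
    those the pixels with a large temporal difference are flagged.
    Thresholds are illustrative."""
    def grad_mag(im):
        gy, gx = np.gradient(im.astype(float))   # row, column derivatives
        return np.hypot(gx, gy)
    edges = (grad_mag(img1) > edge_t) & (grad_mag(img2) > edge_t)
    diff = np.abs(img1.astype(float) - img2.astype(float))
    return edges & (diff > diff_t)

# A vertical step edge shifted by one pixel between the two dates:
img1 = np.tile([0.0, 0.0, 100.0, 100.0], (4, 1))
img2 = np.tile([0.0, 100.0, 100.0, 100.0], (4, 1))
mask = rn_candidates(img1, img2)
```

Only the column where the shifted edges overlap is flagged, while the flat interiors are ignored — consistent with RN living along object boundaries rather than inside homogeneous regions.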