Publication


Featured research published by Dongyeob Han.


IEEE Geoscience and Remote Sensing Letters | 2011

Improved Additive-Wavelet Image Fusion

Yonghyun Kim; Changno Lee; Dongyeob Han; Yong-Il Kim; Youn-Soo Kim

Effective image-fusion methods inject the necessary geometric information and preserve the radiometric information. To preserve the radiometric information, the injected high frequency of a panchromatic (pan) image must follow the frequency of the multispectral (MS) image. In this letter, an improved additive-wavelet (AW) fusion method is presented using the à trous algorithm. The proposed method does not decompose the MS image; thus, it preserves the radiometric information of the MS image and can inject high frequency following the frequency of the MS image using a low-resolution pan image. Experimental results obtained using IKONOS data indicate that the proposed method produces superior-quality images compared with the AW luminance proportional method in a quantitative analysis.
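For orientation, the sketch below shows the standard additive à trous wavelet fusion that this letter builds on (not the improved method itself): wavelet planes of the pan image are computed with the à trous algorithm and added to each upsampled MS band. The kernel, number of decomposition levels, and array shapes are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# B3 cubic-spline kernel commonly used by the a trous algorithm
B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
KERNEL = np.outer(B3, B3)

def atrous_planes(img, levels=2):
    """Return the wavelet (detail) planes of a 2-D image."""
    planes, current = [], img.astype(float)
    for j in range(levels):
        # dilate the 5x5 kernel by inserting 2**j - 1 zeros between taps
        k = np.zeros((4 * 2 ** j + 1, 4 * 2 ** j + 1))
        k[:: 2 ** j, :: 2 ** j] = KERNEL
        smoothed = convolve(current, k, mode="reflect")
        planes.append(current - smoothed)
        current = smoothed
    return planes

def additive_wavelet_fusion(pan, ms_upsampled, levels=2):
    """pan: (H, W) array; ms_upsampled: (bands, H, W) resampled to the pan grid."""
    details = sum(atrous_planes(pan, levels))   # summed high-frequency planes of pan
    return np.stack([band + details for band in ms_upsampled])
```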


Sensors | 2017

Effects of Ambient Temperature and Relative Humidity on Subsurface Defect Detection in Concrete Structures by Active Thermal Imaging

Quang Huy Tran; Dongyeob Han; Choonghyun Kang; Achintya Haldar; Jungwon Huh

Active thermal imaging is an effective nondestructive technique in the structural health monitoring field, especially for concrete structures not exposed directly to the sun. However, the impact of meteorological factors on the testing results is considerable and should be studied in detail. In this study, the impulse thermography technique with halogen lamp heat sources is used to detect defects in concrete structural components that are not exposed directly to sunlight and are not significantly affected by wind, such as interior bridge box girders and buildings. To assess the effect of the environment, ambient temperature and relative humidity were investigated in twelve test cases on a concrete slab in the laboratory, where the influence of wind was minimized. The results showed that the absolute contrast between defective and sound areas becomes more apparent as ambient temperature increases, and that it increases faster for large, shallow delaminations than for small, deep ones. In addition, the absolute contrast of a delamination near the surface may be greater under a highly humid atmosphere. The study indicates that the results obtained from active thermography will be clearer if the inspection is conducted on a day with high ambient temperature and humidity.
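As a small illustration of the quantity analyzed here, a minimal sketch of the absolute contrast between a defective and a sound region of a thermal image sequence might look as follows; the region masks are hypothetical inputs.

```python
import numpy as np

def absolute_contrast(frames, defect_mask, sound_mask):
    """frames: (t, H, W) thermal image sequence; masks: boolean (H, W) regions.
    Returns the time series T_defect(t) - T_sound(t)."""
    t_defect = frames[:, defect_mask].mean(axis=1)   # mean temperature over the defect area
    t_sound = frames[:, sound_mask].mean(axis=1)     # mean temperature over the sound area
    return t_defect - t_sound
```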


Advances in Materials Science and Engineering | 2016

Experimental Study on Detection of Deterioration in Concrete Using Infrared Thermography Technique

Jungwon Huh; Quang Huy Tran; Jong-Han Lee; Dongyeob Han; Jin-Hee Ahn; Solomon Yim

Concrete is prone to internal deterioration or defects during construction and operation. Compared with other nondestructive techniques, infrared thermography can detect subsurface delamination in a very short time, but accurately identifying its size and depth in concrete is a challenging task. In this study, experimental testing was carried out on a concrete specimen containing internal delaminations of various sizes at varying depths. Delaminations at 1 and 2 cm deep showed a good temperature contrast after only 5 minutes of heating, whereas delaminations at 3 cm deep required about 15 minutes of heating to yield a usable temperature contrast. In addition, the size of the delamination at 3 cm deep could be estimated with a difference of 10% to 28% after 20 minutes of heating. The depth of the delamination was linearly correlated with the increase in its estimated size.


Canadian Journal of Remote Sensing | 2012

Context-adaptive pansharpening algorithm for high-resolution satellite imagery

Jaewan Choi; Dongyeob Han; Yong-Il Kim

Pansharpening algorithms are important methods for overcoming the technical limitations of satellite sensors. However, most approaches to pansharpening have tended either to distort the spectral characteristics of the original multispectral image or to reduce the visual sharpness of the panchromatic image. In this paper, we propose a pansharpening algorithm that uses both a global and a local context-adaptive parameter based on component substitution. The purpose of this algorithm is to produce fused images with spectral information similar to that of multispectral data while preserving spatial sharpness better than other algorithms such as the Gram–Schmidt algorithm, the Additive Wavelet Luminance Proportional algorithm, and algorithms developed in our previous work. The proposed parameter is calculated using the spatial and spectral characteristics of an image. It is derived from a spatial correlation between each multispectral band and the adjusted-intensity image based on Laplacian filtering and statistical ratios, and the parameter is subsequently adaptively adjusted using image entropy to optimize the fusion quality in terms of the sensor and image characteristics. An experiment using IKONOS-2, QuickBird-2, and GeoEye-1 imagery demonstrated that a global context-adaptive parameter model is effective for spatial enhancement and that a local context-adaptive model is useful for image visualization and spectral information preservation.
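For context, the following is a minimal sketch of generic component-substitution pansharpening with a single injection gain per band; the paper's global and local context-adaptive parameters are not reproduced here, and the intensity weights are illustrative assumptions.

```python
import numpy as np

def cs_pansharpen(pan, ms_upsampled, weights=None):
    """pan: (H, W); ms_upsampled: (bands, H, W) resampled to the pan grid."""
    bands = ms_upsampled.shape[0]
    w = np.full(bands, 1.0 / bands) if weights is None else np.asarray(weights)
    intensity = np.tensordot(w, ms_upsampled, axes=1)   # synthetic intensity component
    detail = pan - intensity                            # spatial detail to be injected
    fused = []
    for band in ms_upsampled:
        c = np.cov(band.ravel(), intensity.ravel())
        gain = c[0, 1] / c[1, 1]                        # a common covariance-based gain
        fused.append(band + gain * detail)
    return np.stack(fused)
```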


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2015

Block-Based Fusion Algorithm With Simulated Band Generation for Hyperspectral and Multispectral Images of Partially Different Wavelength Ranges

Yeji Kim; Jaewan Choi; Dongyeob Han; Yong-Il Kim

As current and future satellite systems provide both hyperspectral and multispectral images, a need has arisen for image fusion using hyperspectral and multispectral images to improve the fusion quality. This study introduces a hyperspectral image fusion algorithm using multispectral images with a higher spatial resolution and partially different wavelength range compared with the corresponding hyperspectral images. This study focuses on an image fusion technique that enhances the spatial quality and preserves the spectral information of hyperspectral images. The proposed algorithm generates a simulated multispectral band via a spectral unmixing technique and extracts high-frequency information based on blocks of associated bands. The algorithm was applied to Compact Airborne Spectrographic Imager (CASI) datasets acquired in two modes and was compared with two existing methods. Although the wavelength range of the multispectral image did not coincide with that of the hyperspectral image, the proposed algorithm efficiently improved the spatial details and preserved the spectral information of the fused results.
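A rough sketch of the simulated-band idea, under the assumption of a simple linear mixing model: abundances estimated from the hyperspectral bands by nonnegative least squares are recombined with endmember values of the target band. The endmember matrices are placeholders, not the paper's actual inputs.

```python
import numpy as np
from scipy.optimize import nnls

def simulate_band(hsi, endmembers_hsi, endmembers_target):
    """hsi: (bands, H, W); endmembers_hsi: (bands, M) endmember spectra in the
    hyperspectral range; endmembers_target: (M,) endmember values in the target band."""
    bands, H, W = hsi.shape
    pixels = hsi.reshape(bands, -1)
    simulated = np.empty(pixels.shape[1])
    for i in range(pixels.shape[1]):                      # pixel-wise unmixing (slow but simple)
        abundances, _ = nnls(endmembers_hsi, pixels[:, i])
        simulated[i] = abundances @ endmembers_target     # recombine in the target band
    return simulated.reshape(H, W)
```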


International Geoscience and Remote Sensing Symposium | 2012

Automatic registration of high-resolution optical and SAR images based on an integrated intensity- and feature-based approach

Youkyung Han; Yong-Min Kim; Junho Yeom; Dongyeob Han; Yong-Il Kim

Precise image-to-image registration is required to use multi-sensor data in a variety of remote sensing applications. The purpose of this paper is to develop an automatic algorithm that co-registers high-resolution optical and SAR images based on an integrated intensity- and feature-based approach. As a pre-registration step, the initial translation differences in the x and y directions between the images were estimated with the Simulated Annealing optimization method, using Mutual Information as the objective function. After the pre-registration, line features were extracted to design a cost function that finds matching features based on the similarities of their locations and gradient orientations. Only the feature with the minimum cost in each regular grid region was selected as a final matching point, so as to extract a large number of well-distributed points. The final points were then used to construct a transformation that combines a piecewise linear function with an affine transformation to increase the accuracy of the geometric correction.
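To illustrate the pre-registration objective only, the sketch below computes mutual information between the optical and SAR images and searches integer translations by brute force; the paper uses Simulated Annealing rather than this exhaustive search, and the shift range is an arbitrary assumption.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information between two equally sized images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def best_translation(optical, sar, max_shift=20):
    """Brute-force search for the integer x/y shift of the SAR image that
    maximizes mutual information with the optical image."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(sar, dy, axis=0), dx, axis=1)
            mi = mutual_information(optical, shifted)
            if mi > best_mi:
                best, best_mi = (dx, dy), mi
    return best
```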


Remote Sensing Letters | 2018

Efficient seamline determination for UAV image mosaicking using edge detection

Truong Linh Nguyen; Young-Gi Byun; Dongyeob Han; Jungwon Huh

Mosaicking of individual high-resolution unmanned aerial vehicle (UAV) images is required to obtain sufficient coverage of extensive roads. During the mosaicking process, visible seams may appear due to differences in illumination or projection between individual images, or the presence of moving objects. This study presents an efficient seamline determination technique based on edge detection for UAV road surface images. The algorithm can be divided into three main steps. First, we detect the edges in the overlapping intensity image within the road boundary. Next, we obtain an automatic seamline passing through regions of non-attraction in areas of overlap. Finally, we adjust the values of the overlapping region using the values of the corresponding individual images by following the coordinates of the seamline detected in the second step, ultimately creating an image mosaic. The experiment using UAV images of a road surface demonstrates that the proposed method produces a satisfactory result. The proposed method can be applied for the quick mosaicking of UAV images used for road safety maintenance.
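As a generic stand-in for the seamline step (not the paper's exact procedure), the sketch below uses the edge magnitude of the overlapping intensity image as a cost surface and traces a minimum-cost top-to-bottom path by dynamic programming.

```python
import numpy as np
from scipy.ndimage import sobel

def seamline(overlap_intensity):
    """overlap_intensity: (H, W) grayscale overlap region.
    Returns one column index per row describing the seamline."""
    img = overlap_intensity.astype(float)
    cost = np.hypot(sobel(img, axis=0), sobel(img, axis=1))   # edge magnitude as cost
    acc = cost.copy()
    H, W = cost.shape
    for r in range(1, H):                                     # cumulative minimum cost
        left = np.roll(acc[r - 1], 1)
        left[0] = np.inf
        right = np.roll(acc[r - 1], -1)
        right[-1] = np.inf
        acc[r] += np.minimum(np.minimum(left, acc[r - 1]), right)
    path = [int(np.argmin(acc[-1]))]                          # backtrack from the cheapest end
    for r in range(H - 2, -1, -1):
        c = path[-1]
        lo, hi = max(c - 1, 0), min(c + 2, W)
        path.append(lo + int(np.argmin(acc[r, lo:hi])))
    return path[::-1]
```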


Journal of Physics: Conference Series | 2017

Real-time UAV trajectory generation using feature points matching between video image sequences

Younggi Byun; Jeongheon Song; Dongyeob Han

Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance, and surveillance missions. In this paper, we present a systematic approach to generating a UAV trajectory using a video image matching system based on SURF (Speeded-Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find matching points between video image sequences and removed mismatches using Preemptive RANSAC, which divides the matching points into inliers and outliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
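A minimal sketch of the matching-and-relative-pose step with OpenCV is shown below. ORB is used as a stand-in for SURF (which requires the opencv-contrib build), and the RANSAC option of findEssentialMat stands in for Preemptive RANSAC; the camera matrix K is assumed known from calibration.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate rotation R and translation direction t between two frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC on the essential matrix separates inliers from mismatches
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```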


Digital Heritage International Congress | 2013

The image based modelling of Jinnamgwan

Dongyeob Han; Hongsung Jin

Jinnamgwan is a National Treasure of Korea and a symbol of Yeosu city. It is a single-story wooden building that was built in 1599 after the Imjin War. Jinnamgwan has 68 tall columns that support the collar beams. The side blocks of the building have two side beams whose heads extend above the girders. These many columns make it more difficult to perform image-based modeling using an unstructured image set. Many researchers have combined photogrammetry and terrestrial laser scanning in different ways. In this paper, we used only images that can be acquired from internet sites. Structure from motion refers to the process of estimating three-dimensional structures from two-dimensional image sequences; it has been studied in the fields of computer vision and visual perception. We applied a modified structure-from-motion algorithm for 3D modeling using the PhotoScan software. Close-range images were used for column modeling and aerial images were used for roof modeling. Tens of multi-temporal close-range images were acquired from websites such as Google and several Korean portals. Because we could not construct a roof model from the few close-range images of roofs, three digital aerial images were processed. In our experiment, a building model of Jinnamgwan could not be made using all images at the same time, so the close-range images were divided into sub-groups representing the front and side facades. The model from each group's images constitutes a part of Jinnamgwan. In the future, to integrate the sub-models into a complete wall model accurately, we will perform a simultaneous adjustment to compensate for systematic errors by applying a three-dimensional conformal transformation to the point clouds of the sub-models.
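For the planned merging step, a sketch of a three-dimensional conformal (similarity) transformation estimated from corresponding points of two sub-model point clouds is given below (the closed-form Umeyama/Procrustes solution); the correspondences are assumed given, and this is not the PhotoScan workflow itself.

```python
import numpy as np

def conformal_transform(src, dst):
    """src, dst: (N, 3) corresponding points of two point clouds. Returns
    scale s, rotation R, translation t such that dst ~ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # enforce a proper rotation
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()        # least-squares scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```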


International Geoscience and Remote Sensing Symposium | 2012

Object-based classification and building extraction by integrating airborne LiDAR data and aerial image

Yong-Min Kim; Youkyung Han; Junho Yeom; Dongyeob Han; Yong-Il Kim

It is generally difficult to classify objects of the same type but with different colors into the same class using only optical data such as satellite or aerial images. This paper proposes a method that solves this problem by combining LiDAR data and an aerial image. The method extracts building pixels from the LiDAR data and then identifies building objects in the aerial image by overlaying the LiDAR result on a segmented aerial image through a defined rule. This process transfers the building objects derived from the LiDAR data to objects of the aerial image.
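A simplified sketch of the two ingredients described here, assuming a normalized DSM is available and the aerial image has already been segmented; the height threshold and overlap rule are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def building_mask(dsm, dtm, min_height=2.5):
    """dsm, dtm: (H, W) surface and terrain models on the same grid.
    Returns a boolean mask of pixels elevated above the ground."""
    ndsm = dsm - dtm
    return ndsm > min_height

def label_building_objects(segments, mask, min_overlap=0.5):
    """segments: (H, W) integer segment ids of the segmented aerial image.
    A segment is labeled a building object if most of its pixels fall in the mask."""
    labels = {}
    for seg_id in np.unique(segments):
        pixels = segments == seg_id
        labels[int(seg_id)] = float(mask[pixels].mean()) >= min_overlap
    return labels
```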

Collaboration


Dive into Dongyeob Han's collaborations.

Top Co-Authors

Yong-Il Kim (Seoul National University)
Yong-Min Kim (Seoul National University)
Jaewan Choi (Chungbuk National University)
Jungwon Huh (Chonnam National University)
Quang Huy Tran (Chonnam National University)
Youkyung Han (Seoul National University)
Choonghyun Kang (Chonnam National University)
Hongsung Jin (Chonnam National University)
Hyoseong Lee (Sunchon National University)
Yang-Dam Eo (Seoul National University)