Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yeejin Lee is active.

Publication


Featured research published by Yeejin Lee.


International Conference on Image Processing | 2014

Stereo image defogging

Yeejin Lee; Kristofor B. Gibson; Zucheul Lee; Truong Q. Nguyen

This paper presents a new approach to estimating fog-free images from stereo foggy image pairs. Most existing visibility restoration algorithms estimate transmission without separating the scattering coefficient from object distance. In contrast, we estimate transmission by computing both the scattering coefficient and the depth information of a scene. In the proposed method, the natural color of a foggy image is recovered using depth information from a stereo image pair, without requiring prior knowledge or multiple images taken at different times. Furthermore, we explore a new way to measure the scattering coefficient from a stereo image pair from an image processing perspective. Experimental results verify that the proposed method outperforms conventional defogging methods.
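The image formation behind this line of work can be sketched with the standard atmospheric scattering model, in which transmission decays exponentially with scene depth. The following is a minimal illustration, not the paper's actual implementation; the scattering coefficient `beta`, the depth map, and the airlight are assumed to be given (in the paper, depth comes from stereo matching and `beta` is estimated from the pair):

```python
import numpy as np

def transmission_from_depth(depth, beta):
    """Transmission t(x) = exp(-beta * d(x)) for scattering coefficient beta."""
    return np.exp(-beta * depth)

def defog(foggy, depth, beta, airlight, t_min=0.1):
    """Invert the haze model I = J * t + A * (1 - t) to recover the
    fog-free image J. Transmission is clipped away from zero to avoid
    amplifying noise in very distant regions."""
    t = np.clip(transmission_from_depth(depth, beta), t_min, 1.0)
    return (foggy - airlight) / t[..., None] + airlight
```

With exact `beta`, depth, and airlight the inversion is exact; in practice the estimation errors in those three quantities dominate restoration quality.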


Congress on Evolutionary Computation | 2007

Improving generalization capability of neural networks based on simulated annealing

Yeejin Lee; Jong Seok Lee; Sun-Young Lee; Cheol Hoon Park

This paper presents single-objective and multiobjective stochastic optimization algorithms for the global training of neural networks based on simulated annealing. The algorithms overcome the limitation of conventional gradient-based training methods, which converge to local optima, by performing global optimization of the network weights. In particular, the multiobjective training algorithm is designed to enhance the generalization capability of the trained networks by simultaneously minimizing the training error and the dynamic range of the network weights. For fast convergence and good solution quality, we propose a hybrid of simulated annealing with a gradient-based local optimization method. Experimental results show that networks trained by the proposed methods outperform those trained by gradient-based local training alone and, moreover, that their generalization capability is significantly improved by preventing overfitting.


IEEE Transactions on Image Processing | 2017

Joint Defogging and Demosaicking

Yeejin Lee; Keigo Hirakawa; Truong Nguyen

Image defogging is a technique used extensively for enhancing the visual quality of images captured in bad weather. Even though defogging algorithms have been well studied, defogging performance is degraded by demosaicking artifacts and amplified sensor noise in distant scenes. To improve the visual quality of restored images, we propose a novel approach that performs defogging and demosaicking simultaneously. We show that combining the two steps yields better defogging performance with fewer artifacts, and demonstrate that the proposed joint algorithm also suppresses noise amplification in distant scenes. In addition, we validate our theoretical analysis and observations on both synthesized data sets with ground-truth fog-free images and natural scene data sets captured in a raw format.


IEEE Signal Processing Letters | 2014

Depth-Assisted Frame Rate Up-Conversion for Stereoscopic Video

Xiaohui Yang; Ju Liu; Jiande Sun; Yeejin Lee; Truong Q. Nguyen

In this letter, we propose a depth-assisted frame rate up-conversion (DA-FRUC) scheme with applications in 3D video processing. Using the depth cue in the video-plus-depth representation, we categorize the blocks of the interpolated frame into depth-continuous and depth-discontinuous groups. Motion vector (MV) outliers of the depth-continuous blocks are then detected and corrected by a layer-constrained MV refinement method. Moreover, a depth-based adaptive interpolation and block segmentation method is proposed to handle disocclusion and occlusion at the boundary of the foreground area. Experimental results show that the proposed method achieves better subjective and objective performance for the interpolated color frames. Additionally, the interpolated depth frames are more accurate, which benefits video quality in view synthesis.
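The building block that FRUC schemes like this one refine is bidirectional motion-compensated interpolation. The minimal sketch below (nearest-neighbor sampling, no depth cue, no outlier correction) only shows where the motion vector field enters; it is not the paper's method:

```python
import numpy as np

def mc_interpolate(prev, nxt, mv):
    """Bidirectional motion-compensated interpolation of the midpoint frame.
    `mv` is a per-pixel (dy, dx) motion field from `prev` to `nxt`; each
    output pixel averages the two samples found halfway along its vector."""
    h, w = prev.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = mv[..., 0], mv[..., 1]
    y0 = np.clip((ys - dy / 2).round().astype(int), 0, h - 1)
    x0 = np.clip((xs - dx / 2).round().astype(int), 0, w - 1)
    y1 = np.clip((ys + dy / 2).round().astype(int), 0, h - 1)
    x1 = np.clip((xs + dx / 2).round().astype(int), 0, w - 1)
    return 0.5 * (prev[y0, x0] + nxt[y1, x1])
```

The quality of the result hinges entirely on the accuracy of `mv` near object boundaries, which is exactly where the depth-based block classification and MV refinement of DA-FRUC intervene.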


International SoC Design Conference | 2015

Analysis of color rendering transformation effects on dehazing performance

Yeejin Lee; Truong Q. Nguyen

Dehazing is an image restoration technique generally applied as a post-processing step in a camera processing pipeline. The performance of dehazing algorithms depends strongly on the electro-optic transfer curve of image acquisition. To reduce color distortion in restored images, we investigate how color rendering transformations affect dehazing algorithms in a camera processing pipeline. First, we develop a numerical analysis of how the hazy image model is modified by color rendering transformations in the pipeline. Based on this analysis and our observations, we conclude that better dehazing performance is achieved when a dehazing algorithm is applied before color rendering transformations. In addition to the theoretical analysis, experimental results validate our conclusion.
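The effect of the processing order can be shown numerically with the standard haze model and a simple gamma curve standing in for the color rendering transformation. This is a toy illustration of the ordering argument, not the paper's analysis:

```python
def add_haze(j, t, a=1.0):
    """Standard haze model: observed = radiance * t + airlight * (1 - t)."""
    return j * t + a * (1 - t)

def dehaze(i, t, a=1.0):
    """Invert the haze model, assuming the model holds in the current domain."""
    return (i - a * (1 - t)) / t

# A synthetic linear-sensor pixel, its transmission, and a gamma rendering curve.
j, t, gamma = 0.2, 0.5, 1.0 / 2.2
i_linear = add_haze(j, t)

# Dehazing before rendering recovers the true radiance exactly, while
# dehazing after gamma rendering does not, because the haze model no
# longer holds in the nonlinear (rendered) domain.
recovered_linear = dehaze(i_linear, t)
recovered_rendered = dehaze(i_linear ** gamma, t)
```

Running the two branches and mapping the second back to linear space shows a residual color error in the render-first order, which is the distortion the paper's pipeline placement avoids.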


IEEE IVMSP Workshop | 2013

Frame rate up conversion of 3D video by motion and depth fusion

Yeejin Lee; Zucheul Lee; Truong Q. Nguyen

This paper presents a new frame-rate up-conversion method for the video-plus-depth representation. The proposed method improves video quality using the additional depth information, which is exploited in three ways to improve the accuracy of motion estimation. First, the depth frame is added to the block matching criterion to estimate initial motion. Second, object boundary regions are detected by combining depth value variation with a local depth distribution adjustment technique. Finally, the motion vector field is refined adaptively by segmenting macroblocks into object regions. Any existing block-based motion-compensated frame interpolation method may use our approach to refine its motion vector field with depth frames. Experimental results verify that the proposed method outperforms conventional motion-compensated frame interpolation algorithms while preserving object structure.


IEEE Transactions on Image Processing | 2018

Camera-Aware Multi-Resolution Analysis for Raw Image Sensor Data Compression

Yeejin Lee; Keigo Hirakawa; Truong Q. Nguyen

We propose novel lossless and lossy compression schemes for color filter array (CFA) sampled images based on Camera-Aware Multi-Resolution Analysis (CAMRA). Specifically, CAMRA refers to modifications we make to the wavelet transform of CFA sampled images in order to achieve a very high degree of decorrelation in the finest-scale wavelet coefficients, together with a series of color processing steps applied to the coarse-scale wavelet coefficients, aimed at limiting the propagation of lossy compression errors through the subsequent camera processing pipeline. We validated our theoretical analysis and the performance of the proposed compression schemes using images of natural scenes captured in a raw format. The experimental results verify that our methods improve coding efficiency relative to both standard and state-of-the-art compression schemes for CFA sampled images.
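The multi-resolution analysis underlying CAMRA is a wavelet transform. A minimal one-level 2D Haar decomposition, without any of CAMRA's camera-aware modifications, looks like this (the averaging normalization is one common convention among several):

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform: returns the coarse
    approximation plus the three detail subbands. Assumes even height
    and width."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # coarse scale
    lh = (a + b - c - d) / 4.0  # vertical detail
    hl = (a - b + c - d) / 4.0  # horizontal detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh
```

Applied naively to a CFA mosaic, the alternating color samples leak into the detail subbands; CAMRA's modifications are aimed precisely at decorrelating those finest-scale coefficients.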


International SoC Design Conference | 2016

Nighttime image enhancement applying dark channel prior to raw data from camera

Yan Gong; Yeejin Lee; Truong Q. Nguyen

This paper presents an alternative approach to enhancing the visibility of images captured at night. To improve the visual quality of nighttime images, we perform image enhancement in the linear sensor space, which best represents the image formation model. Our analysis shows that the proposed approach reduces error amplification and computational complexity. In addition, experimental results verify that the proposed method enhances visibility on a real-world dataset.
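The dark channel prior named in the title can be sketched as follows. This is the standard computation (per-pixel minimum over color channels, then over a local patch, following He et al.) on an RGB array for illustration; the paper applies the prior to linear raw sensor data:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel minimum over color channels and over
    a local patch. Haze-free outdoor patches tend to have a dark channel
    near zero, so large values indicate haze or ambient light."""
    mins = img.min(axis=2)  # minimum over R, G, B
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out
```

The nested loops keep the sketch readable; a production version would use a sliding-window minimum filter instead.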


International SoC Design Conference | 2016

Dehazing in color filter array domain

Yeejin Lee; Truong Q. Nguyen; Changyoung Han

This paper introduces an alternative image processing pipeline for efficiently implementing dehazing algorithms. In the proposed pipeline, demosaicking is performed after the dehazing process, and transmission is estimated in the color filter array plane itself. The proposed method reduces implementation complexity and suppresses the amplification of demosaicking artifacts. Experimental results verify that the proposed pipeline requires fewer memory resources and less computation while preserving visual quality.


International Symposium on Visual Computing | 2015

Efficient Hand Articulations Tracking Using Adaptive Hand Model and Depth Map

Byeongkeun Kang; Yeejin Lee; Truong Q. Nguyen

Real-time hand articulation tracking is important for many applications, such as interacting with virtual/augmented reality devices. However, most existing algorithms rely heavily on expensive, power-hungry GPUs to achieve real-time processing, making them unsuitable for mobile and wearable devices. In this paper, we propose an efficient hand tracking system that does not require high-performance GPUs.

Collaboration


Dive into Yeejin Lee's collaborations.

Top Co-Authors

Changyoung Han, University of California
Truong Nguyen, University of California
Zucheul Lee, University of California
Yan Gong, University of California
Jiande Sun, Shandong Normal University
Ju Liu, Shandong University