Publication


Featured research published by Sung-Uk Jung.


International Symposium on Visual Computing | 2008

Real-Time Face Verification for Mobile Platforms

Sung-Uk Jung; Yun-Su Chung; Jang-Hee Yoo; Kiyoung Moon

We propose a novel method for real-time face verification on mobile platforms such as PDAs and cell phones. To implement the real-time system, fixed-point arithmetic is used and the face components are extracted with a fast boosting algorithm. In addition, an image reduction method based on a pre-calculated look-up table is applied, and the integral image calculation is modified to reduce the processing time. The valid coefficients of the DCT-transformed face region are then calculated efficiently, and the face features are extracted using the EP-LDA method. In experiments, the usefulness of the proposed method has been demonstrated on a smart phone with high face verification performance.
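
As a rough illustration of one of the speed-ups mentioned above, the sketch below (a minimal Python/NumPy example, not the authors' implementation) computes an integral image and uses it to evaluate box sums in constant time, which is the core operation behind boosted Haar-like face detectors; all function and variable names are illustrative.

    import numpy as np

    def integral_image(gray):
        # Cumulative sums over rows and columns, padded with a zero
        # border so that box sums need no boundary checks.
        ii = np.cumsum(np.cumsum(gray.astype(np.int64), axis=0), axis=1)
        return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

    def box_sum(ii, top, left, height, width):
        # Sum of pixels in a rectangle using four table look-ups,
        # independent of the rectangle size.
        return (ii[top + height, left + width]
                - ii[top, left + width]
                - ii[top + height, left]
                + ii[top, left])

    img = np.random.randint(0, 256, (120, 160))   # stand-in for a face image
    ii = integral_image(img)
    assert box_sum(ii, 10, 20, 24, 24) == img[10:34, 20:44].sum()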


Signal-Image Technology and Internet-Based Systems | 2007

Design of Embedded Multimodal Biometric Systems

Jang-Hee Yoo; Jong-Gook Ko; Yun-Su Chung; Sung-Uk Jung; Ki Hyun Kim; Kiyoung Moon; Kyoil Chung

Embedded devices for biometrics have gained increasing attention due to the demand for reliable and cost-effective personal identification systems. However, currently available embedded devices are not suitable for the real-time implementation of a biometric application system because of their limited computational resources and memory space. In this paper, we describe the design of embedded biometric systems that identify a person using face-fingerprint or iris-fingerprint multimodal biometrics. To implement a real-time system, the biometric algorithms are efficiently adapted for fixed-point representation and optimized for memory and computational capacity. In addition, the most time-consuming components of each biometric algorithm are implemented in a field-programmable gate array (FPGA).
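
To make the fixed-point remark concrete, the short Python sketch below shows a generic Q15 fixed-point multiply and dot product of the kind commonly used when porting floating-point biometric code to embedded processors; it illustrates the general technique only and is not code from the described system.

    Q = 15                        # number of fractional bits (Q15 format)
    SCALE = 1 << Q

    def to_q15(x):
        # Quantize a float in [-1, 1) to a fixed-point integer.
        return int(round(x * SCALE))

    def q15_mul(a, b):
        # The product has 2*Q fractional bits, so shift right by Q
        # to return to Q15.
        return (a * b) >> Q

    def q15_dot(xs, ys):
        # Dot product in Q15, e.g. one step of an LDA projection.
        acc = 0
        for a, b in zip(xs, ys):
            acc += q15_mul(a, b)
        return acc

    feat = [to_q15(v) for v in (0.25, -0.5, 0.125)]
    weights = [to_q15(v) for v in (0.5, 0.5, -0.25)]
    print(q15_dot(feat, weights) / SCALE)   # close to the floating-point result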


International Conference on Biometrics: Theory, Applications and Systems | 2010

On using gait biometrics to enhance face pose estimation

Sung-Uk Jung; Mark S. Nixon

Many face biometric systems use controlled environments where subjects are viewed directly facing the camera. This is less likely to occur in surveillance environments, so a process is required to handle the pose variation of the human head, changes in illumination, and the low frame rate of input image sequences. This has been achieved using scale-invariant features and 3D models to determine the pose of the human subject. Then, a gait trajectory model is generated to obtain the correct face region whilst handling the looming effect. In this way, we describe a new approach aimed at estimating face pose accurately. The contributions of this research include the construction of a 3D model for pose estimation from planar imagery and the first use of gait information to enhance the face pose estimation process.
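
For readers unfamiliar with model-based pose estimation, the sketch below shows the general idea of recovering head pose from 2D-3D point correspondences using OpenCV's solvePnP; the landmark coordinates and camera intrinsics are made-up placeholders, and the snippet is a generic illustration rather than the authors' pipeline.

    import numpy as np
    import cv2

    # Hypothetical 3D coordinates (in mm) of a few facial landmarks on a
    # generic head model: nose tip, chin, eye corners, mouth corners.
    model_points = np.array([
        [0.0,    0.0,    0.0],
        [0.0,  -63.6,  -12.5],
        [-43.3,  32.7,  -26.0],
        [43.3,   32.7,  -26.0],
        [-28.9, -28.9,  -24.1],
        [28.9,  -28.9,  -24.1],
    ], dtype=np.float64)

    # Matching 2D detections in the image (placeholder values).
    image_points = np.array([
        [359, 391], [399, 561], [337, 297],
        [513, 301], [345, 465], [453, 469],
    ], dtype=np.float64)

    focal, center = 950.0, (320.0, 240.0)    # assumed camera intrinsics
    K = np.array([[focal, 0, center[0]],
                  [0, focal, center[1]],
                  [0, 0, 1]], dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)
    R, _ = cv2.Rodrigues(rvec)               # head rotation matrix
    # Extract one Euler angle (yaw) under a common rotation convention.
    yaw = np.degrees(np.arctan2(-R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2)))
    print("estimated yaw (deg):", round(float(yaw), 1))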


Computer Vision/Computer Graphics Collaboration Techniques | 2007

A robust eye detection method in facial region

Sung-Uk Jung; Jang-Hee Yoo

We describe a novel eye detection method that is robust to obstacles such as surrounding illumination, hair, and eyeglasses. These obstacles in a face image constrain the detection of eye position and affect the performance of face applications such as face recognition, gaze tracking, and video indexing systems. To overcome this problem, the proposed eye detection method proceeds in three stages. First, self-quotient images are applied to the face images to rectify illumination. Then, pixels that are unnecessary for eye detection are removed using a symmetry object filter, and eye candidates are extracted using gradient descent, which is simple and fast to compute. Finally, a classifier trained with the AdaBoost algorithm selects the eyes from the candidates. The usefulness of the proposed method has been demonstrated on an embedded system with good eye detection performance.
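
As background on the illumination step, a self-quotient image is usually obtained by dividing an image by a smoothed version of itself; the minimal NumPy/SciPy sketch below illustrates that general idea (the smoothing scale is an arbitrary choice, not a value from the paper).

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def self_quotient_image(gray, sigma=8.0, eps=1e-3):
        # Divide the image by a low-pass (illumination) estimate so that
        # slowly varying lighting is largely cancelled out.
        img = gray.astype(np.float64)
        illumination = gaussian_filter(img, sigma=sigma)
        sqi = img / (illumination + eps)
        # Rescale to [0, 1] for display or further processing.
        sqi -= sqi.min()
        return sqi / (sqi.max() + eps)

    face = np.random.rand(64, 64) * np.linspace(0.2, 1.0, 64)  # fake shaded face
    normalized = self_quotient_image(face)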


IEEE Transactions on Information Forensics and Security | 2012

On Using Gait to Enhance Frontal Face Extraction

Sung-Uk Jung; Mark S. Nixon

Visual surveillance finds increasing deployment for monitoring urban environments. Operators need to be able to determine identity from surveillance images and often use face recognition for this purpose. In surveillance environments, it is necessary to handle pose variation of the human head, low frame rate, and low resolution input images. We describe the first use of gait to enable face acquisition and recognition, by analysis of 3-D head motion and gait trajectory, with super-resolution analysis. We use region- and distance-based refinement of head pose estimation. We develop a direct mapping to relate the 2-D image with a 3-D model. In gait trajectory analysis, we model the looming effect so as to obtain the correct face region. Based on head position and the gait trajectory, we can reconstruct high-quality frontal face images which are demonstrated to be suitable for face recognition. The contributions of this research include the construction of a 3-D model for pose estimation from planar imagery and the first use of gait information to enhance the face extraction process allowing for deployment in surveillance scenarios.
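
The super-resolution element mentioned above typically combines several low-resolution frames of the same face. The toy sketch below (plain NumPy, illustrative only) averages frames after placing them on an upscaled grid according to assumed registration shifts, which is the simplest form of multi-frame reconstruction and not the specific method of the paper.

    import numpy as np

    def upscale_nn(img, factor):
        # Nearest-neighbour upscaling by an integer factor.
        return np.kron(img, np.ones((factor, factor)))

    def fuse_frames(frames, shifts, factor=4):
        # Place each upscaled frame on a common high-resolution grid
        # according to its estimated shift, then average the stack.
        h, w = frames[0].shape
        acc = np.zeros((h * factor, w * factor))
        count = 0
        for frame, (dy, dx) in zip(frames, shifts):
            acc += np.roll(upscale_nn(frame, factor), (dy, dx), axis=(0, 1))
            count += 1
        return acc / count

    lowres = [np.random.rand(16, 16) for _ in range(4)]   # stand-in face crops
    shifts = [(0, 0), (1, 0), (0, 1), (2, 2)]             # assumed registration
    face_hr = fuse_frames(lowres, shifts)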


Computer Analysis of Images and Patterns | 2011

Detecting human motion with heel strikes for surveillance analysis

Sung-Uk Jung; Mark S. Nixon

Heel strike detection is an important cue for human gait recognition and detection in visual surveillance, since the heel strike positions can be used to derive the gait periodicity, stride, and step length. We propose a novel method for heel strike detection using a gait trajectory model, which is robust to occlusion, camera view, and low resolution, and which can generalize to a variety of surveillance imagery. When a person walks, the movement of the head is conspicuous and sinusoidal, and the highest point of the head trajectory occurs when the feet cross. Our gait trajectory model is constructed from trajectory data using non-linear optimization. The key frames in which the heel strikes take place are then extracted, and a region of interest (ROI) is obtained using the silhouette image of the key frame as a filter. Finally, gradient descent is applied to detect maxima, which are taken to be the times of the heel strikes. The experimental results show a detection rate of 95% on two databases. The contribution of this research is the first use of the gait trajectory in the heel strike position estimation process, and we contend that the approach provides a new basis for analysis in surveillance imagery.
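
To illustrate the trajectory-model idea, the sketch below fits a sinusoid to a sequence of head heights with SciPy and reads off the frames where the head is lowest, which for normal walking roughly coincide with double support around heel strike; the data and parameters are synthetic and the snippet is not the authors' implementation.

    import numpy as np
    from scipy.optimize import curve_fit

    def head_height_model(t, amp, freq, phase, offset):
        # Simple sinusoidal model of vertical head motion during gait.
        return amp * np.sin(2 * np.pi * freq * t + phase) + offset

    # Synthetic head-height measurements over 100 frames at 25 fps.
    t = np.arange(100) / 25.0
    true_heights = head_height_model(t, 2.0, 1.8, 0.3, 170.0)
    observed = true_heights + np.random.normal(0, 0.3, t.size)

    # Non-linear least-squares fit of the trajectory model.
    p0 = [1.0, 2.0, 0.0, observed.mean()]
    params, _ = curve_fit(head_height_model, t, observed, p0=p0)

    # Frames where the fitted head height is near its minimum are taken
    # as candidate key frames for heel strike detection.
    fitted = head_height_model(t, *params)
    key_frames = np.where(fitted < fitted.min() + 0.2)[0]
    print("candidate key frames:", key_frames[:10])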


Signal-Image Technology and Internet-Based Systems | 2007

Face Feature Extraction Using Elliptical Model Based Background Deletion and Generalized FEM

Yunsu Chung; Sung-Uk Jung; Young-Lae Bae; Kiyoung Moon

This paper addresses a new face feature extraction method using elliptical-model-based background deletion and a generalized facial energy map (FEM). First, the method utilizes an elliptical model of the face to obtain a well-normalized face image; this elliptical-model-based approach can therefore easily delete highly complex background regions. Next, the method generates a generalized FEM from the transformed data set of normalized face images. Finally, the DCT coefficients that potentially carry important information are extracted with the generalized FEM and analyzed using LDA. Experimental results show that the method effectively extracts feature vectors with reasonable time complexity and achieves good recognition performance.
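
For context, masking out the background with an elliptical face region and keeping only low-frequency DCT coefficients can be sketched as follows (NumPy/SciPy, illustrative only; the mask shape and the number of retained coefficients are arbitrary choices, not values from the paper).

    import numpy as np
    from scipy.fft import dctn

    def elliptical_mask(h, w):
        # Boolean mask that is True inside an ellipse inscribed in the image.
        yy, xx = np.mgrid[0:h, 0:w]
        cy, cx, ry, rx = (h - 1) / 2, (w - 1) / 2, h / 2, w / 2
        return ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0

    def face_features(gray, n_coeffs=64):
        # Delete the background outside the elliptical face model, then
        # keep a small block of low-frequency 2D DCT coefficients as the
        # raw feature vector (to be projected with LDA afterwards).
        h, w = gray.shape
        face = np.where(elliptical_mask(h, w), gray.astype(np.float64), 0.0)
        coeffs = dctn(face, norm="ortho")
        k = int(np.sqrt(n_coeffs))
        return coeffs[:k, :k].ravel()

    img = np.random.rand(64, 64)       # stand-in for a normalized face image
    feat = face_features(img)
    print(feat.shape)                  # (64,)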


IEEE Virtual Reality Conference | 2017

Real-time interactive AR system for broadcasting

Hyunwoo Cho; Sung-Uk Jung; Hyung-Keun Jee

For live television broadcasts such as educational programs for children conducted through viewer participation, the smooth integration of virtual content and the interaction between the cast and that content are important issues. Recently, there have been many attempts to make aggressive use of interactive virtual content in live broadcasts due to advances in AR/VR technology and virtual studio technology. These previous works have many limitations: they do not support real-time 3D space recognition or immersive interaction. We therefore propose an augmented reality based real-time broadcasting system that perceives the indoor space using a broadcasting camera and an RGB-D camera, and that supports real-time interaction between the augmented virtual content and the cast. The contribution of this work is the development of a new augmented reality based broadcasting system that not only enables filming with compatible interactive 3D content in live broadcasts but also drastically reduces production costs. For practical use, the proposed system was demonstrated in the broadcast program "Ding Dong Dang Kindergarten", a representative children's educational program on the national broadcasting channel of Korea.
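
As a small illustration of the indoor space recognition step with an RGB-D camera, the sketch below back-projects a depth image into a 3D point cloud under a pinhole camera model; the intrinsic values are placeholders and this is a generic technique, not the broadcast system's code.

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        # Back-project every valid depth pixel (in metres) into camera
        # coordinates using the pinhole model: X = (u - cx) * Z / fx, etc.
        h, w = depth.shape
        v, u = np.mgrid[0:h, 0:w]
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]      # drop pixels with no depth reading

    # Placeholder intrinsics roughly in the range of consumer RGB-D sensors.
    depth = np.random.uniform(0.5, 4.0, (480, 640))
    cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)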


International Conference on Computer Graphics and Interactive Techniques | 2016

An AR-based safety training assistant in disaster for children

Sung-Uk Jung; Hyunwoo Cho; Hyung-Keun Jee

Learning how to behave in dangerous situations such as earthquakes, fires, and floods is an important part of precaution and disaster prevention in children's education. Recently, AR/VR technologies and new devices (e.g., Oculus Rift and Microsoft HoloLens) have been developed so that such virtual experiences can be provided at low cost and with greater realism. In this sense, we present an AR-based safety training assistant that recognizes the real space, augments virtual objects, and supports interaction between the objects and users in real space according to the instructions for the disaster situation. The contribution of this research is the application of a new AR-based safety trainer to virtual disasters in real space, so that the system can be useful for disaster preparedness in practice.


World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering | 2008

Liveness Detection for Embedded Face Recognition System

Hyung-Keun Jee; Sung-Uk Jung; Jang-Hee Yoo

Collaboration


Dive into Sung-Uk Jung's collaboration.

Top Co-Authors

Jang-Hee Yoo, Electronics and Telecommunications Research Institute
Mark S. Nixon, University of Southampton
Kiyoung Moon, Electronics and Telecommunications Research Institute
Hyung-Keun Jee, Electronics and Telecommunications Research Institute
Yun-Su Chung, Electronics and Telecommunications Research Institute
Hansung Lee, Electronics and Telecommunications Research Institute
Hyunwoo Cho, Electronics and Telecommunications Research Institute
Sohee Park, Electronics and Telecommunications Research Institute
Jong-Gook Ko, Electronics and Telecommunications Research Institute
Ki Hyun Kim, Electronics and Telecommunications Research Institute