Publication


Featured research published by Joongrock Kim.


Journal of Applied Physics | 2008

Resistance switching characteristics in Li-doped NiO

Kyooho Jung; Joonhyuk Choi; Yongmin Kim; Hyunsik Im; Sunae Seo; Ranju Jung; Dong-chul Kim; Joongrock Kim; Bae Ho Park; Jung-Pyo Hong

We investigated the effects of lithium (Li) doping on bistable resistance switching in polycrystalline NiO films in the temperature range 10 K < T < 300 K. Compliance-dependent resistive switching transport revealed distinctive features not observed in previously studied undoped NiO films. An analysis of the temperature dependence of the resistive switching transport showed that Li doping can modify the thermal properties of the off-state, leading to stable on/off switching operation. The results clearly show that doping NiO with Li improves its retention properties and the stability of its on/off switching voltages.


EURASIP Journal on Advances in Signal Processing | 2012

3D hand tracking using Kalman filter in depth space

Sangheon Park; Sunjin Yu; Joongrock Kim; Sungjin Kim; Sangyoun Lee

Hand gestures are an important form of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive to variable lighting conditions. Moreover, since hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using a Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with that of a visual-based method. Experimental results show that the proposed method outperforms the visual-based method.
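
As an illustration of the tracking step, the sketch below implements a constant-velocity Kalman filter over a 3D hand position measured in depth space. It is a minimal Python sketch under assumed noise parameters and frame rate, not the authors' exact formulation.

    import numpy as np

    # Minimal sketch (assumed parameters, not the authors' exact implementation):
    # a constant-velocity Kalman filter tracking a 3D hand centroid (x, y, z)
    # measured in depth space at roughly 30 frames per second.
    class HandKalmanTracker:
        def __init__(self, dt=1.0 / 30.0, process_var=1e-2, meas_var=1e-1):
            self.x = np.zeros(6)                                # state: [x, y, z, vx, vy, vz]
            self.P = np.eye(6)                                  # state covariance
            self.F = np.eye(6)
            self.F[:3, 3:] = dt * np.eye(3)                     # constant-velocity motion model
            self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only position is measured
            self.Q = process_var * np.eye(6)                    # process noise
            self.R = meas_var * np.eye(3)                       # measurement noise

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:3]

        def update(self, z):
            # z: measured 3D hand position, e.g. the centroid of a motion cluster
            y = z - self.H @ self.x                             # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(6) - K @ self.H) @ self.P
            return self.x[:3]

    tracker = HandKalmanTracker()
    tracker.predict()
    print(tracker.update(np.array([0.10, 0.25, 0.80])))         # position in camera coordinates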


Sensors | 2015

Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

Kwangtaek Kim; Joongrock Kim; Jaesung Choi; Jung Hyun Kim; Sangyoun Lee

Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is widely agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve this problem, we propose a complete hand gesture control system that delivers immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern), which can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparison with existing methods: Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology based on a piezoelectric actuator was developed and integrated with the hand-tracking algorithm and a DTW (dynamic time warping) gesture recognition algorithm to form a complete immersive gesture control system. Quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared with a vision-based gesture control system, which typically provides no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines for developing more natural gesture control systems or immersive user interfaces with haptic feedback.
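
The recognition stage names dynamic time warping; the sketch below shows a standard DTW distance over hand trajectories and a nearest-template classifier. The feature choice (3D hand positions) and the classification rule are illustrative assumptions, not the paper's exact pipeline.

    import numpy as np

    # Standard DTW distance over gesture trajectories; the features (3D hand
    # positions) and the nearest-template classifier are illustrative assumptions.
    def dtw_distance(seq_a, seq_b):
        """Dynamic time warping cost between an (N, 3) and an (M, 3) trajectory."""
        n, m = len(seq_a), len(seq_b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])    # local point distance
                cost[i, j] = d + min(cost[i - 1, j],               # insertion
                                     cost[i, j - 1],               # deletion
                                     cost[i - 1, j - 1])           # match
        return cost[n, m]

    def classify_gesture(gesture, templates):
        # templates: dict mapping a gesture label to a recorded template trajectory
        return min(templates, key=lambda label: dtw_distance(gesture, templates[label]))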


Sensors | 2012

3D face modeling using the multi-deformable method

Jinkyu Hwang; Sunjin Yu; Joongrock Kim; Sangyoun Lee

In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve this problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, the 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces of individuals at the end of the paper.
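
The shape estimation step fits a statistical deformable model to detected features; the sketch below shows one common way to do this, a regularised least-squares fit of PCA coefficients to sparse 3D landmarks. The landmark indexing and regularisation weight are assumptions, not the paper's multi-deformable formulation.

    import numpy as np

    # Regularised least-squares fit of PCA shape coefficients to sparse 3D
    # landmarks; the landmark indexing and regularisation weight are assumptions.
    def fit_deformable_model(mean_shape, basis, landmark_idx, landmarks, reg=0.1):
        """
        mean_shape   : (3N,) flattened mean face vertices
        basis        : (3N, K) PCA deformation basis
        landmark_idx : model vertex indices with detected correspondences
        landmarks    : (len(landmark_idx), 3) observed 3D landmark positions
        Returns the fitted coefficients and the reconstructed dense shape.
        """
        rows = np.concatenate([[3 * i, 3 * i + 1, 3 * i + 2] for i in landmark_idx])
        A = basis[rows]                                   # model restricted to landmark rows
        b = landmarks.reshape(-1) - mean_shape[rows]
        k = basis.shape[1]
        coeffs = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ b)
        return coeffs, mean_shape + basis @ coeffs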


Pattern Recognition | 2017

An adaptive local binary pattern for 3D hand tracking

Joongrock Kim; Sunjin Yu; Dong-Chul Kim; Kar-Ann Toh; Sangyoun Lee

Ever since real-time three-dimensional (3D) data acquisition sensors such as time-of-flight and Kinect depth sensors became available, the performance of gesture recognition has been greatly enhanced. However, since conventional two-dimensional (2D) image-based feature extraction methods such as the local binary pattern (LBP) generally rely on texture information, they cannot be applied to depth or range images, which contain no texture information. In this paper, we propose an adaptive local binary pattern (ALBP) for effective depth-image-based applications. In contrast to the conventional LBP, which is only rotation invariant, the proposed ALBP is invariant to both rotation and the depth distance in range images. Using ALBP, we can extract object features without using texture or color information. We further apply the proposed ALBP to hand tracking using depth images to show its effectiveness and usefulness. Our experimental results validate the proposal.
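
The exact ALBP definition is given in the paper; as a rough, assumption-laden illustration of the idea, the sketch below computes an LBP-like code whose comparison threshold scales with the centre pixel's depth, so the descriptor tolerates changes in the distance to the sensor.

    import numpy as np

    # Assumption-laden illustration: an LBP-like code whose comparison threshold
    # scales with the centre pixel's depth, so the descriptor tolerates changes
    # in the hand's distance from the sensor. See the paper for the exact ALBP.
    def adaptive_lbp(depth, y, x, radius=1, k=0.01):
        center = depth[y, x]
        thresh = k * center                          # tolerance grows with distance
        offsets = [(-radius, -radius), (-radius, 0), (-radius, radius), (0, radius),
                   (radius, radius), (radius, 0), (radius, -radius), (0, -radius)]
        code = 0
        for bit, (dy, dx) in enumerate(offsets):
            if depth[y + dy, x + dx] - center > thresh:
                code |= 1 << bit
        return code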


Sensors | 2013

3D Multi-Spectrum Sensor System with Face Recognition

Joongrock Kim; Sunjin Yu; Ig Jae Kim; Sangyoun Lee

This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we refer to as a 3D multi-spectrum sensor system, comprising three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates the information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensor information. To demonstrate the effectiveness of the proposed system, a face recognition system under light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new, effectively fused features for the face recognition system, is obtained.
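
One way such a calibrated multi-sensor framework can feed a recogniser is feature-level fusion; the sketch below normalises and concatenates visible, thermal-IR and depth features and scores them with cosine similarity. The fusion rule is an assumption for illustration, not necessarily the one used in the paper.

    import numpy as np

    # Illustrative feature-level fusion: per-sensor features are z-normalised,
    # concatenated and matched by cosine similarity. The actual fusion rule in
    # the paper may differ.
    def fuse_features(visible_feat, thermal_feat, depth_feat):
        parts = []
        for f in (visible_feat, thermal_feat, depth_feat):
            f = np.asarray(f, dtype=float)
            parts.append((f - f.mean()) / (f.std() + 1e-8))   # per-sensor normalisation
        return np.concatenate(parts)

    def cosine_score(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))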


Conference on Industrial Electronics and Applications | 2009

Nonintrusive 3-D face data acquisition system

Joongrock Kim; Sunjin Yu; Jinkyu Hwang; Soo-Yeon Kim; Sangyoun Lee

This paper describes a nonintrusive three-dimensional (3-D) face data acquisition system consisting of a stereo vision system and an 850 nm near-infrared line laser. Although a two-dimensional (2-D) face recognition system can achieve a reliable recognition rate, its performance can be degraded by illumination and pose variation. To alleviate these factors, 3-D face recognition has received much attention. To develop a reliable 3-D face recognition system, many researchers have also focused on 3-D face data acquisition. Earlier 3-D face acquisition systems use visible patterns as features to obtain accurate 3-D data, which can make the person being verified uncomfortable. In this paper, we propose a novel, almost invisible 850 nm near-infrared line laser pattern for 3-D face data acquisition. The reconstructed 3-D face data consist of over 20,000 3-D points, and these data can be used effectively for 3-D face recognition.
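
The depth measurement itself follows standard stereo triangulation of the detected laser stripe; the sketch below shows that step for a rectified pair, with placeholder values for the focal length, baseline and principal point.

    import numpy as np

    # Standard stereo triangulation of the detected laser stripe in a rectified
    # pair; the focal length, baseline and principal point are placeholder values.
    def triangulate_stripe(xl, xr, y, f=800.0, baseline=0.12, cx=320.0, cy=240.0):
        """xl, xr: stripe column in the left/right image on the same row y."""
        disparity = xl - xr
        if disparity <= 0:
            return None                          # invalid correspondence
        Z = f * baseline / disparity             # depth from stereo disparity
        X = (xl - cx) * Z / f
        Y = (y - cy) * Z / f
        return np.array([X, Y, Z])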


Optical Engineering | 2009

Iterative three-dimensional head pose estimation using a face normal vector

Sunjin Yu; Joongrock Kim; Sangyoun Lee

The performance of face recognition systems has long been burdened by head pose variation. To solve this problem, 3-D face recognition systems that make use of multiple views and depth information have been suggested. However, without accurate head pose estimation, the performance improvement of 3-D face recognition systems under pose variation remains limited. Previous research on 3-D head pose estimation has been conducted in 3-D space, where the estimation complexity is high, and it is difficult to incorporate salient 2-D face features for effective estimation. We propose a novel iterative 3-D head pose estimation method incorporating both 2-D and 3-D face information. To verify its effectiveness, we apply the proposed method to 3-D face modeling and recognition systems, with adaptation to various 3-D face data acquisition devices. Our experimental results show that the proposed method is very effective for modeling and recognition applications, particularly when combining different kinds of acquisition devices that use different coordinate origins and scales.
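
A single, non-iterative pose estimate from a face normal can be sketched as below: the normal of the plane through the two eye centres and the mouth centre is converted to yaw and pitch. The landmark choice and sign conventions are assumptions, and the paper's iterative 2-D/3-D refinement is not reproduced here.

    import numpy as np

    # Single pose estimate from three 3D landmarks: the normal of the plane
    # through the eye centres and the mouth centre is converted to yaw/pitch.
    # Landmark choice and sign conventions are assumptions.
    def face_normal_pose(left_eye, right_eye, mouth):
        v1 = np.asarray(right_eye, float) - np.asarray(left_eye, float)
        v2 = np.asarray(mouth, float) - np.asarray(left_eye, float)
        n = np.cross(v1, v2)
        n /= np.linalg.norm(n)
        if n[2] > 0:                             # make the normal face the camera
            n = -n
        yaw = np.degrees(np.arctan2(n[0], -n[2]))
        pitch = np.degrees(np.arctan2(n[1], -n[2]))
        return n, yaw, pitch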


Sensors | 2014

Random-Profiles-Based 3D Face Recognition System

Joongrock Kim; Sunjin Yu; Sangyoun Lee

In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition depends highly on the precision of the acquired 3D face data, and it requires more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50,000 3D points and that the system achieves a reliable recognition rate under pose variation.
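
As a rough illustration of the random-profile idea, the sketch below slices a face point cloud with a randomly oriented vertical plane through the nose tip and resamples the resulting curve into a fixed-length profile that can be compared between scans. The slicing tolerance, resampling and distance measure are assumptions, not the paper's exact method.

    import numpy as np

    # Rough illustration only: a "random profile" is taken as the face points
    # lying close to a randomly oriented vertical plane through the nose tip,
    # resampled into a fixed-length depth curve (units follow the point cloud).
    def random_profile(points, nose_tip, rng, tol=2.0, n_samples=64):
        theta = rng.uniform(0.0, np.pi)
        direction = np.array([np.cos(theta), np.sin(theta), 0.0])   # along the profile
        normal = np.array([-np.sin(theta), np.cos(theta), 0.0])     # profile plane normal
        rel = points - nose_tip
        on_plane = points[np.abs(rel @ normal) < tol]               # points near the plane
        t = (on_plane - nose_tip) @ direction                       # position along the profile
        order = np.argsort(t)
        t, z = t[order], on_plane[order, 2]
        grid = np.linspace(t.min(), t.max(), n_samples)
        return np.interp(grid, t, z)                                # fixed-length depth curve

    def profile_distance(p1, p2):
        return float(np.mean((p1 - p2) ** 2))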


Conference on Industrial Electronics and Applications | 2011

Registration method between ToF and color cameras for face recognition

Joongrock Kim; Sangheon Park; Soo-Yeon Kim; Sangyoun Lee

Since 3D data contains depth information that 2D images lack, it can be used in various research areas and applications such as robot vision, car navigation and human-computer interaction. In particular, 3D face recognition is robust against facial pose and lighting variation because it can use 3D face shape information such as facial profiles, curvature and depth maps. However, it is difficult to acquire 3D face data in real time, since conventional 3D data acquisition systems such as laser scanners or structured-light systems capture data very slowly. Recently, Time-of-Flight (ToF) cameras, which provide full-range distance data at high frame rates, have come into the spotlight as an alternative to conventional 3D acquisition systems. A ToF camera provides distance images and gray-scale images simultaneously in real time; however, it suffers from low resolution and produces gray-scale rather than color images. In this paper, a registration method between a color camera and a ToF camera is presented to find the precise color pixel corresponding to each range measurement from the ToF camera.
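
After calibration, the registration amounts to projecting each ToF range measurement into the color image; the sketch below assumes the rotation R, translation t and color-camera intrinsics K are already known from a standard stereo calibration, which is not shown.

    import numpy as np

    # Lookup step after calibration: R, t map ToF-camera coordinates into the
    # color-camera frame, and K holds the color camera intrinsics; all three are
    # assumed to come from a standard stereo calibration (not shown here).
    def tof_point_to_color_pixel(p_tof, R, t, K):
        p_color = R @ p_tof + t                      # into the color camera frame
        if p_color[2] <= 0:
            return None                              # behind the color camera
        u, v, w = K @ p_color
        return int(round(u / w)), int(round(v / w))  # perspective projection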
