Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Junyeong Choi is active.

Publication


Featured research published by Junyeong Choi.


IEEE Transactions on Consumer Electronics | 2009

Interactive e-learning system using pattern recognition and augmented reality

Sang Hwa Lee; Junyeong Choi; Jong-Il Park

This paper proposes an interactive e-learning system using pattern recognition and augmented reality. The goal of the proposed system is to provide students with realistic audio-visual contents while they are learning. The proposed e-learning system consists of image recognition, color and polka-dot pattern recognition, and an augmented reality engine with audio-visual contents. When the web camera on a PC captures the current page of a textbook, the e-learning system first identifies the images on the page and augments audio-visual contents on the monitor. For interactive learning, the proposed e-learning system exploits color-band or polka-dot markers attached to the end of a finger. These markers act like a mouse cursor to indicate a position in the textbook image, and appropriate interactive audio-visual contents are augmented when the marker is located on predefined image objects in the textbook. The proposed e-learning system was applied to educational courses in an elementary school, and we obtained satisfactory results in real applications. We expect that the proposed e-learning system will become popular once educational contents and scenarios are sufficiently provided.
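To make the marker-as-cursor idea concrete, here is a minimal Python/OpenCV sketch; it is not the authors' implementation, and the HSV marker range and content-region coordinates are placeholder assumptions. It thresholds the marker color in a webcam frame, takes the centroid of the mask as the cursor position, and tests it against a predefined image object on the page.

import cv2
import numpy as np

# Assumed HSV range for a colored finger marker; the paper's actual
# color-band/polka-dot detection is not specified at this level of detail.
MARKER_LO = np.array([100, 120, 80])
MARKER_HI = np.array([130, 255, 255])

# Hypothetical bounding box (x0, y0, x1, y1) of one predefined image object.
CONTENT_REGION = (200, 150, 400, 300)

def marker_cursor(frame_bgr):
    """Return the (x, y) centroid of the marker mask, or None if absent."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, MARKER_LO, MARKER_HI)
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def cursor_on_content(pt):
    """True when the marker cursor lies on the predefined image object."""
    x0, y0, x1, y1 = CONTENT_REGION
    return pt is not None and x0 <= pt[0] <= x1 and y0 <= pt[1] <= y1

In a full system, a hit reported by cursor_on_content would trigger the corresponding interactive audio-visual content.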


International Conference on Image Processing | 2011

Hand shape recognition using distance transform and shape decomposition

Junyeong Choi; Hanhoon Park; Jong-Il Park

Hand shape is a natural and human-friendly interface for human-computer interaction. This paper proposes a real-time, 2D vision-based hand shape recognition method. The method is robust to hand pose changes because the hand pose is first recognized using distance transform, principal component analysis (PCA), and histogram analysis, and then neutralized. In addition, a context-based recognition method using shape decomposition can effectively recognize subtle changes of the fingers. The method worked at 44.8 fps and achieved an average recognition rate of 83% in an experiment with 800 images covering 5 hand shapes and 16 hand poses.
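The pose-neutralization step can be sketched as follows. This is one plausible reading of the abstract, assuming a binary hand silhouette as input, not the paper's exact algorithm: the distance-transform maximum yields a palm center and radius, and PCA over the silhouette pixels yields the principal axis used to rotate the hand into a canonical orientation.

import cv2
import numpy as np

def palm_center_and_orientation(hand_mask):
    """hand_mask: uint8 binary silhouette (255 = hand).

    Returns (center, radius, angle_deg): the deepest interior point of the
    silhouette, its distance to the boundary, and the direction of the
    principal axis of the mask pixels.
    """
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    _, radius, _, center = cv2.minMaxLoc(dist)    # max value and its location
    ys, xs = np.nonzero(hand_mask)
    pts = np.column_stack([xs, ys]).astype(np.float64)
    pts -= pts.mean(axis=0)                       # center the points before PCA
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    major = eigvecs[:, np.argmax(eigvals)]        # principal axis of the mask
    angle = float(np.degrees(np.arctan2(major[1], major[0])))
    return center, radius, angle

Rotating the silhouette by -angle around the palm center gives a pose-neutral hand on which shape decomposition can then operate.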


Virtual Reality Continuum and Its Applications in Industry | 2009

Robust hand detection for augmented reality interface

Junyeong Choi; Byung-Kuk Seo; Jong-Il Park

For interactive augmented reality, vision-based and hand-gesture-based interfaces are most desirable because they are natural and human-friendly. However, detecting hands and recognizing hand gestures against a cluttered background is still challenging; if the background includes a large skin-colored region, the problem becomes even more difficult. In this paper, we focus on detecting a hand reliably and propose an effective method. Our method is based on the assumption that the hand-forearm region (a hand and part of the forearm) has different brightness from other skin-colored regions. Specifically, we first segment the hand-forearm region from other skin-colored regions based on the brightness difference, which is represented by edges in this paper. Then, we extract the hand region from the hand-forearm region by detecting a feature point that indicates the wrist. Finally, we extract the hand using brightness-based segmentation that differs slightly from the hand-forearm region detection. We verify the effectiveness of our method by implementing a simple hand gesture interface based on it and applying it to augmented reality applications.
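A loose sketch of the brightness-edge idea, with assumed YCrCb skin thresholds and Canny parameters (the abstract does not state exact values): edges are subtracted from the skin mask so that the hand-forearm region becomes its own connected component, and the largest component is kept.

import cv2
import numpy as np

# Assumed YCrCb skin bounds, commonly used for skin segmentation.
SKIN_LO = np.array([0, 133, 77], np.uint8)
SKIN_HI = np.array([255, 173, 127], np.uint8)

def hand_forearm_mask(frame_bgr):
    """Skin-color mask cut along strong brightness edges; returns the
    largest remaining skin component as the hand-forearm candidate."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.dilate(cv2.Canny(gray, 60, 150), np.ones((3, 3), np.uint8))
    skin[edges > 0] = 0                     # disconnect along brightness edges
    n, labels, stats, _ = cv2.connectedComponentsWithStats(skin)
    if n < 2:
        return None
    biggest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (labels == biggest).astype(np.uint8) * 255

The subsequent wrist-point detection and the final brightness-based hand segmentation described in the abstract are omitted here.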


Journal of Broadcast Engineering | 2013

A Study on Hand Region Detection for Kinect-Based Hand Shape Recognition

Hanhoon Park; Junyeong Choi; Jong-Il Park; Kwang-Seok Moon

Hand shape recognition is a fundamental technique for implementing natural human-computer interaction. In this paper, we discuss a method for effectively detecting the hand region in Kinect-based hand shape recognition. Since the Kinect is a camera that can capture color images and infrared (depth) images together, both can be exploited to detect the hand region: a hand region can be found either from pixels with skin colors or from pixels at a specific depth. Therefore, after analyzing the performance of each cue, we need a way to properly combine both so as to cleanly extract the silhouette of the hand region, because the hand shape recognition rate depends on the fineness of the detected silhouette. Finally, by comparing the hand shape recognition rates obtained with different hand region detection methods in general environments, we propose a high-performance hand region detection method.
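The combination the abstract argues for can be as simple as intersecting the two cues. The sketch below assumes a 16-bit depth map in millimeters and illustrative skin and depth thresholds, not values from the paper.

import cv2

def hand_mask_color_depth(bgr, depth_mm, depth_near=400, depth_far=800):
    """Intersect a skin-color mask with a depth-band mask, then close
    small holes so the hand silhouette comes out clean."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    band = cv2.inRange(depth_mm, depth_near, depth_far)
    mask = cv2.bitwise_and(skin, band)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

The AND keeps only pixels that are both skin-colored and at hand depth, suppressing skin-colored background (where the color cue alone fails) as well as non-hand objects at the same depth (where the depth cue alone fails).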


Optical Engineering | 2013

iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones

Junyeong Choi; Jungsik Park; Hanhoon Park; Jong-Il Park

Abstract. The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand-gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. Virtual contents are faithfully rendered on the user's palm through palm pose estimation and react to hand and finger movements, which are recognized by hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual-core processor and 1 GB RAM.
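One way to picture the rendering side of palm pose estimation is as a homography that pins content to the palm. The sketch below is an illustration under that simplification, not the paper's pose estimator; palm_pts stands for four palm reference points from a hypothetical tracker.

import cv2
import numpy as np

def render_on_palm(frame, content, palm_pts):
    """Warp 'content' onto 'frame' so it appears attached to the palm.

    palm_pts: four image points outlining the palm area, in the order
    top-left, top-right, bottom-right, bottom-left.
    """
    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H, _ = cv2.findHomography(src, np.float32(palm_pts))
    size = (frame.shape[1], frame.shape[0])
    warped = cv2.warpPerspective(content, H, size)
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, size)
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]        # composite content over the palm
    return out

Re-estimating palm_pts every frame keeps the content locked to the moving palm, while hand shape recognition supplies the interaction events.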


International Symposium on Mixed and Augmented Reality | 2011

Bare-hand-based augmented reality interface on mobile phone

Junyeong Choi; Hanhoon Park; Jungsik Park; Jong-Il Park

This paper proposes an augmented reality interface that provides natural hand-based interaction with virtual objects on mobile phones. Assume that the user holds a mobile phone in one hand and sees the other hand through the mobile phone's camera. Then, a virtual object is rendered on his/her palm and reacts to hand and finger movements. Since the proposed interface does not require any additional sensors or markers, one can freely interact with the virtual object anytime and anywhere. The proposed interface worked at 5 fps on a mobile phone (a Galaxy S2 with a dual-core processor).


International Symposium on Robotics | 2013

RGB-D camera-based hand shape recognition for human-robot interaction

Junyeong Choi; Byung-Kuk Seo; Daeseon Lee; Hanhoon Park; Jong-Il Park

The hand is the most popular tool for human-robot interaction. This paper therefore proposes a Kinect-based hand shape recognition method for human-robot interaction. The Kinect can capture color and depth images simultaneously, and its SDK provides functions to track the human skeleton, so the proposed method can detect hands robustly by using the skeleton and depth information. As a result, it can recognize various hand shapes based on contour analysis with a high recognition rate (95% on average) and works in real time (over 30 frames/sec).
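A sketch of the skeleton-plus-depth pipeline: crop a window around the tracked hand joint, keep pixels within a depth band around the hand, and classify the silhouette contour against templates. Hu-moment matching (cv2.matchShapes) stands in for the paper's contour analysis, and the window and band sizes are assumptions.

import cv2
import numpy as np

def hand_shape_label(depth_mm, hand_xy, templates, band=80, win=120):
    """depth_mm: 16-bit depth map; hand_xy: hand joint from the skeleton
    tracker; templates: dict mapping shape names to template contours.
    Returns the best-matching shape name, or None."""
    x, y = hand_xy
    roi = depth_mm[max(0, y - win):y + win, max(0, x - win):x + win]
    d = int(depth_mm[y, x])                       # depth at the hand joint
    mask = cv2.inRange(roi, max(1, d - band), d + band)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)     # largest blob = hand
    scores = {name: cv2.matchShapes(hand, tpl, cv2.CONTOURS_MATCH_I1, 0)
              for name, tpl in templates.items()}
    return min(scores, key=scores.get)            # lowest distance wins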


Virtual Reality Continuum and Its Applications in Industry | 2011

User-created marker based on character recognition for intuitive augmented reality interaction

Seiheui Han; Eun Joo Rhee; Junyeong Choi; Jong-Il Park

This paper proposes a novel concept of markers with alphabet combinations for augmented reality (AR) applications. Compared to traditional markers with square patterns, the proposed markers, composed of alphabet letters, have several advantages for interaction: they allow users to interact intuitively by combining the lettered markers to form words. The marker recognition system recognizes the markers quickly and effectively, operating at a robust 58.8 fps. We verify the effectiveness of the proposed markers by implementing an AR application.
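The word-composition step is easy to make concrete. Assuming an upstream character recognizer has already labeled each physical marker with its letter and image position (both hypothetical here), markers are read left to right and the assembled word keys the AR content.

def compose_word(detections):
    """detections: list of (letter, x_center) pairs from the recognizer.
    Markers are sorted left to right, mimicking how users line them up."""
    return "".join(ch for ch, _ in sorted(detections, key=lambda d: d[1]))

# Hypothetical content table keyed by assembled words.
AR_CONTENT = {"CAT": "cat_model.obj", "DOG": "dog_model.obj"}

word = compose_word([("T", 320.0), ("C", 100.5), ("A", 210.2)])  # -> "CAT"
asset = AR_CONTENT.get(word)   # content to augment, or None for unknown words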


Journal of Broadcast Engineering | 2011

Implementation of Hand-Gesture-Based Augmented Reality Interface on Mobile Phone

Junyeong Choi; Hanhoon Park; Jungsik Park; Jong-Il Park

With the recent advances in the performance of mobile phones, many effective interfaces for them have been proposed. This paper implements a hand-gesture- and vision-based interface on a mobile phone. It assumes a natural interaction scenario in which the user holds a mobile phone in one hand and sees the other hand's palm through the mobile phone's camera. Then, a virtual object is rendered on his/her palm and reacts to hand and finger movements. Since the implemented interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with the virtual object anytime and anywhere without any training. The implemented interface worked at 5 fps on a mobile phone (a Galaxy S2 with a dual-core processor).


Optical Engineering | 2016

Twenty-one degrees of freedom model based hand pose tracking using a monocular RGB camera

Junyeong Choi; Jong-Il Park; Hanhoon Park

Abstract. It is difficult to visually track a user's hand because of the many degrees of freedom (DOF) a hand has. For this reason, most model-based hand pose tracking methods have relied on multiview images or RGB-D images. This paper proposes a model-based method that accurately tracks three-dimensional hand poses using monocular RGB images in real time. The main idea of the proposed method is to reduce hand tracking ambiguity by adopting a step-by-step estimation scheme consisting of three steps performed in consecutive order: palm pose estimation, finger yaw motion estimation, and finger pitch motion estimation. In addition, this paper proposes highly effective algorithms for each step. Under the assumption that a human hand can be considered an assemblage of articulated planes, the proposed method uses a piecewise-planar hand model that enables hand model regeneration, which modifies the hand model to fit the current user's hand and improves the accuracy of the hand pose estimation results. Above all, the proposed method can operate in real time using only CPU-based processing; consequently, it can be applied to various platforms, including egocentric vision devices such as wearable glasses. The results of several experiments verify the efficiency and accuracy of the proposed method.
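To make the step-by-step decomposition concrete, here is a structural sketch of one plausible 21-DOF parameterization: 6 palm DOF, 1 yaw per finger, and 2 pitch angles per finger (6 + 5 + 10 = 21; an assumption, since the abstract does not give the exact split), with the three estimation stages left as stubs.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class HandPose21:
    """21 parameters: global palm pose plus per-finger yaw and pitch
    (the distal joint assumed coupled to the middle one)."""
    palm_rotation: np.ndarray = field(default_factory=lambda: np.zeros(3))
    palm_translation: np.ndarray = field(default_factory=lambda: np.zeros(3))
    finger_yaw: np.ndarray = field(default_factory=lambda: np.zeros(5))
    finger_pitch: np.ndarray = field(default_factory=lambda: np.zeros((5, 2)))

def estimate_palm_pose(frame, pose):
    """Step 1 (stub): fit the 6-DOF palm pose of the piecewise-planar model."""

def estimate_finger_yaw(frame, pose):
    """Step 2 (stub): estimate per-finger yaw with the palm pose held fixed."""

def estimate_finger_pitch(frame, pose):
    """Step 3 (stub): estimate per-finger pitch with palm and yaw held fixed."""

def track_frame(frame, pose):
    """Each stage fixes what the previous one estimated, shrinking the
    search space and reducing monocular tracking ambiguity."""
    estimate_palm_pose(frame, pose)
    estimate_finger_yaw(frame, pose)
    estimate_finger_pitch(frame, pose)
    return pose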

Collaboration


Dive into Junyeong Choi's collaboration.

Top Co-Authors

Hanhoon Park

Pukyong National University


Us Jung

Sacred Heart Hospital
