Jung-Bae Kim
KAIST
Publication
Featured research published by Jung-Bae Kim.
ieee international conference on fuzzy systems | 2002
Jung-Bae Kim; Kwang-Hyun Park; Won-Chul Bang; Z. Zenn Bien
Reports early results of our study on continuous Korean sign language (KSL) recognition using color vision. In recognizing gestural words such as signs, it is very difficult to segment a continuous signing stream into individual sign words because the patterns are complicated and diverse. To address this problem, we decompose KSL into 18 hand-motion classes according to their patterns and represent sign words as combinations of these hand motions. By observing the speed of hand motion and its change, and by using fuzzy partitioning and a state automaton, we reject unintentional movements such as preparatory motions and meaningless transitions between sign words. To recognize the 18 hand-motion classes, we adopt hidden Markov models. With these methods, we recognize 15 KSL sentences and obtain a 94% recognition rate.
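As a rough illustration of the speed-based segmentation step described above, the sketch below fuzzy-thresholds hand speed to split a continuous trajectory into candidate sign words. The membership breakpoints, frame rate, and cut level are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuzzy_moving(speed, low=0.5, high=2.0):
    """Ramp membership of 'hand is moving' in [0, 1] (assumed breakpoints)."""
    return np.clip((speed - low) / (high - low), 0.0, 1.0)

def segment_signs(positions, fps=30, cut=0.5):
    """Split a trajectory wherever the 'moving' membership drops below `cut`.

    positions: (T, 2) array of hand coordinates per frame.
    Returns a list of (start, end) frame-index pairs for candidate signs.
    """
    # per-frame hand speed from finite differences of position
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
    moving = fuzzy_moving(speed) >= cut
    segments, start = [], None
    for t, m in enumerate(moving):
        if m and start is None:
            start = t                       # motion begins
        elif not m and start is not None:
            segments.append((start, t))     # motion ends
            start = None
    if start is not None:
        segments.append((start, len(moving)))
    return segments
```

Each extracted segment would then be scored against the 18 per-class hidden Markov models and assigned to the class with the highest likelihood.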
medical image computing and computer assisted intervention | 2013
Zhihui Hao; Qiang Wang; Xiaotao Wang; Jung-Bae Kim; Youngkyoo Hwang; Baek Hwan Cho; Ping Guo; Won Ki Lee
A key problem in many medical image segmentation tasks is combining knowledge from different levels. We propose a novel scheme that embeds detected regions into a superpixel-based graphical model, allowing us to fully leverage various image cues for ultrasound lesion segmentation. Region features are mapped into a higher-dimensional space via a boosted model so that they can be better controlled. Parameters for the regions, the superpixels, and a new affinity term are learned simultaneously within a structured-learning framework. Experiments on a breast ultrasound image data set confirm the effectiveness of the proposed approach as well as of its two novel modules.
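To make the superpixel layer concrete, here is a minimal sketch that builds an adjacency graph over SLIC superpixels, the kind of structure a graphical model like the one above would operate on. The segment count, compactness value, and use of scikit-image's `slic` are our assumptions, not details from the paper.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_graph(image, n_segments=200):
    """Return superpixel labels and the edges between adjacent superpixels."""
    # channel_axis=None tells skimage the ultrasound image is single-channel
    labels = slic(image, n_segments=n_segments, compactness=10,
                  channel_axis=None)
    edges = set()
    # neighboring pixels with different labels define graph edges
    down = labels[:-1, :] != labels[1:, :]
    right = labels[:, :-1] != labels[:, 1:]
    for a, b in zip(labels[:-1, :][down], labels[1:, :][down]):
        edges.add((min(a, b), max(a, b)))
    for a, b in zip(labels[:, :-1][right], labels[:, 1:][right]):
        edges.add((min(a, b), max(a, b)))
    return labels, sorted(edges)
```

Unary terms from the boosted region model and pairwise affinity terms would then be attached to the nodes and edges of this graph.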
systems man and cybernetics | 2000
Jong-Sung Kim; Kwang-Hyun Park; Jung-Bae Kim; Jun-Hyeong Do; Kyung-Joon Song; Zeungnam Bien
The authors present a real-time hand gesture recognition system that controls the motion of a human avatar in a virtual environment based on predefined dynamic hand gestures. We first note that conventional recognition systems must distinguish the start and end of a motion and spend considerable time on learning. To resolve these problems, we propose a recognition method based on intelligent techniques. We also present an obstacle-free path generation method that allows the avatar to navigate a virtual environment while avoiding obstacles.
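The abstract does not detail the path generation algorithm, so as a hypothetical stand-in, here is a simple breadth-first search over an occupancy grid that returns an obstacle-free path for the avatar.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """grid: 2-D list, 0 = free, 1 = obstacle; start/goal: (row, col) cells."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # walk predecessor links back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no obstacle-free path exists
```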
international conference on consumer electronics | 2014
Youngkyoo Hwang; Young-Taek Oh; Jung-Bae Kim; Won-chul Bang
Ultrasound imaging devices are commonly used for radio-frequency ablation (RFA) of cancers and for biopsies of the liver, prostate, thyroid, and other organs. However, it is very hard to discriminate a small tumor from its surrounding tissue when the tumor is less than 1 cm in diameter, so physicians commonly refer to previously acquired diagnostic MR or CT images during these procedures. Some medical device manufacturers have released ultrasound imaging systems with fusion imaging technology that displays the live ultrasound image together with the same patient's MR or CT images. These systems, however, require many manual inputs: the MR and ultrasound coordinate systems are registered manually, with the physician specifying several corresponding points on both the MR and the ultrasound images. In this paper, we introduce a fusion imaging system for liver MR-to-ultrasound registration, the most common setting for RFA and biopsy, that requires only a single click of interaction.
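For background, the manual registration that the paper automates typically reduces to point-based rigid alignment. The sketch below shows the standard Kabsch/SVD solution given corresponding MR and ultrasound landmarks; supplying those correspondences is what the one-click system automates, and this is a generic illustration rather than the paper's method.

```python
import numpy as np

def rigid_register(mr_pts, us_pts):
    """Return rotation R and translation t mapping mr_pts onto us_pts.

    mr_pts, us_pts: (N, 3) arrays of corresponding 3-D landmarks.
    """
    mr_c, us_c = mr_pts.mean(axis=0), us_pts.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (mr_pts - mr_c).T @ (us_pts - us_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = us_c - R @ mr_c
    return R, t
```

With R and t in hand, any point in MR coordinates can be mapped into the live ultrasound frame for side-by-side fusion display.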
international conference on consumer electronics | 2013
Jung-Bae Kim; Youngkyoo Hwang; Won-chul Bang; Heesae Lee; James D. K. Kim; Chang-Yeong Kim
This paper presents a novel technology that realistically clones a user's facial expression onto his or her avatar in a 3D virtual world using only one color camera on a smart TV. Doing so requires the user's 3D head movement and the 3D positions of facial feature points in real time. We propose two novel approaches to achieve this. First, we use personalized 3D and 2D facial expression models to handle head movement and a variety of expressions. Second, we use a facial muscle model to generate natural motion for facial feature points on the cheeks and forehead, which are difficult to track with a camera. Experimental results demonstrate that the proposed method is an efficient technique for realistic 3D facial expression cloning.
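As a greatly simplified sketch of expression cloning, a linear blendshape model rather than the paper's muscle model, the avatar mesh can be posed as a neutral face plus weighted expression offsets estimated from the tracked feature points.

```python
import numpy as np

def clone_expression(neutral, basis, weights):
    """Pose an avatar mesh from estimated expression weights.

    neutral: (V, 3) neutral-face vertex positions.
    basis:   (K, V, 3) per-expression vertex offsets (blendshapes).
    weights: (K,) expression weights fitted to the tracked feature points.
    """
    # weighted sum of blendshape offsets added to the neutral face
    return neutral + np.tensordot(weights, basis, axes=1)
```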
Archive | 2003
Z. Zenn Bien; Jun-Hyeong Do; Jung-Bae Kim; Dimitar Stefanov; Kwang-Hyun Park
International Journal of Assistive Robotics and Mechatronics | 2002
Jun-Hyeong Do; Jung-Bae Kim; Kwang-Hyun Park; Won-Chul Bang; Z. Zenn Bien
Proceedings of the International Conference of the Institute of Control, Robotics and Systems (ICROS) | 2001
Jung-Bae Kim; Kwang-Hyun Park; Won-Chul Bang; Jong-Sung Kim; Z. Zenn Bien
IEICE Transactions on Information and Systems | 2004
Jung-Bae Kim; Zeungnam Bien
Archive | 2012
Youngkyoo Hwang; Jung-Bae Kim; Yong-Sun Kim; Won-chul Bang; Do-kyoon Kim