Youngkyoo Hwang
Samsung
Publication
Featured research published by Youngkyoo Hwang.
symposium on computer animation | 2011
Taehyun Rhee; Youngkyoo Hwang; James D. K. Kim; Changyeong Kim
This paper describes a complete pipeline of a practical system for producing real-time facial expressions of a 3D virtual avatar driven by an actor's live performance. The system handles the practical challenges arising from markerless expression capture with a single conventional video camera. For robust tracking, a localized algorithm constrained by belief propagation is applied to the upper face, and an appearance-matching technique using a parameterized generic face model is exploited for lower-face and head-pose tracking. The captured expression features are then transferred to high-dimensional 3D animation controls using our facial expression space, a structure-preserving map between two algebraic structures. The transferred animation controls drive the facial animation of a 3D avatar while optimizing the smoothness of the face mesh. An example-based face deformation technique produces non-linear local detail deformations on the avatar that are not captured in the movement of the animation controls.
multimedia signal processing | 2010
Seungju Han; Jae-Joon Han; Youngkyoo Hwang; Jung-Bae Kim; Won-chul Bang; James D. K. Kim; Chang-Yeong Kim
Online networked virtual worlds such as Second Life, World of Warcraft and Lineage have become increasingly popular. A life-scale virtual-world presentation and intuitive interaction between users and virtual worlds would provide a more natural and immersive experience. Emerging interaction technologies that sense users' facial expressions and motion, as well as the real-world environment, can provide a strong connection between the two. For the wide acceptance and use of virtual worlds, the various types of novel interaction devices need a unified interaction format between the real world and the virtual world, as well as interoperability among virtual worlds. MPEG-V Media Context and Control (ISO/IEC 23005) standardizes such connecting information. This paper provides an overview and a usage example of MPEG-V from the real world to the virtual world (R2V), covering interfaces for controlling avatars and virtual objects in the virtual world with real-world devices. In particular, we investigate how the MPEG-V framework can be applied to the facial animation of an avatar in various types of virtual worlds.
international ultrasonics symposium | 2012
Youngkyoo Hwang; Jung-Bae Kim; Won-chul Bang; James D. K. Kim; Chang-Yeong Kim; Heesae Lee
Respiratory motion monitoring is important for tumor tracking in radiotherapy and imaging. In this paper, we propose a system that directly measures the motion of the organ itself using only ultrasound images. Our system robustly locates the diaphragm with a fitting technique, even when the diaphragm is blurred in the ultrasound images. In addition, it robustly extracts the respiratory signal by automatically selecting the best area for monitoring respiration across the whole image sequence.
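The abstract says the diaphragm is located "by a fitting technique" without naming the model. A minimal sketch, assuming a parabolic curve fit to candidate diaphragm points by least squares (both the parabolic model and the function name are assumptions for illustration, not the paper's method):

```python
import numpy as np

def fit_diaphragm(points):
    """Fit a parabola y = a*x^2 + b*x + c to candidate diaphragm points
    by linear least squares; returns (a, b, c). A fit like this can bridge
    blurred or missing segments of the diaphragm boundary."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.stack([x ** 2, x, np.ones_like(x)], axis=1)  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs
```

Because the fit uses all candidate points jointly, a few blurred or dropped-out points do not prevent recovering a smooth curve.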
Proceedings of SPIE | 2012
Youngkyoo Hwang; Jung-Bae Kim; Yong Sun Kim; Won-chul Bang; James D. K. Kim; Chang-Yeong Kim
Respiratory motion tracking has been an issue for MR/CT imaging and for noninvasive treatments such as HIFU and radiotherapy when these imaging or therapy technologies are applied to moving organs such as the liver, kidney or pancreas. Currently, bulky and burdensome devices placed externally on the skin are used to estimate the respiratory motion of an organ; they estimate organ motion indirectly from skin motion rather than from the organ itself. In this paper, we propose a system that directly measures the motion of the organ itself using only ultrasound images. Our system automatically selects a window in the image sequence, called the feature window, which can measure respiratory motion robustly even in noisy ultrasound images. The organ's displacement in each ultrasound image is calculated directly through the feature window. The system is convenient to use since it exploits a conventional ultrasound probe. We show that the proposed method robustly extracts the respiratory motion signal regardless of which reference frame is selected, making it superior to image-based methods such as Mutual Information (MI) or Correlation Coefficient (CC), which are sensitive to the choice of reference frame. Furthermore, the proposed method gives clear information about the phase of the respiratory cycle, such as inspiration versus expiration, since it calculates the organ's actual displacement rather than a similarity measure like MI or CC.
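The feature-window idea can be sketched minimally: choose the window whose intensity varies most over time (a simple proxy for "moves with respiration"; the abstract does not give the actual selection criterion) and track its vertical displacement by template matching. The SSD matching criterion and all names below are assumptions for illustration:

```python
import numpy as np

def select_feature_window(frames, h=16, w=16, stride=16):
    """Pick the window with the largest mean temporal intensity variance,
    a crude stand-in for the paper's feature-window selection."""
    stack = np.stack([f.astype(float) for f in frames])
    H, W = frames[0].shape
    best, best_win = -1.0, None
    for y in range(0, H - h + 1, stride):
        for x in range(0, W - w + 1, stride):
            v = stack[:, y:y + h, x:x + w].var(axis=0).mean()
            if v > best:
                best, best_win = v, (y, x, h, w)
    return best_win

def track_displacement(frames, window, search=10):
    """Track the vertical displacement of the feature window across frames
    by minimizing the sum of squared differences against frame 0."""
    y0, x0, h, w = window
    template = frames[0][y0:y0 + h, x0:x0 + w].astype(float)
    signal = []
    for f in frames:
        best, best_dy = None, 0
        for dy in range(-search, search + 1):
            y = y0 + dy
            if y < 0 or y + h > f.shape[0]:
                continue
            ssd = np.sum((f[y:y + h, x0:x0 + w].astype(float) - template) ** 2)
            if best is None or ssd < best:
                best, best_dy = ssd, dy
        signal.append(best_dy)
    return np.array(signal)
```

Unlike MI or CC curves, the tracked signal is a displacement in pixels, so its sign directly distinguishes inspiration from expiration.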
medical image computing and computer assisted intervention | 2013
Zhihui Hao; Qiang Wang; Xiaotao Wang; Jung-Bae Kim; Youngkyoo Hwang; Baek Hwan Cho; Ping Guo; Won Ki Lee
A key problem for many medical image segmentation tasks is combining knowledge at different levels. We propose a novel scheme that embeds detected regions into a superpixel-based graphical model, achieving full leverage of various image cues for ultrasound lesion segmentation. Region features are mapped into a higher-dimensional space via a boosted model so that they can be well controlled. Parameters for regions, superpixels and a new affinity term are learned simultaneously within a structured learning framework. Experiments on a breast ultrasound image dataset confirm the effectiveness of the proposed approach as well as of our two novel modules.
Proceedings of SPIE | 2013
Xuetao Feng; Xiaolu Shen; Qiang Wang; Jung-Bae Kim; Zhihui Hao; Youngkyoo Hwang; Won-chul Bang; James D. K. Kim; Jiyeun Kim
Automatic segmentation of anatomical structures is crucial for computer-aided diagnosis and image-guided online treatment. In this paper, we present a novel approach for fully automatic segmentation of all anatomical structures of a target liver organ in a coherent framework. First, all regional anatomical structures, such as vessels, tumors, the diaphragm and the liver parenchyma, are detected simultaneously using random forest classifiers; they share the same feature set and classification procedure. Second, an efficient region segmentation algorithm obtains the precise shapes of these regional structures; it is based on a level set with the proposed active-set evolution and multiple-feature handling, achieving a 10x speedup over existing algorithms. Third, the liver boundary curve is extracted via a graph-based model, into which the segmentation results of the regional structures are incorporated as constraints to improve robustness and accuracy. Experiments were carried out on an ultrasound image dataset of 942 images captured under liver motion and deformation from a number of different views. Quantitative results demonstrate the efficiency and effectiveness of the proposed algorithm.
international conference on consumer electronics | 2013
Xiaolu Shen; Xuetao Feng; Jung-Bae Kim; Hui Zhang; Youngkyoo Hwang; Jiyeun Kim
Facial motion tracking is a challenging task because of highly flexible head poses and facial expressions. An extensible tracking framework is proposed in this paper. Within the framework, appropriate models are selected according to the requirements and restrictions of the application, and different trackers can be constructed to handle different tasks. Experimental results show that our tracker outperforms existing commercial software.
Proceedings of SPIE | 2013
Young-Taek Oh; Youngkyoo Hwang; Jung-Bae Kim; Won-chul Bang; James D. K. Kim; Chang Yeong Kim
We present a new method for patient-specific liver deformation modeling for tumor tracking. Our method focuses on deforming the liver's two main blood vessels, the hepatic and portal veins, to utilize them as features. A novel centerline editing algorithm based on ellipse fitting is introduced for vessel deformation. A centerline-based blood vessel model and various interpolation methods are often used for generating a deformed model at a specific time t. However, this may introduce artifacts when the models used in the interpolation are inconsistent; one of the main reasons for this inconsistency is that the locations of the bifurcation points differ between images. To solve this problem, our method generates a base model from one of the patient's CT images. Next, we apply a rigid iterative closest point (ICP) method to align the base model with the centerlines of the other images. Because the transformation is rigid, the length of each vessel's centerline is preserved, while some parts of the centerline deviate slightly from the centerlines of the other images. We resolve this mismatch using our centerline editing algorithm. Finally, we interpolate the three deformed models of the liver, blood vessels and tumor using quadratic Bézier curves. We demonstrate the effectiveness of the proposed approach on real patient data.
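The final interpolation step uses quadratic Bézier curves; for three deformed models this is the standard formula B(t) = (1-t)² p₀ + 2(1-t)t p₁ + t² p₂, with the middle model acting as the control point. A minimal sketch (applying it vertex-wise to whole models is an assumption about the paper's usage):

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, t):
    """Quadratic Bezier interpolation of vertex positions:
    B(t) = (1-t)^2 * p0 + 2*(1-t)*t * p1 + t^2 * p2.
    p0, p1, p2 may be single points or whole (N, 3) vertex arrays;
    p1 acts as the control point and is generally not interpolated through."""
    t = float(t)
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2
```

At t = 0 and t = 1 the curve reproduces the first and last deformed models exactly, while intermediate times blend smoothly through the influence of the middle model.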
international ultrasonics symposium | 2012
Jung-Bae Kim; Youngkyoo Hwang; Won-chul Bang; James D. K. Kim; Chang-Yeong Kim
Image-guided therapy is a treatment technology that uses medical images to understand anatomical information about the human body and to make a treatment plan. Such treatments include radio-frequency (RF) ablation, high-intensity focused ultrasound (HIFU), the CyberKnife, surgical robots, etc. In particular, ultrasound (US) guided HIFU therapy has attracted much attention since it treats internal organs without delivering any radiation dose to the body. However, it is still challenging to track the 3D position of tumors in a moving organ from 2D US videos, since respiration changes the position and shape of internal organs such as the liver, kidney or pancreas. This paper proposes a moving-organ tracking method based on a 3D organ model built from MR or CT images captured before treatment. The 3D organ model is registered to the US images, and the positions of the organ and its tumors are tracked in the 2D US videos in real time. We apply the method to a liver phantom and achieve a processing time of 5.3 ms and an accuracy of 1.3 mm.
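The abstract does not detail the model-to-image registration algorithm. As an illustration only, a least-squares rigid alignment (the Kabsch/Procrustes solution, the same family of transform as the rigid ICP step in related liver-modeling work) between corresponding point sets can be sketched as follows; the function name and point-correspondence assumption are placeholders:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    i.e. dst ~ R @ src + t, via the Kabsch algorithm. Assumes the point
    correspondences are already known, which ICP would establish iteratively."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Once (R, t) is estimated for a frame, a tumor position known in model coordinates can be mapped into the ultrasound frame as `R @ p + t`.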
machine vision applications | 2014
Young-Taek Oh; Youngkyoo Hwang; Jung-Bae Kim; Won-chul Bang
Adaptive thresholding is a useful technique for document analysis. In medical image processing, it is also helpful for segmenting structures such as diaphragms or blood vessels. The technique sets a threshold using local information around each pixel, then binarizes the pixel according to that value. Although it is robust to changes in illumination, computing the thresholds takes a significant amount of time because all of the neighboring pixels must be summed. Integral images can alleviate this overhead; however, medical images such as ultrasound often come with image masks, and ordinary algorithms then cause artifacts. The main problem is that the shape of the summing area is not rectangular near the boundaries of the image mask: the threshold at the boundary of the mask is incorrect because masked-out pixels are also counted. Our key idea is to additionally compute the integral image of the image mask in order to count only the valid pixels. Our method is implemented on a GPU using CUDA, and experimental results show that our algorithm is 164 times faster than a naïve CPU averaging algorithm.
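The key idea (a second integral image over the mask, so each window's sum is divided by the number of valid pixels rather than the full window area) can be sketched on the CPU as follows. This is a reference sketch only, assuming a mean-based local threshold; the paper's implementation is in CUDA and its exact thresholding rule is not given in the abstract:

```python
import numpy as np

def integral(a):
    """Summed-area table with a zero top row and left column, so that
    box_sum below needs no boundary special cases."""
    s = np.zeros((a.shape[0] + 1, a.shape[1] + 1), dtype=np.int64)
    s[1:, 1:] = np.cumsum(np.cumsum(a, axis=0), axis=1)
    return s

def box_sum(s, y0, x0, y1, x1):
    """Sum of a[y0:y1, x0:x1] in O(1) from the integral image s."""
    return s[y1, x1] - s[y0, x1] - s[y1, x0] + s[y0, x0]

def masked_adaptive_threshold(img, mask, radius=3, bias=0):
    """Binarize img against the local mean of valid (mask != 0) pixels only.
    Two integral images are used: one over the masked intensities and one
    over the mask itself, so windows clipped by the mask boundary average
    over the correct pixel count instead of counting masked-out zeros."""
    m = (mask != 0)
    s_img = integral(np.where(m, img, 0).astype(np.int64))
    s_cnt = integral(m.astype(np.int64))
    H, W = img.shape
    out = np.zeros_like(m)
    for y in range(H):
        for x in range(W):
            if not m[y, x]:
                continue
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            n = box_sum(s_cnt, y0, x0, y1, x1)       # valid pixels in window
            mean = box_sum(s_img, y0, x0, y1, x1) / n
            out[y, x] = img[y, x] > mean + bias
    return out
```

Without the second integral image, a window straddling the mask boundary would divide by the full window area, biasing the local mean toward zero and producing the boundary artifacts described above.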