
Publication


Featured research published by Chankyu Park.


workshops on enabling technologies: infrastructure for collaborative enterprises | 2004

Transformation algorithms between BPEL4WS and BPML for the executable business process

Jinyoung Moon; Daeha Lee; Chankyu Park; Hyunkyu Cho

Web services are an emerging distributed computing technology that provides an interoperable means of integrating loosely coupled Web applications. The current basic Web services standards, SOAP, WSDL, and UDDI, are not sufficient to fully support complete business processes. To support the business process, several specifications related to Web services composition, such as BPEL4WS, WSCI, and BPML, have been suggested and developed in several standards bodies with the support of major vendors. Among them, BPEL4WS and BPML may be used to describe an executable business process used in an internal enterprise system. In this paper, we propose transformation algorithms between BPEL4WS and BPML to enhance interoperability. For example, a BPEL4WS implementation system may use the transformation algorithm from BPML into BPEL4WS in order to refer to a BPML-formatted executable business process. Conversely, the system may use the transformation algorithm from BPEL4WS into BPML in order to export a BPML-formatted document describing its BPEL4WS-based business process instance.
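The transformation idea can be sketched as a recursive element mapping. The tag table below is a hypothetical, simplified illustration; real BPEL4WS and BPML documents carry namespaces and far richer semantics than this toy handles:

```python
import xml.etree.ElementTree as ET

# Illustrative tag mapping only; real BPEL4WS/BPML have many more constructs.
BPEL_TO_BPML = {"process": "process", "sequence": "sequence",
                "invoke": "action", "receive": "action", "flow": "all"}

def bpel_to_bpml(elem):
    """Recursively map a (namespace-free) BPEL element tree to a BPML tree."""
    mapped = ET.Element(BPEL_TO_BPML.get(elem.tag, elem.tag), dict(elem.attrib))
    for child in elem:
        mapped.append(bpel_to_bpml(child))
    return mapped

bpel = ET.fromstring(
    "<process name='order'><sequence>"
    "<receive operation='submit'/><invoke operation='charge'/>"
    "</sequence></process>")
bpml = bpel_to_bpml(bpel)
print(ET.tostring(bpml, encoding="unicode"))
```

The reverse direction would invert the dictionary, which is exactly where the interesting cases arise: several BPEL tags may collapse onto one BPML tag, so a faithful round trip needs extra bookkeeping.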


systems, man and cybernetics | 2013

Multi-view Facial Expression Recognition Using Parametric Kernel Eigenspace Method Based on Class Features

Woo-han Yun; Dohyung Kim; Chankyu Park; Jaehong Kim

Automatic facial expression recognition is an important technique for interaction between humans and machines such as robots or computers. In particular, pose-invariant facial expression recognition is needed in an automatic facial expression system because frontal faces are not always visible in real situations. This paper introduces a multi-view method for recognizing facial expressions using a parametric kernel eigenspace method based on class features (pKEMC). We first describe pKEMC, which finds the manifold of data patterns of each class on a non-linear discriminant subspace for separating multiple classes. Then, we apply pKEMC to pose-invariant facial expression recognition. We also utilize a facial-component-based representation to improve robustness to pose variation. We validated our method on the Multi-PIE database. The results show that our method has high discrimination accuracy and provides an effective means of recognizing multi-view facial expressions.
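As a rough illustration of classifying by per-class eigenspaces, the sketch below builds a linear one-dimensional eigenspace per class (leading eigenvector via power iteration) and labels a probe by its reconstruction error against each class. The actual pKEMC method is kernel-based and parametric; this toy keeps only the "one eigenspace per class, classify by distance to it" idea:

```python
def principal_axis(points, iters=100):
    """Leading eigenvector of a class's covariance via power iteration
    (a linear stand-in for the kernel eigenspace used in pKEMC)."""
    mean = [sum(col) / len(points) for col in zip(*points)]
    centered = [[x - m for x, m in zip(p, mean)] for p in points]
    d = len(mean)
    cov = [[sum(p[i] * p[j] for p in centered) / len(points)
            for j in range(d)] for i in range(d)]
    v = [1.0] + [0.0] * (d - 1)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def residual(p, mean, axis):
    """Distance from p to the class's 1-D eigenspace (reconstruction error)."""
    c = [x - m for x, m in zip(p, mean)]
    t = sum(ci * ai for ci, ai in zip(c, axis))
    return sum((ci - t * ai) ** 2 for ci, ai in zip(c, axis)) ** 0.5

# Two toy "expression classes" with different dominant directions.
smile = [(0, 0), (1, 1), (2, 2), (3, 3)]
neutral = [(0, 0), (1, -1), (2, -2), (3, -3)]
models = {c: principal_axis(d) for c, d in (("smile", smile), ("neutral", neutral))}
probe = (2.0, 1.8)  # nearly along the smile direction
label = min(models, key=lambda c: residual(probe, *models[c]))
print(label)
```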


KSII Transactions on Internet and Information Systems | 2011

A Wrist-Type Fall Detector with Statistical Classifier for the Elderly Care

Chankyu Park; Jaehong Kim; Joochan Sohn; Ho-Jin Choi

Falls are among the most worrying accidents for elderly people and often result in serious physical and psychological consequences. Many researchers have studied fall detection techniques in various domains; however, none has been released as a commercial product satisfying user requirements. We present a systematic modeling and evaluation procedure for best classification performance, and then conduct experiments comparing the performance of six procedures to obtain a statistical-classifier-based wrist-type fall detector that prevents dangerous consequences from falls. Even though the wrist may be the most difficult measurement location on the body for discerning a fall event, the proposed feature deduction process and fall classification procedures show positive results using data sets of falls and general activities as two classes.
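A minimal sketch of a wrist-worn fall classifier of this general kind, assuming 3-axis accelerometer windows and two hypothetical features (peak signal magnitude and magnitude variance) with made-up thresholds; the paper's six procedures and statistical classifier are not reproduced here:

```python
import math

def magnitude(sample):
    """Signal magnitude of one 3-axis accelerometer sample (in g)."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def extract_features(window):
    """Peak magnitude and magnitude variance over one sliding window."""
    mags = [magnitude(s) for s in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return max(mags), var

def classify(window, peak_thresh=2.5, var_thresh=0.5):
    """Label 'fall' if both features exceed (hypothetical) thresholds:
    a fall shows a large impact spike plus high variability."""
    peak, var = extract_features(window)
    return "fall" if peak > peak_thresh and var > var_thresh else "activity"

walking = [(0.0, 0.9, 0.3), (0.1, 1.0, 0.2), (0.0, 1.1, 0.3)]
fall    = [(0.1, 1.0, 0.2), (2.1, 2.5, 1.4), (0.0, 0.1, 0.1)]
print(classify(walking), classify(fall))
```

In practice the thresholds would be learned from the labeled fall/activity data sets rather than hand-set as above.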


international conference on machine vision | 2015

Measuring the engagement level of children for multiple intelligence test using Kinect

Dongjin Lee; Woo-han Yun; Chankyu Park; Ho-Sub Yoon; Jaehong Kim; Cheong Hee Park

In this paper, we present an affect recognition system for measuring the engagement level of children using the Kinect while they perform a multiple intelligence test on a computer. First of all, we recorded 12 children while they solved the test and manually created ground-truth data for the engagement level of each child. For feature extraction, the Kinect for Windows SDK provides user segmentation and skeleton tracking, so we can obtain the 3D joint positions of a child's upper-body skeleton. After analyzing the children's movements, the engagement level of each child's responses is classified into two classes: high or low. We present the classification results using the proposed features and identify the significant features for measuring engagement.
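A toy sketch of turning tracked 3D joints into a movement feature and thresholding it into two classes. The joint names, the threshold, and the assumed mapping (little fidgeting read as high engagement) are illustrative guesses, not the paper's actual features:

```python
import math

def joint_movement(frames, joint):
    """Total 3-D displacement of one joint across consecutive frames.
    Each frame is a dict: joint name -> (x, y, z) position."""
    total = 0.0
    for prev, cur in zip(frames, frames[1:]):
        total += math.dist(prev[joint], cur[joint])
    return total

def engagement_level(frames, thresh=0.3):
    """Classify High/Low engagement from overall upper-body movement
    (assumption for illustration: low fidgeting means High engagement)."""
    movement = sum(joint_movement(frames, j) for j in frames[0])
    return "High" if movement < thresh else "Low"

still   = [{"head": (0.0, 0.0, 2.0)}] * 3
fidgety = [{"head": (0.0, 0.0, 2.0)},
           {"head": (0.3, 0.0, 2.0)},
           {"head": (0.0, 0.3, 2.0)}]
print(engagement_level(still), engagement_level(fidgety))
```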


robot and human interactive communication | 2013

Real-time user pose verification in a depth image for simulator

Dongjin Lee; Kye Kyung Kim; Chankyu Park; Ho-Sub Yoon; Jaehong Kim; Cheong Hee Park

We propose a user pose verification technique that uses depth information to compare a user's pose with that of expert riders. An Asus Xtion sensor is used to gather depth data in the real world. The user pose verification algorithm is divided into two steps: user segmentation and user pose verification. In the user segmentation step, body parts are segmented using a region growing algorithm starting from a head point (i.e., the seed point). Then, a simple algorithm is used to generate skeletal joints in the segmented body parts. Finally, the user's pose is verified against the standard pose of experts in the same situation.
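The user segmentation step can be illustrated with a minimal region-growing sketch over a depth map, assuming a known head seed point and a hypothetical depth tolerance:

```python
from collections import deque

def grow_region(depth, seed, tol=30):
    """Segment the user from a depth map by region growing from a seed
    (e.g., the detected head point): breadth-first search that adds
    4-neighbors whose depth is within `tol` units of the current pixel."""
    h, w = len(depth), len(depth[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(depth[nr][nc] - depth[r][c]) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

depth = [[2000, 2000, 8000],
         [2010, 2020, 8000],
         [8000, 2030, 8000]]  # body at ~2 m, background at ~8 m
body = grow_region(depth, (0, 0))
print(sorted(body))
```

The large depth jump between the body and the background is what stops the growth, which is why a single seed on the head suffices.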


international conference on ubiquitous robots and ambient intelligence | 2015

Analysis of children's posture for the bodily kinesthetic test

Dongjin Lee; Woo-han Yun; Chankyu Park; Ho-Sub Yoon; Jaehong Kim

In this paper, we present a children's posture analysis system using the Kinect while they perform yoga poses with the Nao robot. We collected posture data from eighteen kindergarten children, and their posture accuracy was annotated by an expert into four different levels. For feature extraction, we apply an angular skeleton representation in every frame, and those features are aggregated with several different functions over the entire sequence of each pose. We present the results of the posture classification and analysis.
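The angular representation and sequence-level aggregation might look roughly like the sketch below; the 2D joints and the particular aggregation functions are simplifications for illustration:

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))

def aggregate(angles):
    """Aggregate a per-frame angle sequence with several functions."""
    return {"mean": sum(angles) / len(angles),
            "min": min(angles), "max": max(angles)}

# Elbow angle from (shoulder, elbow, wrist) per frame; 2-D for brevity.
frames = [((0, 1), (0, 0), (1, 0)),    # bent arm, 90 degrees
          ((0, 1), (0, 0), (0, -1))]   # straight arm, 180 degrees
elbow = [joint_angle(*f) for f in frames]
print(aggregate(elbow))
```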


enterprise distributed object computing | 2003

ebXML BP modeling toolkit

Jinyoung Moon; Daeha Lee; Chankyu Park; Hyunkyu Cho

Collaboration in a business system requires a business process specification defining the procedure of the business scenario. The business process specification is generated from a business process model. ebXML, the XML-based B2B standard framework for organizations of any size using the Internet, recommends that process analysts and modelers use the UN/CEFACT (United Nations Centre for the Facilitation of Procedures and Practices for Administration, Commerce and Transport) Modeling Methodology (UMM). The artifacts of the modeling are UML (Unified Modeling Language) diagrams and worksheets. They can be transformed into an ebXML business process (BP) specification and other business models. The artifacts and transformed results are registered in the business library so that they can be shared with ebXML systems and reused in other modeling tools. This paper reports on our implementation of an ebXML BP modeling toolkit that follows the architecture suggested in ebXML and covers the required modeling functions. These functions include modeling business processes based on UMM, generating a BP specification, transforming the processes using a metaframework, and registering them in the ebXML registry. The business process modeling toolkit is made up of the business process modeler, the business process editor, and a built-in registry client. The business process modeler not only models the business process with UML diagrams but also generates the business process specification, reverses it, and exports XMI of the business process. The business process editor is used only for editing the business process specification. The built-in registry client stores the business process model, the business process specification, or the XMI document, and searches for and loads them.
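The "generate a BP specification" function can be hinted at with a minimal sketch that serializes an in-memory model to XML. The element names below only loosely follow ebXML BPSS and are simplified for illustration; a real specification carries many more elements and attributes:

```python
import xml.etree.ElementTree as ET

def generate_bp_spec(name, transactions):
    """Serialize a tiny in-memory process model to a BPSS-like document
    (simplified element names, no namespaces)."""
    spec = ET.Element("ProcessSpecification", {"name": name, "version": "1.0"})
    for txn in transactions:
        ET.SubElement(spec, "BusinessTransaction", {"name": txn})
    return spec

spec = generate_bp_spec("OrderManagement", ["PlaceOrder", "ConfirmOrder"])
print(ET.tostring(spec, encoding="unicode"))
```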


international conference on ubiquitous robots and ambient intelligence | 2015

Effective methods to extract PPG signals from face using stochastic state space modeling approach

Chankyu Park; Ho-jin Choi

Photoplethysmography (PPG), which captures heart-rate pulsation, is generally measured on a finger or an ear using contact sensors. Several recent studies have introduced webcam-based PPG measurement in desktop or mobile computing environments, including service robots. However, the motion artifact issue also arises in non-contact camera sensing, as in contact-type sensing, because the measurement is sensitive to artifacts generated by the subject's head and body motion. In this paper, effective methods to extract PPG signals from the face are introduced using a stochastic state space modeling (SSM) approach, and the system parameters are estimated by subspace system identification.
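The structure of such a state space model can be illustrated with a lightly damped oscillator whose output mimics a PPG pulse. In the paper the system matrices are estimated from data by subspace identification; here they are hand-set for a 72 bpm pulse observed at a 30 fps camera rate, purely to show the model form:

```python
import math

# Sketch of a stochastic state-space model for a PPG pulse:
# x_{k+1} = A x_k (+ process noise), y_k = C x_k,
# with A a damped rotation at the heart-rate frequency.
fs, bpm = 30.0, 72.0
omega = 2 * math.pi * (bpm / 60) / fs   # radians per frame
rho = 0.999                             # slight damping
A = [[rho * math.cos(omega), -rho * math.sin(omega)],
     [rho * math.sin(omega),  rho * math.cos(omega)]]
C = [1.0, 0.0]

x = [1.0, 0.0]
y = []
for _ in range(90):                     # 3 seconds of frames
    y.append(C[0] * x[0] + C[1] * x[1])
    x = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]

# Negative-to-positive zero crossings give the pulse period in frames.
crossings = [k for k in range(1, len(y)) if y[k - 1] < 0 <= y[k]]
period = crossings[1] - crossings[0]
print("frames per beat:", period, "estimated bpm:", 60 * fs / period)
```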


international conference on ubiquitous robots and ambient intelligence | 2015

Markerless human body pose estimation from consumer depth cameras for simulator

Dongjin Lee; Chankyu Park; Suyoung Chi; Ho-Sub Yoon; Jaehong Kim

In recent years, many studies have shown that horse riding exercises have positive effects on promoting both physical and psychological health. To maximize these effects, correct posture is essential when riding a horse. Therefore, the purpose of this study is to present an algorithm for estimating a human pose from depth data while riding a horse simulator. This estimated information can be used for analyzing the rider's posture. The proposed rider pose estimation algorithm is divided into four steps: (1) head detection, (2) body part segmentation, (3) joint position prediction, and (4) updating the joint positions. Each step depends on the previous step completing successfully. We compared the experimental results of our joint prediction algorithm against ground truth data to show the performance of the proposed methodology.
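Steps (3) and (4) might look roughly like the sketch below, using a part centroid and exponential smoothing as simple stand-ins for the paper's joint prediction and update:

```python
def predict_joint(part_pixels):
    """Step (3) sketch: predict a joint as the 3-D centroid of its
    segmented body-part pixels (a stand-in for the actual predictor)."""
    n = len(part_pixels)
    return tuple(sum(p[i] for p in part_pixels) / n for i in range(3))

def update_joint(prev, cur, alpha=0.6):
    """Step (4) sketch: blend the new estimate with the previous frame's
    joint (exponential smoothing) to stabilize the trajectory."""
    return tuple(alpha * c + (1 - alpha) * p for p, c in zip(prev, cur))

# Pixels (x, y, z in meters) assigned to the head part by segmentation.
head_pixels = [(0.0, 1.6, 2.0), (0.1, 1.7, 2.0), (-0.1, 1.8, 2.1)]
head = predict_joint(head_pixels)
head = update_joint((0.0, 1.7, 2.0), head)
print(head)
```

Because each step feeds the next, a failed head detection in step (1) would invalidate the whole frame, which matches the stated dependency between steps.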


Seventh International Conference on Graphic and Image Processing (ICGIP 2015) | 2015

Distance-invariant automatic engagement level recognition using visual cues

Woo-han Yun; Dongjin Lee; Chankyu Park; Jaehong Kim

In camera-based engagement level recognition, the face is an important factor because cues mainly come from the face, which is affected by the distance between the camera and the user. In this paper, we present an automatic engagement level recognition method that shows stable performance regardless of the distance between the camera and the user. We describe in detail the process for obtaining a distance-invariant cue and compare performance with and without it. We also adopt a temporal pyramid structure to extract temporal statistical features and present a voting method for engagement level estimation. We show the results and analysis using a database acquired in a real environment.
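The temporal pyramid idea can be sketched as computing a statistic of a per-frame cue over nested segmentations of the sequence (whole, then halves, then quarters); the mean is used here as the statistic for illustration:

```python
def temporal_pyramid(seq, levels=3):
    """Mean of a per-frame cue over a temporal pyramid: level l splits the
    sequence into 2**l equal segments. Assumes len(seq) is divisible by
    2**(levels - 1) so no segment is empty."""
    feats = []
    n = len(seq)
    for l in range(levels):
        parts = 2 ** l
        for p in range(parts):
            seg = seq[p * n // parts:(p + 1) * n // parts]
            feats.append(sum(seg) / len(seg))
    return feats

cue = [0.2, 0.4, 0.6, 0.8]  # e.g., a per-frame facial cue
print(temporal_pyramid(cue))
```

Finer levels capture when a cue occurs, not just how much, which is what a single whole-sequence mean would lose.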

Collaboration


Dive into Chankyu Park's collaborations.

Top Co-Authors

Jaehong Kim, Electronics and Telecommunications Research Institute
Joo Chan Sohn, Electronics and Telecommunications Research Institute
Dongjin Lee, Electronics and Telecommunications Research Institute
Woo-han Yun, Electronics and Telecommunications Research Institute
Ho-Sub Yoon, Electronics and Telecommunications Research Institute
Daeha Lee, Electronics and Telecommunications Research Institute
Do Hyung Kim, Electronics and Telecommunications Research Institute
Ho-Jin Choi, Information and Communications University
Hyun Kyu Cho, Electronics and Telecommunications Research Institute
Jae Yeon Lee, Electronics and Telecommunications Research Institute