Publications


Featured research published by Jun-Cheol Park.


International Conference on Ubiquitous Information Management and Communication | 2016

A Real-time Facial Expression Recognizer using Deep Neural Network

Jinwoo Jeon; Jun-Cheol Park; Youngjoo Jo; Changmo Nam; Kyung-Hoon Bae; Youngkyoo Hwang; Dae-Shik Kim

As deep learning based recognition models have matured, demand has grown for real-time recognizers of user state targeting smart home devices. This paper presents a real-time facial expression recognizer to meet that demand. Our pipeline uses a HOG feature descriptor to detect the face, a correlation tracker to follow the detected face across frames, and a deep Convolutional Neural Network (CNN) based recognizer to classify the expression. The CNN model is trained and tested on the Kaggle facial expression recognition challenge dataset. Experimental results show that the recognizer achieves high test accuracy with low computation time, enabling real-time, high-performance expression recognition for mobile use.
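The detect-track-classify loop described above can be sketched with off-the-shelf components. A minimal sketch, assuming dlib's HOG face detector and correlation tracker in place of the paper's components, and a hypothetical expression_cnn classifier (e.g., a CNN trained on the Kaggle FER dataset) supplied by the user:

```python
# Sketch of the detect-track-classify loop. dlib's HOG face detector and
# correlation tracker stand in for the paper's components; `expression_cnn`
# is a hypothetical classifier mapping a 48x48 face crop to a label string.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()  # HOG-based face detector
tracker = dlib.correlation_tracker()
tracking = False

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    if not tracking:
        faces = detector(rgb)                # (re)detect when not tracking
        if faces:
            tracker.start_track(rgb, faces[0])
            tracking = True
    elif tracker.update(rgb) < 7.0:          # low confidence: re-detect
        tracking = False
    else:
        pos = tracker.get_position()
        x0, y0 = max(0, int(pos.left())), max(0, int(pos.top()))
        x1, y1 = int(pos.right()), int(pos.bottom())
        face = cv2.resize(rgb[y0:y1, x0:x1], (48, 48))
        label = expression_cnn(face)         # hypothetical CNN recognizer
        cv2.putText(frame, label, (x0, y0), cv2.FONT_HERSHEY_SIMPLEX,
                    0.8, (0, 255, 0), 2)

    cv2.imshow("expression", frame)
    if cv2.waitKey(1) == 27:                 # Esc quits
        break
cap.release()
```

Tracking between detections is what keeps the loop real-time: the HOG detector only runs again when the tracker's confidence drops.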


Frontiers in Psychology | 2012

Predictive coding strategies for developmental neurorobotics

Jun-Cheol Park; Jae Hyun Lim; Hansol Choi; Dae-Shik Kim

In recent years, predictive coding strategies have been proposed as a possible means by which the brain might make sense of the truly overwhelming amount of sensory data available to it at any given moment. Rather than processing the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expects to happen and what actually happens. In this paper, we present a variety of developmental neurorobotics experiments in which minimalist prediction error-based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches are first naively Piagetian, then move on to more Vygotskian ideas. More specifically, we investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise when minimizing prediction error is used as the objective function.
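The core idea of using prediction error as the objective function can be illustrated with a toy forward model. A minimal sketch in PyTorch, assuming a generic state/action robot interface with placeholder dimensions; this is not the authors' actual architecture:

```python
# Toy forward model trained by minimizing prediction error, the objective
# discussed above. State and action dimensions are arbitrary placeholders.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next sensory state from the current state and action."""
    def __init__(self, state_dim=8, action_dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

model = ForwardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(state, action, next_state):
    # The prediction error itself is the quantity being minimized.
    loss = ((model(state, action) - next_state) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```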


Simulation of Adaptive Behavior | 2014

Developmental Dynamics of RNNPB: New Insight about Infant Action Development

Jun-Cheol Park; Dae-Shik Kim; Yukie Nagai

Developmental studies have suggested that infants' actions are goal-directed. When imitating an action, younger infants tend to reproduce the goal while ignoring the means (i.e., the movement used to achieve the goal), whereas older infants can imitate both. We suggest that the developmental dynamics of a Recurrent Neural Network with Parametric Bias (RNNPB) may explain the mechanism of this development. Our RNNPB model was trained to reproduce six types of actions (2 different goals x 3 different means), during which the parametric biases self-organized to represent differences in both the goal and the means. Our analysis of the self-organization of the parametric biases revealed an infant-like developmental change in action learning: the RNNPB adapted first to the goal and only later to the means. The differing saliency of these two features caused this phased development. We discuss the analogy between our result and infant action development.
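For readers unfamiliar with the architecture, a minimal RNNPB sketch may help. Each training sequence gets its own learnable parametric-bias vector, concatenated to every input frame, which self-organizes as the network learns to predict the next motor frame. Dimensions and the training loop are illustrative, not the paper's exact setup:

```python
# Minimal RNNPB: one learnable parametric-bias (PB) vector per training
# sequence; the PB vectors self-organize during training.
import torch
import torch.nn as nn

class RNNPB(nn.Module):
    def __init__(self, motor_dim=4, pb_dim=2, hidden=32, n_seq=6):
        super().__init__()
        self.rnn = nn.RNNCell(motor_dim + pb_dim, hidden)
        self.out = nn.Linear(hidden, motor_dim)
        self.pb = nn.Parameter(torch.zeros(n_seq, pb_dim))

    def forward(self, seq_id, traj):
        """traj: (T, motor_dim) trajectory; predicts frames 1..T-1."""
        h = torch.zeros(1, self.rnn.hidden_size)
        pb = self.pb[seq_id].unsqueeze(0)
        preds = []
        for t in range(traj.shape[0] - 1):
            h = self.rnn(torch.cat([traj[t:t + 1], pb], dim=-1), h)
            preds.append(self.out(h))        # predict the next motor frame
        return torch.cat(preds)

model = RNNPB()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_epoch(trajs):
    """trajs: six (T, 4) tensors, one per action (2 goals x 3 means)."""
    for i, traj in enumerate(trajs):
        loss = ((model(i, traj) - traj[1:]) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```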


IEEE Transactions on Cognitive and Developmental Systems | 2018

Learning for Goal-Directed Actions Using RNNPB: Developmental Change of “What to Imitate”

Jun-Cheol Park; Dae-Shik Kim; Yukie Nagai

“What to imitate” is one of the most important and difficult issues in robot imitation learning. A possible engineering solution is to focus on the salient properties of actions. In this paper, we investigate the developmental change of what to imitate in robot action learning. Our robot is equipped with a recurrent neural network with parametric bias (RNNPB) and learns to imitate multiple goal-directed actions in two different environments (a simulation and a real humanoid robot). A close analysis of the error measures and the internal representation of the RNNPB revealed that the most salient properties of the actions (reaching the desired end point of the motor trajectory) were learned first, while the less salient properties (matching the shape of the motor trajectory) were learned later. Interestingly, this result is analogous to the developmental process of action imitation in human infants. We discuss the importance of our results for understanding the underlying mechanisms of human development.
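The contrast between the two properties can be made concrete with two simple measures over produced and target trajectories; these are illustrative stand-ins, as the paper's exact error measures may differ:

```python
# Illustrative stand-ins for the two kinds of error contrasted above;
# trajectories are (T, dof) arrays of equal length.
import numpy as np

def goal_error(produced, target):
    """Distance between final postures: how well the goal is reproduced."""
    return np.linalg.norm(produced[-1] - target[-1])

def means_error(produced, target):
    """Mean pointwise distance along the path: how well the means
    (trajectory shape) is reproduced."""
    return np.linalg.norm(produced - target, axis=1).mean()
```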


International Conference on Ubiquitous Information Management and Communication | 2016

Indoor Human Activity Recognition with Contextual Cues in Videos

Changmo Nam; Jun-Cheol Park; Dae-Shik Kim

Researchers have recently been trying to recognize human activity using joint information in RGB-D human activity datasets. However, recognizing the objects a person interacts with is also necessary to improve activity classification accuracy, because an object provides important cues for classifying the activity. In this paper, we describe an effective way of detecting the object region related to a human activity, using human joint information and its movement properties. Experimental results show that our proposed method reliably detects the object region the person is interacting with.
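As an illustration of the joint-guided idea, here is a minimal sketch that proposes an object box around a moving hand joint; the window size and motion test are assumptions for illustration, not the paper's method:

```python
# Sketch of joint-guided object region proposal.
import numpy as np

def object_region(hand_positions, window=64, motion_thresh=5.0):
    """hand_positions: (T, 2) pixel coordinates of a hand joint. Returns
    an (x0, y0, x1, y1) box around the hand when it is moving, on the
    assumption that a manipulated object travels with the hand."""
    motion = np.linalg.norm(np.diff(hand_positions, axis=0), axis=1)
    if motion.mean() < motion_thresh:
        return None                     # hand is idle: no interaction
    cx, cy = hand_positions[-1]
    half = window // 2
    return (int(cx - half), int(cy - half), int(cx + half), int(cy + half))
```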


International Conference on Ubiquitous Information Management and Communication | 2016

A Real-time Object Tracker equipped with Deep Object Recognizer

Youngjoo Jo; Jun-Cheol Park; Jinwoo Jeon; Changmo Nam; Junghee Han; Yongin Park; Dae-Shik Kim

Intelligent visual tracking systems typically focus on tracking an unknown object over a short or mid-term video stream in an unconstrained environment. They also commonly handle only a few kinds of objects, for specific programmed tasks such as moving an object or opening a door; that is, many visual trackers do not consider the general category of the tracked object. A more intelligent visual tracking system requires long-term, robust tracking of unknown objects combined with recognition. We propose a real-time intelligent tracking system that couples a recognizer with the Tracking-Learning-Detection (TLD) tracking framework and a deep convolutional neural network, and operates over long-term scenarios. The system is validated on long-term surveillance footage, tracking and recognizing objects.
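A minimal sketch of such a track-and-recognize loop, assuming the TLD implementation shipped in opencv-contrib (cv2.legacy.TrackerTLD_create) and a hypothetical recognize_cnn that maps an image patch to a category label:

```python
# Sketch of a track-and-recognize loop using opencv-contrib's TLD tracker;
# `recognize_cnn` is a hypothetical deep recognizer (patch -> label).
import cv2

cap = cv2.VideoCapture("surveillance.mp4")   # assumed input video
ok, frame = cap.read()
box = cv2.selectROI("init", frame)           # user marks the unknown object
tracker = cv2.legacy.TrackerTLD_create()
tracker.init(frame, box)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    found, (x, y, w, h) = tracker.update(frame)
    if found:
        patch = frame[int(y):int(y + h), int(x):int(x + w)]
        label = recognize_cnn(patch)         # hypothetical recognizer
        cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)),
                      (0, 255, 0), 2)
        cv2.putText(frame, label, (int(x), int(y) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```

TLD's re-detection component is what makes the long-term setting workable: when the object leaves and re-enters the frame, the detector recovers it and tracking resumes.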


International Conference on Control and Automation | 2016

Structured output tracking with deep neural network and optical flow

Youngjoo Jo; Jun-Cheol Park; Dae-Shik Kim

Deep neural networks perform well on vision recognition and classification tasks and can extract strong image features for classification. Recently, many studies have approached visual tracking in two ways using these characteristics. First, tracking can be cast as a classification problem, learned over each video and frame from the full dataset. Second, the deep network can serve as a feature generator whose features feed another classifier, such as a Support Vector Machine (SVM); in this second setting, the features can be used to learn discriminative target appearance models, for example with an online SVM. We propose an adaptive visual tracking framework based on structured output SVM learning with Convolutional Neural Network (CNN) features and median-flow tracking. Our framework uses a kernelized structured output SVM over CNN features, learned online to provide adaptive tracking, combined with a median tracker. It can generate varied online training data and exploit it to obtain diverse features. The proposed framework is compared with state-of-the-art trackers on existing tracking benchmarks.
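The second approach, CNN features feeding an online structured learner, can be sketched compactly. Below, a linear model stands in for the kernelized structured output SVM, and cnn_features is a hypothetical extractor (e.g., activations of a pretrained network, returning a 1-D vector per box):

```python
# Sketch: CNN features scored by an online structured learner over
# candidate boxes. A linear model stands in for the kernelized SVM;
# `cnn_features(frame, box)` is a hypothetical 1-D feature extractor.
import numpy as np

w = None                                     # weight vector, lazily sized

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1 = min(a[0] + a[2], b[0] + b[2])
    y1 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def track_step(frame, candidates, lr=0.01):
    """Score candidates, pick the best, then make a structured update:
    the chosen box should outscore each other box by their IoU gap."""
    global w
    feats = np.stack([cnn_features(frame, c) for c in candidates])
    if w is None:
        w = np.zeros(feats.shape[1])
    scores = feats @ w
    best = int(np.argmax(scores))
    losses = 1 - np.array([iou(c, candidates[best]) for c in candidates])
    viol = int(np.argmax(scores + losses - scores[best]))  # most violating
    if viol != best:
        w += lr * (feats[best] - feats[viol])
    return candidates[best]
```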


International Conference on Computer Graphics and Interactive Techniques | 2013

Applications of projection based interactive user interface

Jun-Cheol Park; Yunhun Jang; Hyungwon Choi; Sihyeon Seong; Sunghun Kang; Dae-Shik Kim

Recently, various approaches have used interactive projection systems as user interfaces, owing to their broad applicability to projection surfaces [Wilson 2005][Cao et al. 2007]. Among projection surfaces, the whiteboard is regarded as the most suitable medium for interacting with people through writing or drawing. In this work, we propose an interactive user interface system, along with several applications, for a projector-based whiteboard. The system supports interaction through hand-drawing detection and simple hand-gesture recognition (moving or clicking).
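One of the interaction primitives mentioned, clicking, can be sketched as dwell detection over a projected button; this assumes a fingertip position is already extracted per frame, and the detection step itself is outside this sketch:

```python
# Sketch of dwell-based "clicking" on a projected button; fingertip
# extraction is assumed to happen elsewhere.
import time

class DwellButton:
    def __init__(self, x0, y0, x1, y1, dwell=0.8):
        self.box = (x0, y0, x1, y1)
        self.dwell = dwell          # seconds the fingertip must stay inside
        self.entered = None

    def update(self, fingertip):
        """Returns True once when the fingertip has dwelled long enough."""
        x, y = fingertip
        x0, y0, x1, y1 = self.box
        if x0 <= x <= x1 and y0 <= y <= y1:
            if self.entered is None:
                self.entered = time.monotonic()
            elif time.monotonic() - self.entered >= self.dwell:
                self.entered = None
                return True         # click fired
        else:
            self.entered = None
        return False
```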


International Symposium on Neural Networks | 2012

Learning spatio-temporally invariant representations from video

Jae Hyun Lim; Hansol Choi; Jun-Cheol Park; Jae Young Jun; Dae-Shik Kim

Learning invariant representations of environments through experience has been an important area of research both in machine learning and in computational neuroscience. In the present study, we propose a novel unsupervised method for discovering invariants from a single video input, based on learning the spatio-temporal relationships of the inputs. In an experiment, we tested the learning of spatio-temporally invariant features from a single video showing rotational movements of the faces of several subjects. The results demonstrate that the proposed system, which learns invariants from spatio-temporal continuity, is a compelling unsupervised method for learning invariants from input with temporal structure.
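A minimal sketch of learning from spatio-temporal continuity: an encoder is trained so that codes of consecutive frames stay close (a slowness/temporal-coherence objective), with a variance term to prevent collapse. The architecture and loss weights are illustrative, not the paper's exact model:

```python
# Sketch of a slowness/temporal-coherence objective on video frames.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.Tanh())
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def temporal_coherence_step(frames):
    """frames: (T, 1, 32, 32) consecutive grayscale video frames."""
    z = encoder(frames)
    slow = ((z[1:] - z[:-1]) ** 2).mean()      # consecutive codes stay close
    spread = ((z.var(dim=0) - 1) ** 2).mean()  # avoid a constant solution
    loss = slow + spread
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```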


International Conference on Neural Information Processing | 2012

Apparent volitional behavior selection based on memory predictions

Jun-Cheol Park; Jae Hyeon Yoo; Juhyeon Lee; Dae-Shik Kim

Volitional movement is a hallmark of human behavior. How such well-intended concatenations of behaviors are achieved remains, however, elusive. In the present study, we hypothesized that visual memories of past motion trajectories may be used to select future behavior. Based on this memory-prediction hypothesis, we designed motor planning experiments that generate a new path toward a fixed goal using only visual memories of past motor trajectories. We conducted simulation experiments and applied the motion planning algorithm to a humanoid robot. Our results suggest that a new motor trajectory for a fixed goal can be generated from learned visual memories of past behaviors.
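As a concrete, simplified reading of the memory-prediction idea: generate a new trajectory toward a fixed goal by blending stored trajectories whose endpoints lie near that goal. The nearest-neighbour blending rule below is an illustrative stand-in for the paper's method:

```python
# Simplified sketch: blend stored trajectories whose endpoints lie near
# the goal to produce a new trajectory.
import numpy as np

def plan_from_memory(memory, goal, k=3):
    """memory: list of (T, dof) past motor trajectories (equal length T).
    goal: (dof,) target posture. Returns a new (T, dof) trajectory."""
    ends = np.stack([traj[-1] for traj in memory])
    dists = np.linalg.norm(ends - goal, axis=1)
    nearest = np.argsort(dists)[:k]
    wts = 1.0 / (dists[nearest] + 1e-6)        # inverse-distance weights
    wts /= wts.sum()
    return sum(wt * memory[i] for wt, i in zip(wts, nearest))
```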
