Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Heechul Jung is active.

Publication


Featured research published by Heechul Jung.


International Conference on Computer Vision | 2015

Joint Fine-Tuning in Deep Neural Networks for Facial Expression Recognition

Heechul Jung; Sihaeng Lee; Junho Yim; Sunjeong Park; Junmo Kim

Temporal information provides useful features for recognizing facial expressions, but manually designing such features requires considerable effort. In this paper, to reduce this effort, a deep learning technique, regarded as a tool for automatically extracting useful features from raw data, is adopted. Our deep network is based on two different models. The first deep network extracts temporal appearance features from image sequences, while the other extracts temporal geometry features from temporal facial landmark points. These two models are combined using a new integration method to boost the performance of facial expression recognition. Through several experiments, we show that the two models cooperate with each other. As a result, we achieve performance superior to other state-of-the-art methods on the CK+ and Oulu-CASIA databases. Furthermore, we show that our new integration method gives more accurate results than traditional methods such as weighted summation and feature concatenation.
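The two traditional fusion baselines the abstract compares against can be sketched in a few lines of Python. This is an illustrative sketch only — the probability vectors and the weight `alpha` below are toy values, not taken from the paper:

```python
def weighted_summation(probs_a, probs_b, alpha=0.5):
    """Fuse two networks' class-probability vectors by a convex combination."""
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(probs_a, probs_b)]

def feature_concatenation(feat_a, feat_b):
    """Fuse by concatenating feature vectors before a final classifier."""
    return list(feat_a) + list(feat_b)

# Hypothetical per-class probabilities from the appearance and geometry networks
p_app = [0.7, 0.2, 0.1]
p_geo = [0.4, 0.5, 0.1]
fused = weighted_summation(p_app, p_geo, alpha=0.5)
predicted = max(range(len(fused)), key=fused.__getitem__)
```

The paper's contribution is a new integration method that outperforms both of these baselines; the sketch only shows what the baselines compute.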


Computer Vision and Pattern Recognition | 2015

Rotating your face using multi-task deep neural network

Junho Yim; Heechul Jung; ByungIn Yoo; Changkyu Choi; Du-sik Park; Junmo Kim

Face recognition under viewpoint and illumination changes is a difficult problem, so many researchers have tried to solve it by producing pose- and illumination-invariant features. Zhu et al. [26] transformed images of arbitrary pose and illumination to the frontal view to use as the invariant feature. In this scheme, preserving identity while rotating the pose image is a crucial issue. This paper proposes a new deep architecture based on a novel type of multi-task learning, which achieves superior performance in rotating an image of arbitrary pose and illumination to a target-pose face image while preserving identity. The target pose can be controlled by the user's intention. This novel multi-task model significantly improves identity preservation over the single-task model. By using all the synthesized controlled pose images, called Controlled Pose Images (CPI), for the pose- and illumination-invariant feature, and voting among the multiple face recognition results, we clearly outperform the state-of-the-art algorithms by more than 4-6% on the MultiPIE dataset.


Computer Vision and Pattern Recognition | 2014

Rigid Motion Segmentation Using Randomized Voting

Heechul Jung; Jeongwoo Ju; Junmo Kim

In this paper, we propose a novel rigid motion segmentation algorithm called randomized voting (RV). The algorithm is based on epipolar geometry and computes a score using the distance between each feature point and the corresponding epipolar line. This score is accumulated and utilized for the final grouping. Our algorithm fundamentally deals with two frames, so it is also applicable to the two-view motion segmentation problem. For evaluation, we adopt the Hopkins 155 dataset, a representative test set for rigid motion segmentation consisting of sequences with two and three rigid motions. Our algorithm provides the most accurate motion segmentation results among all of the state-of-the-art algorithms, with an average error rate of 0.77%. In addition, under measurement noise, our algorithm remains comparable with other state-of-the-art algorithms.
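The core quantity the abstract describes — the distance from a feature point to its epipolar line, accumulated as votes — can be sketched as follows. The fundamental matrix `F`, points, and threshold here are toy values for illustration, not the paper's sampling or update scheme:

```python
import math

def epipolar_distance(F, x, xp):
    """Distance from point xp to the epipolar line l = F @ x.
    Points are homogeneous 3-vectors; F is a 3x3 fundamental matrix (nested lists)."""
    a, b, c = (sum(F[i][j] * x[j] for j in range(3)) for i in range(3))
    return abs(a * xp[0] + b * xp[1] + c * xp[2]) / math.hypot(a, b)

def vote(F, pts1, pts2, threshold=1.0):
    """One voting round: each correspondence lying close to its epipolar line
    under the sampled F receives a vote toward that motion group."""
    return [1 if epipolar_distance(F, p, q) < threshold else 0
            for p, q in zip(pts1, pts2)]

# Toy F whose epipolar line for x = (0, 0, 1) is v = 1, i.e. the line (0, 1, -1)
F = [[0, 0, 0], [0, 0, 1], [0, 0, -1]]
d = epipolar_distance(F, (0, 0, 1), (3, 4, 1))
```

In the actual algorithm, fundamental matrices are repeatedly sampled from random point subsets and these per-point scores are accumulated across rounds before the final grouping.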


IEEE Intelligent Vehicles Symposium | 2013

An efficient lane detection algorithm for lane departure detection

Heechul Jung; Junggon Min; Junmo Kim

In this paper, we propose an efficient lane detection algorithm for lane departure detection that is suitable for low-computing-power systems such as automobile black boxes. First, we extract candidate points, which serve as support points, to form a hypothesis of two lines. In this step, Haar-like features are used, which enables us to use an integral image to remove computational redundancy. Second, our algorithm verifies the hypothesis using defined rules based on the assumption that the camera is installed at the center of the vehicle. Finally, if a lane is detected, a lane departure detection step is performed. As a result, our algorithm achieves a 90.16% detection rate, with a processing time of approximately 0.12 milliseconds per frame without any parallel computing.


Revista de Informática Teórica e Aplicada | 2015

Image Classification Using Convolutional Neural Networks With Multi-stage Feature

Junho Yim; Jeongwoo Ju; Heechul Jung; Junmo Kim

Convolutional neural networks (CNNs) have been widely used in automatic image classification systems. In most cases, features from the top layer of the CNN are utilized for classification; however, those features may not contain enough useful information to predict an image correctly. In some cases, features from the lower layers carry more discriminative power than those from the top. Therefore, applying features from only a single layer to classification does not utilize the learned CNN's discriminative power to its full extent. This inherent property leads to the need for fusing features from multiple layers. To address this problem, we propose a method of combining features from multiple layers of given CNN models. Moreover, CNN models already trained on the training images are reused to extract features from multiple layers. The proposed fusion method is evaluated on the image classification benchmark datasets CIFAR-10, NORB, and SVHN. In all cases, we show that the proposed method improves the reported performance of the existing models, by 0.38%, 3.22%, and 0.13%, respectively.


Korea-Japan Joint Workshop on Frontiers of Computer Vision | 2015

Development of deep learning-based facial expression recognition system

Heechul Jung; Sihaeng Lee; Sunjeong Park; Byungju Kim; Junmo Kim; Injae Lee; Chung-Hyun Ahn

Deep learning is considered a breakthrough in the field of computer vision, since most of the records in recognition tasks are being broken by it. In this paper, we apply such deep learning techniques to recognizing facial expressions that represent human emotions. The procedure of our facial expression recognition system is as follows. First, a face is detected in the input image using Haar-like features. Second, a deep network recognizes the facial expression from the detected face. In this step, two different deep networks can be used: a deep neural network and a convolutional neural network. We experimentally compared the two types of networks, and the convolutional neural network performed better than the deep neural network.


Computer Vision and Pattern Recognition | 2017

ResNet-Based Vehicle Classification and Localization in Traffic Surveillance Systems

Heechul Jung; Min-Kook Choi; Jihun Jung; Jinhee Lee; Soon Kwon; Woo Young Jung

In this paper, we present ResNet-based vehicle classification and localization methods using real traffic surveillance recordings. We utilize the MIOvision traffic dataset, which comprises 11 categories covering a variety of vehicles, such as bicycles, buses, cars, and motorcycles. To improve classification performance, we exploit a technique called joint fine-tuning (JF). In addition, we propose a dropping CNN (DropCNN) method to create a synergy effect with JF. For localization, we implement the basic concepts of a state-of-the-art region-based detector combined with a backbone convolutional feature extractor using 50- and 101-layer residual networks, and we ensemble them into a single model. Finally, we achieve the highest accuracy on the dataset in both the classification and localization tasks among several state-of-the-art methods reported on the website, including VGG16, AlexNet, and ResNet50 for classification, and YOLO, Faster R-CNN, and SSD for localization.


Revista de Informática Teórica e Aplicada | 2013

Speaker Dependent Visual Speech Recognition by Symbol and Real Value Assignment

Jeongwoo Ju; Heechul Jung; Junmo Kim

In this paper, we propose a visual speech recognition method using symbol or real-value assignment. Our method is inspired by the Bag of Words (BoW) [1] model, which is usually applied to object matching problems. In the BoW model, a codebook is produced using K-means clustering, and a feature vector extracted from an image is converted to a corresponding symbol. Similarly, we generate a codebook by running the K-means algorithm on a pool of PHOG (Pyramid Histogram of Oriented Gradients) feature vectors extracted from a subset of a lip database. The remaining lip images are then assigned a particular value based on their chi-square distance to each cluster. Depending on the type of this value, two assignment methods are suggested. The first finds the cluster whose element images have the minimum chi-square distance to the frame being processed and assigns that cluster's label to the frame. The second calculates the distances between the frame and all cluster centroids, yielding a multi-dimensional vector that directly becomes the frame's assigned value. Following these methods, each time sequence is converted into either a symbolized or a multi-dimensional real-valued sequence. To measure the similarity between two time sequences, we use Dynamic Time Warping for real-valued sequences and edit distance for symbolized sequences.
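A minimal sketch of the symbol-assignment and edit-distance pieces described above. The codebook centroids and frame features below are toy values, not PHOG vectors from the paper:

```python
def chi_square(p, q, eps=1e-10):
    """Chi-square distance between two histogram-like feature vectors."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

def assign_symbol(frame_feat, centroids, dist):
    """Label a frame with the index of its nearest codebook centroid."""
    return min(range(len(centroids)), key=lambda k: dist(frame_feat, centroids[k]))

def edit_distance(s, t):
    """Levenshtein distance between two symbol sequences (row-by-row DP)."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

centroids = [[1.0, 0.0], [0.0, 1.0]]  # toy codebook, as if from K-means
frames = [[0.9, 0.1], [0.2, 0.8]]
symbols = [assign_symbol(f, centroids, chi_square) for f in frames]
```

Each lip sequence thus becomes a string of cluster labels, and two utterances are compared by the edit distance between their label strings (or by DTW when the real-valued distance vectors are kept instead).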


International Conference on Consumer Electronics | 2016

Real-time personalized facial expression recognition system based on deep learning

Injae Lee; Heechul Jung; Chung Hyun Ahn; Jeongil Seo; Junmo Kim; Oh-Seok Kwon

Over the last few years, deep learning has produced breakthrough results in many application fields, including speech recognition and image understanding. We apply deep learning techniques to real-time facial expression recognition instead of hand-crafted feature-based methods. The proposed system can recognize human emotions from facial expressions using a webcam. It can detect faces and recognize users at a distance of 2-3 m in a TV environment, and it can determine whether a user is feeling happiness, sadness, surprise, anger, disgust, a neutral state, or any combination of those six emotions. The experimental results show that the proposed method achieves high accuracy. It can be used for various services such as consumer behavior research, usability studies, psychology, educational research, and market research.


International Conference on Consumer Electronics | 2013

High-definition video-based multi-channel top-view vehicle surrounding monitoring system for mobile navigation devices

SungRyull Sohn; Hansang Lee; Heechul Jung; Junmo Kim

Providing visual information around the automobile can free drivers of blind spots, which is helpful on several occasions such as parking, lane changes, and backward movement. In this paper, we present a vehicle surrounding monitoring system with multi-channel high-definition (HD) video, embedded in portable navigation devices. The system collects three channels of video input from the rear, left, and right sides of the vehicle and transforms each input into a top-view image. Finally, the transformed outputs are aligned and displayed on the screen to assist the driver. Implementation results show that the system provides HD vehicle surrounding monitoring video that is especially helpful for parking assistance.

Collaboration


Dive into Heechul Jung's collaborations.

Top Co-Authors

Injae Lee
Electronics and Telecommunications Research Institute

Chung-Hyun Ahn
Electronics and Telecommunications Research Institute

Soon Kwon
Daegu Gyeongbuk Institute of Science and Technology