Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Hyung Jin Chang is active.

Publication


Featured research published by Hyung Jin Chang.


Computer Vision and Pattern Recognition | 2014

Latent Regression Forest: Structured Estimation of 3D Articulated Hand Posture

Danhang Tang; Hyung Jin Chang; Alykhan Tejani; Tae-Kyun Kim

In this paper we present the Latent Regression Forest (LRF), a novel framework for real-time 3D hand pose estimation from a single depth image. In contrast to prior forest-based methods, which take dense pixels as input, classify them independently, and then estimate joint positions afterwards, our method can be considered a structured coarse-to-fine search, starting from the centre of mass of a point cloud and proceeding until all the skeletal joints are located. The search is guided by a learnt Latent Tree Model which reflects the hierarchical topology of the hand. Our main contributions can be summarised as follows: (i) learning the topology of the hand in an unsupervised, data-driven manner; (ii) a new forest-based, discriminative framework for structured search in images, as well as an error regression step to avoid error accumulation; (iii) a new multi-view hand pose dataset containing 180K annotated images from 10 different subjects. Our experiments show that the LRF outperforms state-of-the-art methods in both accuracy and efficiency.
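
A minimal sketch of the coarse-to-fine idea described above: a latent tree whose nodes store offsets, descended from the point-cloud centre of mass until the leaves (joints) are reached. The topology and offsets here are made-up stand-ins for the learned Latent Tree Model, not the paper's code.

```python
import numpy as np

class LatentTreeNode:
    def __init__(self, offset, children=()):
        self.offset = np.asarray(offset, dtype=float)  # learned mean offset to this part
        self.children = list(children)                 # sub-parts of the hand

def locate_joints(node, position, joints=None):
    """Recursively refine from the centre of mass down to skeletal joints."""
    if joints is None:
        joints = []
    position = position + node.offset     # move towards this part's centre
    if not node.children:                 # a leaf corresponds to one joint
        joints.append(position)
    for child in node.children:
        locate_joints(child, position, joints)
    return joints

# Toy topology: palm centre -> two fingers -> fingertip joints.
tree = LatentTreeNode([0, 0, 0], [
    LatentTreeNode([1.0, 0.5, 0.0], [LatentTreeNode([0.5, 0.2, 0.0])]),
    LatentTreeNode([-1.0, 0.5, 0.0], [LatentTreeNode([-0.5, 0.2, 0.0])]),
])
centre_of_mass = np.array([10.0, 20.0, 50.0])
print(locate_joints(tree, centre_of_mass))
```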


Computer Vision and Pattern Recognition | 2013

Detection of Moving Objects with Non-stationary Cameras in 5.8ms: Bringing Motion Detection to Your Mobile Device

Kwang Moo Yi; Kimin Yun; Soo Wan Kim; Hyung Jin Chang; Hawook Jeong; Jin Young Choi

Detecting moving objects on mobile cameras in real time is a challenging problem due to computational limits and the motion of the camera. In this paper, we propose a method for moving object detection on non-stationary cameras that runs within 5.8 milliseconds (ms) on a PC and in real time on mobile devices. To achieve real-time capability with satisfactory performance, the proposed method models the background through a dual-mode single Gaussian model (SGM) with age, and compensates for the motion of the camera by mixing neighboring models. Modeling through the dual-mode SGM prevents the background model from being contaminated by foreground pixels, while still allowing the model to adapt to changes in the background. Mixing neighboring models reduces the errors arising from motion compensation, and their influence is further reduced by keeping the age of the model. Also, to decrease the computational load, the proposed method applies one dual-mode SGM to multiple pixels without performance degradation. Experimental results show the computational lightness and real-time capability of our method on a smartphone, with robust detection performance.
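
A minimal sketch, not the paper's implementation, of a dual-mode single Gaussian model with an age term: a candidate model absorbs new evidence and replaces the background model only once it has aged enough, which keeps foreground pixels from contaminating the background. The learning rates and the swap threshold below are illustrative assumptions.

```python
import numpy as np

class DualModeSGM:
    def __init__(self, shape, var0=400.0, k=2.5, swap_age=10):
        self.mu = [np.zeros(shape), np.zeros(shape)]      # [background, candidate]
        self.var = [np.full(shape, var0), np.full(shape, var0)]
        self.age = [np.zeros(shape), np.zeros(shape)]
        self.k, self.swap_age = k, swap_age

    def update(self, frame):
        frame = frame.astype(float)
        fits_bg = (frame - self.mu[0]) ** 2 < self.k ** 2 * self.var[0]
        for m in (0, 1):
            sel = fits_bg if m == 0 else ~fits_bg
            a = self.age[m]
            alpha = np.where(sel, 1.0 / (a + 1.0), 0.0)   # age-weighted rate
            self.mu[m] += alpha * (frame - self.mu[m])
            self.var[m] += alpha * ((frame - self.mu[m]) ** 2 - self.var[m])
            a[sel] += 1
        # The candidate replaces the background once it is old enough.
        swap = self.age[1] > self.swap_age
        for bg, cand in zip((self.mu[0], self.var[0], self.age[0]),
                            (self.mu[1], self.var[1], self.age[1])):
            bg[swap] = cand[swap]
        self.age[1][swap] = 0
        return ~fits_bg                                    # foreground mask

model = DualModeSGM((4, 4))
mask = model.update(np.random.randint(0, 255, (4, 4)))
print(mask)
```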


Pattern Recognition | 2014

Robust action recognition using local motion and group sparsity

Jungchan Cho; Minsik Lee; Hyung Jin Chang; Songhwai Oh

Recognizing actions in a video is a critical step in making many vision-based applications possible and has attracted much attention recently. However, action recognition in a video is a challenging task due to wide variations within an action, camera motion, cluttered backgrounds, and occlusions, to name a few. While dense-sampling-based approaches currently achieve state-of-the-art performance in action recognition, they do not perform well on many realistic video sequences: by considering every motion found in a video equally, their discriminative power is often reduced by clutter motion such as background changes and camera motion. In this paper, we robustly identify local motions of interest in an unsupervised manner by taking advantage of group sparsity. In order to robustly classify action types, we emphasize local motion by combining local motion descriptors with full motion descriptors, and apply group sparsity to the emphasized motion features using the multiple kernel method. In experiments, we show that different types of actions can be recognized well using a small number of selected local motion descriptors, and that the proposed algorithm achieves state-of-the-art performance on popular benchmark datasets, outperforming existing methods. We also demonstrate that the group sparse representation with the multiple kernel method can dramatically improve action recognition performance.
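
An illustrative sketch of the group-sparsity idea from the abstract: a group-lasso proximal step that keeps only a few descriptor groups (the local motions of interest) and zeroes out weak, cluttered ones. The group sizes and regularisation weight are made up for the demo.

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||x_g||_2."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]   # shrink the whole group
        # else: the group is dropped entirely (set to zero)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=12)
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]  # 3 motion-descriptor groups
x[4:8] *= 0.1                                      # a weak "clutter" group
selected = group_soft_threshold(x, groups, lam=1.0)
print(selected)   # the weak group collapses to zero; informative groups survive
```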


Computer Vision and Pattern Recognition | 2016

Visual Tracking Using Attention-Modulated Disintegration and Integration

Jongwon Choi; Hyung Jin Chang; Jiyeoup Jeong; Yiannis Demiris; Jin Young Choi

In this paper, we present a novel attention-modulated visual tracking algorithm that decomposes an object into multiple cognitive units and trains multiple elementary trackers in order to modulate the distribution of attention according to various feature and kernel types. In the integration stage, it recombines the units to memorize and recognize the target object effectively. With respect to the elementary trackers, we present a novel attentional feature-based correlation filter (AtCF) that focuses on distinctive attentional features. The effectiveness of the proposed algorithm is validated through experimental comparison with state-of-the-art methods on widely used tracking benchmark datasets.
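
A toy sketch of the integration stage, under the assumption (mine, not the paper's) that each elementary tracker outputs a response map and the attention weights simply reweight those maps before taking a peak.

```python
import numpy as np

def integrate_responses(responses, attention):
    """Weighted sum of per-tracker response maps, then take the peak location."""
    attention = np.asarray(attention, dtype=float)
    attention /= attention.sum()                      # normalise the weights
    combined = sum(w * r for w, r in zip(attention, responses))
    return np.unravel_index(np.argmax(combined), combined.shape)

rng = np.random.default_rng(1)
responses = [rng.random((32, 32)) for _ in range(3)]  # e.g. per-feature/kernel units
responses[1][10, 20] = 5.0                            # a confident elementary tracker
print(integrate_responses(responses, attention=[0.2, 0.6, 0.2]))  # -> (10, 20)
```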


IEEE Transactions on Visualization and Computer Graphics | 2015

3D Finger CAPE: Clicking Action and Position Estimation under Self-Occlusions in Egocentric Viewpoint

Youngkyoon Jang; Seung-Tak Noh; Hyung Jin Chang; Tae-Kyun Kim; Woontack Woo

In this paper we present a novel framework for the simultaneous detection of click actions and estimation of occluded fingertip positions from egocentrically viewed single-depth image sequences. For the detection and estimation, a novel probabilistic inference based on knowledge priors of the clicking motion and clicked position is presented. Based on the detection and estimation results, we were able to achieve a fine level of resolution for bare-hand interaction with virtual objects from an egocentric viewpoint. Our contributions include: (i) rotation- and translation-invariant finger clicking action and position estimation using the combination of 2D image-based fingertip detection with 3D hand posture estimation in an egocentric viewpoint; (ii) a novel spatio-temporal random forest, which performs the detection and estimation efficiently in a single framework. We also present (iii) a selection process utilizing the proposed clicking action detection and position estimation in an arm-reachable AR/VR space, which does not require any additional device. Experimental results show that the proposed method delivers promising performance under frequent self-occlusions when selecting objects in AR/VR space whilst wearing an HMD with an attached egocentric depth camera.
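
A toy sketch of the probabilistic fusion described above: combining a clicking-motion likelihood with a clicked-position prior to decide whether a click happened. The Gaussian forms, parameters, and feature names are illustrative assumptions, not the paper's learned priors.

```python
import numpy as np

def click_posterior(motion_feature, fingertip_depth,
                    motion_mu=1.0, motion_sigma=0.3,
                    depth_mu=0.4, depth_sigma=0.1, prior=0.1):
    """P(click | motion, position) under independent Gaussian likelihoods."""
    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    like_click = gauss(motion_feature, motion_mu, motion_sigma) * \
                 gauss(fingertip_depth, depth_mu, depth_sigma)
    like_rest = gauss(motion_feature, 0.0, motion_sigma) * \
                gauss(fingertip_depth, 0.8, depth_sigma)
    num = like_click * prior
    return num / (num + like_rest * (1 - prior))

print(click_posterior(motion_feature=0.95, fingertip_depth=0.42))  # near 1: a click
print(click_posterior(motion_feature=0.05, fingertip_depth=0.80))  # near 0: no click
```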


Computer Vision and Image Understanding | 2012

Robust moving object detection against fast illumination change

JinMin Choi; Hyung Jin Chang; Yung Jun Yoo; Jin Young Choi

To solve the problems caused by fast illumination changes in a visual surveillance system, we propose a novel moving object detection algorithm built on an illumination change model, a chromaticity difference model, and a brightness ratio model. When a fast illumination change occurs, background pixels as well as moving object pixels are detected as foreground pixels. To separate the detected foreground pixels into moving object pixels and false foreground pixels, we develop the chromaticity difference model and the brightness ratio model, which estimate the intensity difference and intensity ratio of false foreground pixels, respectively. Both models are derived from the proposed illumination change model. Experimental results show that the proposed method performs excellently under various illumination change conditions while operating in real time.
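
A minimal sketch of the brightness-ratio idea (the paper's actual models are derived from its illumination change model; the tolerance below is an illustrative assumption): pixels whose intensity changed by a roughly uniform ratio are treated as false foreground caused by the illumination change, leaving only genuinely moving pixels.

```python
import numpy as np

def filter_false_foreground(frame, background, fg_mask, tol=0.15):
    ratio = frame.astype(float) / np.maximum(background.astype(float), 1.0)
    global_ratio = np.median(ratio[fg_mask])     # dominant illumination ratio
    consistent = np.abs(ratio - global_ratio) < tol * global_ratio
    return fg_mask & ~consistent                 # keep only true moving pixels

bg = np.full((4, 4), 100.0)
frame = bg * 1.5                                 # sudden global brightening
frame[1, 1] = 30.0                               # a genuinely moving object
fg = np.ones((4, 4), dtype=bool)                 # naive detector fires everywhere
print(filter_false_foreground(frame, bg, fg))    # only (1, 1) remains True
```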


Intelligent Robots and Systems | 2015

User modelling for personalised dressing assistance by humanoid robots

Yixing Gao; Hyung Jin Chang; Yiannis Demiris

Assistive robots can improve the well-being of disabled or frail human users by reducing the burden that activities of daily living impose on them. To enable personalised assistance, such robots benefit from building a user-specific model, so that the assistance is customised to the particular set of user abilities. In this paper, we present an end-to-end approach for home-environment assistive humanoid robots to provide personalised assistance through a dressing application for users who have upper-body movement limitations. We use randomised decision forests to estimate the upper-body pose of users captured by a top-view depth camera, and model the movement space of upper-body joints using Gaussian mixture models. The movement space of each upper-body joint consists of regions with different reaching capabilities. We propose a method, based on the real-time upper-body pose and the user models, to plan robot motions for assistive dressing. We validate each part of our approach and test the whole system, allowing a Baxter humanoid robot to assist a human in putting on a sleeveless jacket.
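
A minimal sketch of the user-modelling step, assuming scikit-learn is available: fit a Gaussian mixture model to observed 3D positions of one upper-body joint, so that high-density regions correspond to poses the user can reach comfortably. The data and the number of components are made up for the demo.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Simulated joint positions: a comfortable region and a strained one.
comfortable = rng.normal([0.3, 0.2, 0.9], 0.02, size=(200, 3))
strained = rng.normal([0.5, 0.5, 1.2], 0.05, size=(40, 3))
samples = np.vstack([comfortable, strained])

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)

# A robot can score a candidate dressing pose by its log-likelihood
# under the user's model and prefer high-likelihood (reachable) poses.
candidate_poses = np.array([[0.3, 0.2, 0.9], [0.8, 0.9, 1.5]])
print(gmm.score_samples(candidate_poses))  # the first pose scores far higher
```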


IEEE Transactions on Consumer Electronics | 2009

Optical image stabilizing system using multirate fuzzy PID controller for mobile device camera

Hyung Jin Chang; Pyo Jae Kim; Dong Sung Song; Jin Young Choi

A new optical image stabilizing system for a small mobile device camera is presented. A gyro sensor is used to detect the amount of shaking, and a charge-coupled device (CCD) is shifted to correct the deviated optical axis using a voice coil motor (VCM). Because the VCM is nonlinear, unstable, and time-varying, a new adaptive control technique, multirate fuzzy PID control, is proposed. Our new method is capable of providing improved control with low power consumption. We show clear, stabilized results for a variety of digital photographs taken under vibration.
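
An illustrative sketch of a fuzzy-gain-scheduled PID step run at two rates, loosely following the multirate idea above: the fuzzy gain update runs slower than the PID loop. The membership functions, gains, and rates are assumptions for the demo, not the paper's tuned controller.

```python
def fuzzy_gain(error):
    """Interpolate Kp between 'small error' and 'large error' rules."""
    e = min(abs(error), 1.0)
    small, large = 1.0 - e, e          # triangular membership degrees
    return small * 0.5 + large * 2.0   # defuzzified proportional gain

def pid_step(error, state, kp, ki=0.1, kd=0.05, dt=0.001):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

state = (0.0, 0.0)
kp = fuzzy_gain(0.0)
for step in range(1, 6):
    error = 0.8 / step                  # shrinking optical-axis deviation
    if step % 2 == 0:                   # slower rate: refresh the fuzzy gain
        kp = fuzzy_gain(error)
    u, state = pid_step(error, state, kp)
    print(f"step {step}: Kp={kp:.2f}, u={u:.2f}")
```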


Computer Vision and Pattern Recognition | 2017

Attentional Correlation Filter Network for Adaptive Visual Tracking

Jongwon Choi; Hyung Jin Chang; Sangdoo Yun; Tobias Fischer; Yiannis Demiris; Jin Young Choi

We propose a new tracking framework with an attentional mechanism that chooses a subset of the associated correlation filters for increased robustness and computational efficiency. The subset of filters is adaptively selected by a deep attentional network according to the dynamic properties of the tracking target. Our contributions are manifold, and are summarised as follows: (i) introducing the Attentional Correlation Filter Network, which allows adaptive tracking of dynamic targets; (ii) utilising an attentional network which shifts the attention to the best candidate modules, as well as predicting the estimated accuracy of currently inactive modules; (iii) enlarging the variety of correlation filters to cover target drift, blurriness, occlusion, scale changes, and flexible aspect ratios; (iv) validating the robustness and efficiency of the attentional mechanism for visual tracking through a number of experiments. Our method achieves similar performance to non-real-time trackers, and state-of-the-art performance amongst real-time trackers.
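
A minimal sketch of the subset-selection idea only: the real method uses a deep attentional network, whereas here a hypothetical per-module accuracy score picks the top-k filter modules to run on the current frame while the rest stay inactive to save computation.

```python
import numpy as np

def select_modules(predicted_accuracy, k=2):
    """Return indices of the k modules the attention scores highest."""
    order = np.argsort(predicted_accuracy)[::-1]
    return sorted(order[:k].tolist())

modules = ["translation", "scale", "blur", "occlusion", "aspect-ratio"]
# Stand-in for the attentional network's per-module accuracy estimates.
scores = np.array([0.9, 0.4, 0.7, 0.2, 0.3])
active = select_modules(scores, k=2)
print([modules[i] for i in active])   # only these filters run this frame
```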


Computer Vision and Pattern Recognition | 2012

Active attentional sampling for speed-up of background subtraction

Hyung Jin Chang; Hawook Jeong; Jin Young Choi

In this paper, we present an active sampling method to speed up conventional pixel-wise background subtraction algorithms. The proposed active sampling strategy is designed to focus on attentional regions such as foreground regions. The attentional region is estimated from the detection results of the previous frame in a recursive probabilistic way. For this estimation, we propose a foreground probability map based on the temporal, spatial, and frequency properties of foregrounds. Using this foreground probability map, an active attentional sampling scheme is developed to produce a minimal sampling mask covering almost all foreground pixels. The effectiveness of the proposed active sampling method is shown through various experiments. The proposed masking method successfully speeds up pixel-wise background subtraction methods by approximately 6.6 times without deteriorating detection performance. Real-time detection on Full HD video is also achieved with various conventional background subtraction algorithms.
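
A minimal sketch of the foreground probability map idea, covering only the temporal and spatial properties (the frequency term is omitted, and the decay, dilation, and threshold values are illustrative): blend the previous frame's detection with a temporal decay, spread it spatially, and sample only pixels whose probability is high enough.

```python
import numpy as np

def dilate3x3(p):
    """Spatial spread: a 3x3 max filter implemented with shifted views."""
    padded = np.pad(p, 1, mode="edge")
    stacks = [padded[i:i + p.shape[0], j:j + p.shape[1]]
              for i in range(3) for j in range(3)]
    return np.max(stacks, axis=0)

def sampling_mask(prob, detection, decay=0.7, threshold=0.2):
    prob = decay * prob + (1 - decay) * detection.astype(float)  # temporal
    prob = dilate3x3(prob)                                       # spatial
    return prob, prob > threshold      # run the detector only where True

prob = np.zeros((6, 6))
detection = np.zeros((6, 6), dtype=bool)
detection[2, 2] = True                  # foreground found last frame
prob, mask = sampling_mask(prob, detection)
print(mask.astype(int))                 # attention concentrates near (2, 2)
```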

Collaboration


Dive into Hyung Jin Chang's collaborations.

Top Co-Authors

Jin Young Choi
Seoul National University

Tae-Kyun Kim
Imperial College London

Danhang Tang
Imperial College London

Hawook Jeong
Seoul National University

Kwang Moo Yi
Seoul National University

Pyo Jae Kim
Seoul National University