
Publication


Featured research published by Chang-Woo Shin.


IEEE Transactions on Neural Networks and Learning Systems | 2014

Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas

Jun Haeng Lee; Tobi Delbruck; Michael Pfeiffer; Paul K. J. Park; Chang-Woo Shin; Hyunsurk Ryu; Byung Chang Kang

We propose a real-time hand gesture interface based on combining a stereo pair of biologically inspired event-based dynamic vision sensor (DVS) silicon retinas with neuromorphic event-driven postprocessing. Compared with conventional vision or 3-D sensors, the use of DVSs, which output asynchronous and sparse events in response to motion, eliminates the need to extract movements from sequences of video frames, and allows significantly faster and more energy-efficient processing. In addition, the rate of input events depends on the observed movements, and thus provides an additional cue for solving the gesture spotting problem, i.e., finding the onsets and offsets of gestures. We propose a postprocessing framework based on spiking neural networks that can process the events received from the DVSs in real time, and provides an architecture for future implementation in neuromorphic hardware devices. The motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the DVSs by using leaky integrate-and-fire (LIF) neurons. Adaptive thresholds of the LIF neurons achieve the segmentation of trajectories, which are then translated into discrete and finite feature vectors. The feature vectors are classified with hidden Markov models, using a separate Gaussian mixture model for spotting irrelevant transition gestures. The disparity information from stereovision is used to adapt LIF neuron parameters to achieve recognition invariant of the distance of the user to the sensor, and also helps to filter out movements in the background of the user. Exploiting the high dynamic range of DVSs, furthermore, allows gesture recognition over a 60-dB range of scene illuminance. The system achieves recognition rates well over 90% under a variety of variable conditions with static and dynamic backgrounds with naïve users.
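The event-driven postprocessing described above hinges on leaky integrate-and-fire (LIF) neurons that spike when incoming events are spatiotemporally correlated. As a minimal sketch only, not the authors' implementation (the time constant and threshold values are assumptions), a single LIF unit can be written as:

```python
# Minimal LIF-neuron sketch for correlating DVS events: integrate events
# with an exponential leak and spike when activity clusters in time.
import math

class LIFNeuron:
    def __init__(self, tau=0.02, threshold=3.0):
        self.tau = tau              # membrane time constant in seconds (assumed)
        self.threshold = threshold  # spike threshold (assumed)
        self.potential = 0.0
        self.last_t = 0.0

    def receive(self, t, weight=1.0):
        # Decay the potential since the last event, then integrate the new one.
        self.potential *= math.exp(-(t - self.last_t) / self.tau)
        self.potential += weight
        self.last_t = t
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after spiking
            return True             # spike: dense, correlated activity here
        return False

# Usage: feed (t, x, y) DVS events into a grid of such neurons; a spike at
# (x, y) marks a location with spatiotemporally correlated motion events.
```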


International Symposium on Circuits and Systems | 2012

Live demonstration: Gesture-based remote control using stereo pair of dynamic vision sensors

Jun Haeng Lee; Tobi Delbruck; Paul K. J. Park; Michael Pfeiffer; Chang-Woo Shin; Hyunsurk Ryu; Byung Chang Kang

This demonstration shows a natural gesture interface for console entertainment devices using as input a stereo pair of dynamic vision sensors. The event-based processing of the sparse sensor output allows fluid interaction at a laptop processor load of less than 3%.


International Conference on Image Processing | 2012

Touchless hand gesture UI with instantaneous responses

Jun Haeng Lee; Paul K. J. Park; Chang-Woo Shin; Hyunsurk Ryu; Byung Chang Kang; Tobi Delbruck

In this paper we present a simple technique for a real-time touchless hand gesture user interface (UI) for mobile devices based on a biologically inspired vision sensor, the dynamic vision sensor (DVS). The DVS detects moving objects quickly and cost-effectively by asynchronously outputting events at the edges of the object. The output events are spatiotemporally correlated using novel event-driven processing algorithms based on leaky integrate-and-fire neurons to track a fingertip or to infer the direction of hand-swipe motions. Experimental results show that the proposed technique achieves fingertip tracking at millisecond intervals, sufficient for a graphical UI, and accurate hand-swipe detection with negligible latency.
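As a hedged sketch of the general idea of inferring swipe direction from an event stream (not the paper's LIF-based algorithm; the smoothing factors and the 20-pixel drift threshold are invented for illustration):

```python
# Hypothetical swipe classifier: track a fast and a slow exponential
# moving average of event positions and read the direction from the drift
# of the fast centroid relative to the slow one.
import math

class SwipeDetector:
    def __init__(self, alpha=0.05, min_shift=20.0):
        self.alpha = alpha          # fast-EMA smoothing factor (assumed)
        self.min_shift = min_shift  # pixels of drift to call a swipe (assumed)
        self.cx = self.cy = None    # fast centroid
        self.sx = self.sy = None    # slow reference centroid

    def update(self, x, y):
        if self.cx is None:
            self.cx, self.cy = float(x), float(y)
            self.sx, self.sy = float(x), float(y)
            return None
        self.cx += self.alpha * (x - self.cx)
        self.cy += self.alpha * (y - self.cy)
        self.sx += 0.1 * self.alpha * (x - self.sx)
        self.sy += 0.1 * self.alpha * (y - self.sy)
        dx, dy = self.cx - self.sx, self.cy - self.sy
        if math.hypot(dx, dy) < self.min_shift:
            return None             # not enough coherent motion yet
        # Image y grows downward, so positive dy is a downward swipe.
        return ("right" if dx > 0 else "left") if abs(dx) > abs(dy) \
               else ("down" if dy > 0 else "up")
```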


IEEE Global Conference on Consumer Electronics | 2012

Four DoF gesture recognition with an event-based image sensor

Kyoobin Lee; Hyunsurk Ryu; Seung-Kwon Park; Jun Haeng Lee; Paul K. J. Park; Chang-Woo Shin; Jooyeon Woo; Tae-Chan Kim; Byung Chang Kang

An algorithm is developed to recognize four-degree-of-freedom gestures using an event-based image sensor. The gesture motion comprises three translations and one rotation. Each pixel of the event-based image sensor produces an event when the temporal intensity change exceeds a pre-defined value. From the time-stamps of the events, a map of pseudo optical flow is calculated, and the proposed algorithm recognizes gestures from this flow, providing not only the direction but also the magnitude of velocity. The algorithm is efficient in both memory and computation because it uses only the current time-stamp map and local operations, which makes it well suited to mobile devices and on-chip implementation.
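The timestamp-map idea can be sketched as follows: each pixel stores the time of its most recent event, and the spatial gradient of that timestamp surface encodes how fast an edge swept across the array (speed ≈ 1/|∇t| along the gradient direction). A minimal illustration, with an assumed freshness window rather than the paper's parameters:

```python
# Pseudo optical flow from a per-pixel timestamp map: v = grad(t) / |grad(t)|^2
# gives the local edge velocity in pixels per second.
import numpy as np

def pseudo_flow(ts_map, t_now, window=0.05):
    """ts_map: HxW array of latest event timestamps (s), t_now: current time.
    Returns (vx, vy) arrays; stale or flat pixels get zero flow."""
    gy, gx = np.gradient(ts_map)           # dt/dy, dt/dx in s per pixel
    g2 = gx**2 + gy**2
    fresh = (t_now - ts_map) < window      # only recently active pixels (assumed window)
    valid = fresh & (g2 > 1e-12)
    vx = np.where(valid, gx / np.maximum(g2, 1e-12), 0.0)
    vy = np.where(valid, gy / np.maximum(g2, 1e-12), 0.0)
    return vx, vy
```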


International Solid-State Circuits Conference | 2017

4.1 A 640×480 dynamic vision sensor with a 9µm pixel and 300Meps address-event representation

Bongki Son; Yunjae Suh; Sungho Kim; Heejae Jung; Jun-Seok Kim; Chang-Woo Shin; Keunju Park; Kyoobin Lee; Jin Man Park; Jooyeon Woo; Yohan J. Roh; Hyunku Lee; Yibing Michelle Wang; Ilia Ovsiannikov; Hyunsurk Ryu

We report a VGA dynamic vision sensor (DVS) with a 9µm pixel, developed through a digital as well as an analog implementation. DVS systems in the literature try to increase spatial resolution up to QVGA [1–2] and data rates up to 50 million events per second (Meps) (self-acknowledged) [3], but they are still inadequate for high-performance applications such as gesture recognition, drones, automotive, etc. Moreover, the smallest reported pixel of 18.5µm is too large for economical mass production [3]. This paper reports a 640×480 VGA-resolution DVS system with a 9µm pixel pitch supporting a data rate of 300Meps for sufficient event transfer in spite of higher resolution. Maintaining acceptable pixel performance, the pixel circuitry is carefully designed and optimized using a BSI CIS process. To acquire data (i.e., pixel events) at high speed even with high resolution (e.g., VGA), a fully synthesized word-serial group address-event representation (G-AER) is implemented, which handles massive events in parallel by binding neighboring 8 pixels into a group. In addition, a 10b programmable bias generator dedicated to a DVS system provides easy controllability of pixel biases and event thresholds.
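The abstract does not spell out the G-AER word format, so the following decoder is purely illustrative: it assumes one plausible packing in which each word carries a group address plus an 8-bit occupancy bitmap for the 8 pixels bound into a group (640 px / 8 = 80 groups per row). The layout is a guess for illustration, not the published format.

```python
# Hypothetical decoder for a word-serial group-AER event word:
# [group_addr : upper bits][bitmap : low 8 bits].
def decode_gaer_word(word):
    """Returns the (x, y) coordinates of the pixels that fired in this group,
    under the assumed word layout described above."""
    bitmap = word & 0xFF          # which of the 8 grouped pixels fired
    group = word >> 8             # linear group index across the array
    base_x = (group % 80) * 8     # 80 groups of 8 pixels per 640-px row
    y = group // 80
    return [(base_x + i, y) for i in range(8) if bitmap & (1 << i)]
```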


International Conference on Image Processing | 2016

Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique

Paul K. J. Park; Baek Hwan Cho; Jin Man Park; Kyoobin Lee; Ha Young Kim; Hyo A Kang; Hyun Goo Lee; Jooyeon Woo; Yohan J. Roh; Won Jo Lee; Chang-Woo Shin; Qiang Wang; Hyunsurk Ryu

We propose a novel method for the demosaicing of event-based images that offers a substantial performance improvement in far-distance gesture recognition based on a deep convolutional neural network. Unlike the conventional demosaicing technique, which uses spatial color interpolation of Bayer patterns, our new approach exploits the spatiotemporal correlation between pixel arrays, whereby timestamps of high-resolution pixels are efficiently generated in real time from the event data. In this paper, we describe this new method and evaluate its performance on a hand motion recognition task.
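As a hedged illustration of spatiotemporal interpolation of event timestamps (not the authors' published algorithm; the 4-neighborhood and recency weighting are assumptions), missing high-resolution timestamps could be filled from valid neighbors like so:

```python
# Fill empty pixels of a sparse timestamp map with the recency-weighted
# mean of their valid 4-neighbors; recent events get larger weights.
import numpy as np

def demosaic_timestamps(ts_sparse, mask, t_now, tau=0.01):
    """ts_sparse: HxW float timestamps, valid where mask is True.
    tau: recency time constant in seconds (assumed)."""
    out = ts_sparse.copy()
    H, W = ts_sparse.shape
    for y in range(H):
        for x in range(W):
            if mask[y, x]:
                continue
            num = den = 0.0
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W and mask[ny, nx]:
                    w = np.exp(-(t_now - ts_sparse[ny, nx]) / tau)
                    num += w * ts_sparse[ny, nx]
                    den += w
            out[y, x] = num / den if den > 0 else 0.0
    return out
```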


International Conference on Image Processing | 2015

Computationally efficient, real-time motion recognition based on bio-inspired visual and cognitive processing

Paul K. J. Park; Kyoobin Lee; Jun Haeng Lee; Byungkon Kang; Chang-Woo Shin; Jooyeon Woo; Jun-Seok Kim; Yunjae Suh; Sungho Kim; Saber Moradi; Ogan Gurel; Hyunsurk Ryu

We propose a novel method for identifying and classifying motions that offers significantly reduced computational cost as compared to deep convolutional neural network systems with comparable performance. Our new approach is inspired by the information processing network architecture of biological visual processing systems, whereby spatial pyramid kernel features are efficiently extracted in real-time from temporally-differentiated image data. In this paper, we describe this new method and evaluate its performance with a hand motion gesture recognition task.
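A minimal sketch of spatial-pyramid feature extraction over a temporally-differenced frame (an assumed pipeline; the paper's kernel computation is not reproduced here):

```python
# Tile a motion mask at several pyramid levels and concatenate per-cell
# activity fractions into one feature vector.
import numpy as np

def pyramid_features(prev, curr, levels=(1, 2, 4), thresh=15):
    """prev, curr: grayscale frames as numpy arrays.
    thresh: temporal-difference threshold (assumed value)."""
    motion = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    H, W = motion.shape
    feats = []
    for L in levels:                       # L x L grid at each pyramid level
        hs, ws = H // L, W // L
        for i in range(L):
            for j in range(L):
                cell = motion[i*hs:(i+1)*hs, j*ws:(j+1)*ws]
                feats.append(cell.mean())  # fraction of active pixels
    return np.asarray(feats)               # length 1 + 4 + 16 = 21 for (1, 2, 4)
```

A classifier (e.g., an SVM with a kernel over these vectors) would then operate on far fewer inputs than a deep network sees, which is where the computational savings come from.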


International Conference on Image Processing | 2014

Real-time motion estimation based on event-based vision sensor

Jun Haeng Lee; Kyoobin Lee; Hyunsurk Ryu; Paul K. J. Park; Chang-Woo Shin; Jooyeon Woo; Jun-Seok Kim

Fast and efficient motion estimation is essential for a number of applications, including gesture-based user interfaces (UI) for portable devices such as smartphones. In this paper, we propose a highly efficient method that estimates the four degree-of-freedom (DOF) motion components of a moving object using an event-based vision sensor, the dynamic vision sensor (DVS). The proposed method finds informative events occurring at edges and estimates their velocities for global motion analysis. We also describe a novel method to correct for the aperture problem in the motion estimation.
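One standard way to correct for the aperture problem (the paper's exact method is not reproduced here) is to note that each edge event constrains only the velocity component along its local gradient direction n, so pooling many such constraints into a least-squares system recovers the full global velocity:

```python
# Solve n_i . v = s_i over many edge events for the global motion (vx, vy):
# each event contributes only its normal-flow component, but together the
# constraints from differently oriented edges pin down the full velocity.
import numpy as np

def global_velocity(normals, normal_speeds):
    """normals: Nx2 unit gradient directions; normal_speeds: N measured
    speeds along those directions (px/s). Returns (vx, vy)."""
    A = np.asarray(normals, dtype=float)        # N x 2 constraint matrix
    b = np.asarray(normal_speeds, dtype=float)  # N measured normal speeds
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```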


Archive | 2014

METHOD AND APPARATUS FOR USER INTERFACE BASED ON GESTURE

Chang-Woo Shin; Hyun Surk Ryu; Jooyeon Woo


Archive | 2013

EVENT-BASED IMAGE PROCESSING APPARATUS AND METHOD

Kyoobin Lee; Hyun Surk Ryu; Jun Haeng Lee; Keun Joo Park; Chang-Woo Shin; Jooyeon Woo

Collaboration


Dive into Chang-Woo Shin's collaboration network.

Top Co-Authors

Hyun Surk Ryu
Pohang University of Science and Technology