
Publication


Featured research published by Jun Haeng Lee.


Frontiers in Neuroscience | 2016

Training Deep Spiking Neural Networks Using Backpropagation

Jun Haeng Lee; Tobi Delbruck; Michael Pfeiffer

Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
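
The sketch below is a minimal illustration of the paper's core idea, not the authors' formulation: error gradients are passed through the spike threshold as if the membrane potential were a continuous signal, with the discontinuity at spike times treated as noise. The layer sizes, the reset-by-subtraction rule, and the straight-through-style gradient approximation are all assumptions made for brevity.

```python
# Minimal sketch (not the authors' exact formulation): backpropagation through a
# single spiking layer by treating the membrane potential as a differentiable
# signal and ignoring the discontinuity at spike times.
import numpy as np

rng = np.random.default_rng(0)

T, n_in, n_out = 20, 100, 10          # timesteps, input neurons, output neurons
threshold = 1.0
W = 0.1 * rng.standard_normal((n_in, n_out))

def forward(spikes_in):
    """spikes_in: (T, n_in) binary input spike trains."""
    v = np.zeros(n_out)               # membrane potentials
    spikes_out = np.zeros((T, n_out))
    for t in range(T):
        v = v + spikes_in[t] @ W      # integrate weighted input spikes
        fired = v >= threshold
        spikes_out[t] = fired
        v[fired] -= threshold         # reset by subtraction after a spike
    return spikes_out

def backward(spikes_in, grad_spike_count):
    """Straight-through-style gradient: the output spike count is treated as an
    approximately linear function of the accumulated, weighted input, so the
    error is propagated as if no threshold discontinuity were present."""
    total_in = spikes_in.sum(axis=0)               # (n_in,) accumulated input spikes
    return np.outer(total_in, grad_spike_count)    # approximate d(count)/dW

# Toy usage: push the output spike counts toward a target with one gradient step.
spikes_in = (rng.random((T, n_in)) < 0.1).astype(float)
out = forward(spikes_in)
target = np.zeros(n_out); target[3] = 5.0
grad = out.sum(axis=0) - target
W -= 0.01 * backward(spikes_in, grad)
```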


IEEE Transactions on Neural Networks | 2014

Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas

Jun Haeng Lee; Tobi Delbruck; Michael Pfeiffer; Paul K. J. Park; Chang-Woo Shin; Hyunsurk Ryu; Byung Chang Kang

We propose a real-time hand gesture interface based on combining a stereo pair of biologically inspired event-based dynamic vision sensor (DVS) silicon retinas with neuromorphic event-driven postprocessing. Compared with conventional vision or 3-D sensors, the use of DVSs, which output asynchronous and sparse events in response to motion, eliminates the need to extract movements from sequences of video frames, and allows significantly faster and more energy-efficient processing. In addition, the rate of input events depends on the observed movements, and thus provides an additional cue for solving the gesture spotting problem, i.e., finding the onsets and offsets of gestures. We propose a postprocessing framework based on spiking neural networks that can process the events received from the DVSs in real time, and provides an architecture for future implementation in neuromorphic hardware devices. The motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the DVSs by using leaky integrate-and-fire (LIF) neurons. Adaptive thresholds of the LIF neurons achieve the segmentation of trajectories, which are then translated into discrete and finite feature vectors. The feature vectors are classified with hidden Markov models, using a separate Gaussian mixture model for spotting irrelevant transition gestures. The disparity information from stereovision is used to adapt LIF neuron parameters to achieve recognition invariant of the distance of the user to the sensor, and also helps to filter out movements in the background of the user. Exploiting the high dynamic range of DVSs, furthermore, allows gesture recognition over a 60-dB range of scene illuminance. The system achieves recognition rates well over 90% under a variety of variable conditions with static and dynamic backgrounds with naïve users.
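
As a rough illustration of the event-driven LIF correlation idea described above, the sketch below lets each grid cell act as a leaky integrate-and-fire neuron that decays between events and fires when enough events arrive close together in space and time. The grid size, time constant, and threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (parameter values are illustrative, not from the paper): an
# event-driven leaky integrate-and-fire grid that spatiotemporally correlates
# DVS events, firing where enough events arrive close together in space and time.
import numpy as np

class LIFGrid:
    def __init__(self, shape=(32, 32), tau=0.02, threshold=3.0):
        self.v = np.zeros(shape)          # membrane potential per grid cell
        self.t_last = np.zeros(shape)     # time of last update per cell
        self.tau = tau                    # leak time constant (seconds)
        self.threshold = threshold

    def process(self, x, y, t):
        """Process one DVS event at grid cell (x, y) and timestamp t (seconds).
        Returns True if the cell fires, i.e. the local event density is high."""
        dt = t - self.t_last[y, x]
        self.v[y, x] *= np.exp(-dt / self.tau)   # leak since the last event
        self.v[y, x] += 1.0                      # integrate the new event
        self.t_last[y, x] = t
        if self.v[y, x] >= self.threshold:
            self.v[y, x] = 0.0                   # reset after firing
            return True
        return False

# Toy usage: a burst of events at one location makes that cell fire.
grid = LIFGrid()
events = [(10, 12, 0.000), (10, 12, 0.002), (11, 12, 0.004),
          (10, 12, 0.006), (10, 12, 0.008)]
for x, y, t in events:
    if grid.process(x, y, t):
        print(f"hand activity detected near ({x}, {y}) at t={t:.3f}s")
```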


IEEE Transactions on Industrial Electronics | 2015

Proximity Sensing Based on a Dynamic Vision Sensor for Mobile Devices

Jae-Yeon Won; Hyunsurk Ryu; Tobi Delbruck; Jun Haeng Lee; Jiang Hu

A dynamic vision sensor (DVS) detects temporal contrast of brightness and has a much faster response time than conventional frame-based sensors, which measure static brightness once per frame. This fast response enables quick motion recognition, a very attractive capability for consumers, and the event-based processing keeps power consumption low, a key feature for mobile applications. Recent touch-screen smartphones include a proximity sensor to prevent malfunctions caused by unintended skin contact during calls; when an object is close to the sensor, the main processor disables the touch screen and turns off the display to minimize power consumption. Given the importance of power consumption and reliable operation, proximity sensing is an essential part of a touch-screen smartphone. In this paper, a proximity sensing design based on a DVS is proposed. It estimates the distance from the DVS to an object by analyzing the spatial information of the reflection of an additional light source, and it uses pattern recognition based on time-domain analysis of the reflection, while the light source is turned on, to avoid false proximity detections caused by noise such as other light sources and motion. The contributions of the proposed design are threefold. First, it calculates an accurate distance in real time using only the spatial information of the reflection. Second, it can reject environmental noise through pattern matching based on time-domain analysis, whereas conventional optical proximity sensors, which are widely used in smartphones, are very sensitive to environmental noise because they rely on the total amount of brightness over a certain period. Third, it can replace conventional proximity sensors while retaining the additional advantages of the DVS.
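
The sketch below illustrates one plausible way to turn the spatial position of the light-source reflection into a distance estimate; the triangulation geometry, baseline, and focal length are assumptions for illustration and not taken from the paper.

```python
# Minimal sketch (geometry and numbers are illustrative assumptions, not from the
# paper): estimating object distance from the pixel position of an emitter's
# reflection seen by the DVS, using plain triangulation between the light source
# and the sensor.
import numpy as np

BASELINE_M = 0.01       # assumed separation between LED and DVS optical axis (m)
FOCAL_PX = 200.0        # assumed lens focal length expressed in pixels
CENTER_PX = 64.0        # pixel column of the optical axis

def distance_from_reflection(reflection_col_px):
    """Distance to the reflecting object from the horizontal pixel offset of the
    reflection: closer objects shift the reflection further from the optical axis."""
    disparity_px = abs(reflection_col_px - CENTER_PX)
    if disparity_px < 1e-6:
        return np.inf                 # reflection on-axis: object effectively far away
    return BASELINE_M * FOCAL_PX / disparity_px

def reflection_column(event_cols):
    """Locate the reflection as the median column of events fired while the LED
    is on, which is robust to a few noise events from ambient motion."""
    return float(np.median(event_cols))

# Toy usage: events clustered around column 84 imply an object roughly 10 cm away.
cols = np.array([83, 84, 84, 85, 84, 86, 83])
print(f"estimated distance: {distance_from_reflection(reflection_column(cols)):.3f} m")
```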


International Conference on Image Processing | 2012

Touchless hand gesture UI with instantaneous responses

Jun Haeng Lee; Paul K. J. Park; Chang-Woo Shin; Hyunsurk Ryu; Byung Chang Kang; Tobi Delbruck

In this paper we present a simple technique for a real-time touchless hand gesture user interface (UI) for mobile devices based on a biologically inspired vision sensor, the dynamic vision sensor (DVS). The DVS detects a moving object in a fast and cost-effective way by asynchronously outputting events at the edges of the object. The output events are spatiotemporally correlated using novel event-driven processing algorithms based on leaky integrate-and-fire neurons to track a fingertip or to infer the directions of hand swipe motions. The experimental results show that the proposed technique achieves fingertip tracking at millisecond intervals, suitable for graphical UIs, and accurate hand swipe motion detection with negligible latency.
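
For the swipe-direction part of such an interface, a very reduced illustration (not the paper's algorithm) is to compare where the bulk of DVS events sits early versus late in the gesture; the window split and coordinate convention below are assumptions.

```python
# Minimal sketch (not the paper's exact algorithm): inferring the direction of a
# hand swipe from how the centroid of recent DVS events moves between an early
# and a late time window. Image coordinates: y grows downward.
import numpy as np

def swipe_direction(events, t_split):
    """events: array of (x, y, t) rows; t_split divides the gesture into an early
    and a late window. Returns one of 'left', 'right', 'up', 'down'."""
    events = np.asarray(events, dtype=float)
    early = events[events[:, 2] < t_split][:, :2].mean(axis=0)
    late = events[events[:, 2] >= t_split][:, :2].mean(axis=0)
    dx, dy = late - early
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Toy usage: events drifting toward larger x are reported as a rightward swipe.
events = [(10, 20, 0.00), (12, 21, 0.01), (30, 20, 0.05), (33, 19, 0.06)]
print(swipe_direction(events, t_split=0.03))   # -> "right"
```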


IEEE Global Conference on Consumer Electronics | 2012

Four DoF gesture recognition with an event-based image sensor

Kyoobin Lee; Hyunsurk Ryu; Seung-Kwon Park; Jun Haeng Lee; Paul-K Park; Chang-Woo Shin; Jooyeon Woo; Tae-Chan Kim; Byung-Chang Kang

An algorithm that recognizes four-degree-of-freedom gestures using an event-based image sensor is developed. The gesture motion includes three translations and one rotation. Each pixel of the event-based image sensor produces an event when the temporal intensity change exceeds a predefined value. From the time-stamps of the events, a map of pseudo optical flow is calculated, and the proposed algorithm recognizes gestures based on this flow, providing not only the directions but also the magnitudes of velocity. The algorithm is memory- and computationally efficient because it uses only the current time-stamp map and local computation, which facilitates applications on mobile devices or on-chip implementations.
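
A minimal sketch of the pseudo optical flow idea, with implementation details assumed rather than taken from the paper: the spatial gradient of the per-pixel time-stamp map approximates the inverse of the local velocity, so inverting it yields a flow estimate.

```python
# Minimal sketch (implementation details are assumptions, not the paper's code):
# pseudo optical flow from a per-pixel time-stamp map. The spatial gradient of
# event timestamps approximates the inverse of the local velocity, so inverting
# it gives a flow estimate in pixels per second.
import numpy as np

def local_flow(timestamp_map, x, y):
    """Estimate (vx, vy) at pixel (x, y) from central differences of the
    time-stamp map. Flat gradients (no recent motion along an axis) yield 0."""
    dt_dx = (timestamp_map[y, x + 1] - timestamp_map[y, x - 1]) / 2.0
    dt_dy = (timestamp_map[y + 1, x] - timestamp_map[y - 1, x]) / 2.0
    vx = 1.0 / dt_dx if abs(dt_dx) > 1e-6 else 0.0
    vy = 1.0 / dt_dy if abs(dt_dy) > 1e-6 else 0.0
    return vx, vy

# Toy usage: an edge sweeping right at 100 px/s stamps each column 0.01 s later
# than the one to its left, so the recovered flow is approximately (100, 0).
ts = np.tile(np.arange(8) * 0.01, (8, 1))   # timestamps grow along x only
print(local_flow(ts, x=4, y=4))             # -> approximately (100.0, 0.0)
```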


International Conference on Image Processing | 2015

Computationally efficient, real-time motion recognition based on bio-inspired visual and cognitive processing.

Paul K. J. Park; Kyoobin Lee; Jun Haeng Lee; Byungkon Kang; Chang-Woo Shin; Jooyeon Woo; Jun-Seok Kim; Yunjae Suh; Sungho Kim; Saber Moradi; Ogan Gurel; Hyunsurk Ryu

We propose a novel method for identifying and classifying motions that offers significantly reduced computational cost compared to deep convolutional neural network systems, at comparable performance. Our new approach is inspired by the information processing network architecture of biological visual processing systems, whereby spatial pyramid kernel features are efficiently extracted in real time from temporally-differentiated image data. In this paper, we describe this new method and evaluate its performance on a hand motion gesture recognition task.
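
The sketch below shows the general shape of spatial pyramid feature extraction over a temporally-differentiated (event-count) image; the pyramid levels, normalization, and any downstream classifier are assumptions, not the paper's configuration.

```python
# Minimal sketch (pyramid levels and the downstream classifier are assumptions):
# spatial pyramid features pooled from a temporally-differentiated image, i.e.
# a 2-D histogram of recent DVS events, concatenated over coarse-to-fine grids.
import numpy as np

def spatial_pyramid_features(event_count_img, levels=(1, 2, 4)):
    """Sum event counts over an LxL grid for each pyramid level and concatenate,
    so the feature encodes both global activity and its coarse spatial layout."""
    h, w = event_count_img.shape
    feats = []
    for level in levels:
        cell_h, cell_w = h // level, w // level
        for i in range(level):
            for j in range(level):
                cell = event_count_img[i * cell_h:(i + 1) * cell_h,
                                       j * cell_w:(j + 1) * cell_w]
                feats.append(cell.sum())
    feats = np.array(feats, dtype=float)
    return feats / (feats.max() + 1e-9)      # simple normalization

# Toy usage: a 16x16 event-count image yields a 1 + 4 + 16 = 21-dimensional feature.
img = np.random.default_rng(0).poisson(0.5, size=(16, 16))
print(spatial_pyramid_features(img).shape)   # -> (21,)
```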


International Conference on Image Processing | 2014

Real-time motion estimation based on event-based vision sensor

Jun Haeng Lee; Kyoobin Lee; Hyunsurk Ryu; Paul K. J. Park; Chang-Woo Shin; Jooyeon Woo; Jun-Seok Kim

Fast and efficient motion estimation is essential for a number of applications, including gesture-based user interfaces (UIs) for portable devices such as smartphones. In this paper, we propose a highly efficient method that estimates the four-degree-of-freedom (DOF) motion components of a moving object using an event-based vision sensor, the dynamic vision sensor (DVS). The proposed method finds informative events occurring at edges and estimates their velocities for global motion analysis. We also describe a novel method for correcting the aperture problem in the motion estimation.
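
As a rough illustration of fitting four global motion components to per-event velocity estimates, the sketch below does a least-squares fit of a similarity motion model (x/y translation, zoom rate, rotation rate). It is not the paper's method and omits the aperture-problem correction entirely.

```python
# Minimal sketch (not the paper's method, and it ignores the aperture-problem
# correction): recovering four global motion components -- x/y translation,
# zoom, and in-plane rotation -- by least-squares fitting a similarity motion
# model to per-event velocity estimates.
import numpy as np

def fit_four_dof(points, velocities):
    """points: (N, 2) event coordinates relative to the image center;
    velocities: (N, 2) local flow at those points.
    Model: vx = tx + s*x - w*y,  vy = ty + s*y + w*x, solved for (tx, ty, s, w)."""
    p = np.asarray(points, dtype=float)
    v = np.asarray(velocities, dtype=float).reshape(-1)
    n = len(p)
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = 1.0                               # tx acts on vx
    A[1::2, 1] = 1.0                               # ty acts on vy
    A[0::2, 2], A[1::2, 2] = p[:, 0], p[:, 1]      # zoom term
    A[0::2, 3], A[1::2, 3] = -p[:, 1], p[:, 0]     # rotation term
    params, *_ = np.linalg.lstsq(A, v, rcond=None)
    return params                                  # (tx, ty, zoom rate, angular rate)

# Toy usage: pure rotation at 2 rad/s produces velocities tangential to the center.
pts = np.array([[10.0, 0.0], [0.0, 10.0], [-10.0, 0.0], [0.0, -10.0]])
vels = 2.0 * np.column_stack([-pts[:, 1], pts[:, 0]])
print(np.round(fit_four_dof(pts, vels), 3))        # -> approximately [0, 0, 0, 2]
```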


Archive | 2011

Method and Apparatus for Motion Recognition

Jun Haeng Lee; Keun Joo Park


Archive | 2013

Device and Method for Recognizing Gesture Based on Direction of Gesture

Kyoobin Lee; Hyun Surk Ryu; Jun Haeng Lee


Archive | 2014

Image Adjustment Apparatus and Image Sensor for Synchronous Image and Asynchronous Image

Keun Joo Park; Hyun Surk Ryu; Tae Chan Kim; Jun Haeng Lee

Collaboration


Dive into Jun Haeng Lee's collaborations.

Top Co-Authors

Hyun Surk Ryu

Pohang University of Science and Technology
