Publication


Featured research published by Hyunsurk Ryu.


IEEE Transactions on Neural Networks | 2014

Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas

Jun Haeng Lee; Tobi Delbruck; Michael Pfeiffer; Paul K. J. Park; Chang-Woo Shin; Hyunsurk Ryu; Byung Chang Kang

We propose a real-time hand gesture interface that combines a stereo pair of biologically inspired event-based dynamic vision sensor (DVS) silicon retinas with neuromorphic event-driven postprocessing. Compared with conventional vision or 3-D sensors, the use of DVSs, which output asynchronous and sparse events in response to motion, eliminates the need to extract movements from sequences of video frames and allows significantly faster and more energy-efficient processing. In addition, the rate of input events depends on the observed movements and thus provides an additional cue for solving the gesture spotting problem, i.e., finding the onsets and offsets of gestures. We propose a postprocessing framework based on spiking neural networks that processes the events received from the DVSs in real time and provides an architecture for future implementation in neuromorphic hardware devices. The motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the DVSs using leaky integrate-and-fire (LIF) neurons. Adaptive thresholds of the LIF neurons achieve the segmentation of trajectories, which are then translated into discrete and finite feature vectors. The feature vectors are classified with hidden Markov models, using a separate Gaussian mixture model for spotting irrelevant transition gestures. The disparity information from stereo vision is used to adapt the LIF neuron parameters so that recognition is invariant to the user's distance from the sensor, and it also helps to filter out movements in the background of the user. Furthermore, exploiting the high dynamic range of DVSs allows gesture recognition over a 60 dB range of scene illuminance. The system achieves recognition rates well over 90% under a variety of conditions, with static and dynamic backgrounds and naïve users.
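
For readers unfamiliar with event-driven processing, the sketch below shows how a single leaky integrate-and-fire neuron can temporally correlate asynchronous DVS events: the membrane potential decays between events and the neuron fires only when temporally clustered events push it past threshold. This is a minimal illustration of the LIF mechanism the abstract describes, not the authors' implementation; all parameter values are assumptions.

import math

class LIFNeuron:
    """Leaky integrate-and-fire neuron driven by asynchronous DVS events.

    Simplified sketch: the membrane potential decays exponentially
    between events, is incremented on each incoming event, and the
    neuron fires when the potential crosses the threshold. All
    parameter values here are illustrative.
    """

    def __init__(self, tau=0.02, threshold=1.0, weight=0.3):
        self.tau = tau              # membrane time constant (seconds)
        self.threshold = threshold  # firing threshold
        self.weight = weight        # contribution of one input event
        self.potential = 0.0
        self.last_time = None

    def on_event(self, t):
        """Process one event with timestamp t (seconds); return True on spike."""
        if self.last_time is not None:
            # Leaky decay since the previous event.
            self.potential *= math.exp(-(t - self.last_time) / self.tau)
        self.last_time = t
        self.potential += self.weight
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

# Events arriving in a tight temporal cluster drive the neuron past
# threshold, while isolated (noise) events decay away without a spike.
neuron = LIFNeuron()
for t in [0.000, 0.002, 0.004, 0.006]:
    print(t, neuron.on_event(t))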


IEEE Communications Magazine | 2011

Synchronization of audio/video bridging networks using IEEE 802.1AS

Geoffrey M. Garner; Hyunsurk Ryu

The Audio/Video Bridging (AVB) project in the IEEE 802.1 working group is focused on the transport of time-sensitive traffic over IEEE 802 bridged networks. Current bridged networks lack mechanisms that can guarantee the timing requirements of such traffic under general traffic conditions. IEEE 802.1AS is the AVB standard that will specify the requirements for transporting precise timing and synchronization in AVB networks. It is based on IEEE 1588-2008, includes a PTP profile applicable to full-duplex IEEE 802.3 transport, and adds specifications for timing transport over IEEE 802.11, IEEE 802.3 EPON, and CSN media. This article provides a tutorial on IEEE 802.1AS that updates earlier descriptions and presents new simulation results for timing performance.
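
As background, 802.1AS builds on the IEEE 1588 two-way time-transfer arithmetic: link delay is measured from a timestamped request/response exchange, and the local clock's offset follows from a timestamped Sync message. A minimal sketch, assuming a symmetric link delay (timestamp values and function names are illustrative):

def mean_link_delay(t1, t2, t3, t4):
    """Two-way time transfer, as in IEEE 1588 / 802.1AS peer delay.

    t1: requester sends Pdelay_Req        (requester clock)
    t2: responder receives Pdelay_Req     (responder clock)
    t3: responder sends Pdelay_Resp       (responder clock)
    t4: requester receives Pdelay_Resp    (requester clock)

    Assumes the link delay is the same in both directions.
    """
    return ((t4 - t1) - (t3 - t2)) / 2.0

def offset_from_master(t1, t2, link_delay):
    """Offset of the local clock from the master, given a Sync message
    sent at master time t1 and received at local time t2."""
    return (t2 - t1) - link_delay

# Example with nanosecond timestamps (illustrative numbers):
d = mean_link_delay(t1=1000, t2=2600, t3=2700, t4=1900)
print("link delay:", d)                               # 400 ns
print("offset:", offset_from_master(1000, 1500, d))   # 100 ns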


international symposium on circuits and systems | 2012

Live demonstration: Gesture-based remote control using stereo pair of dynamic vision sensors

Junhaeng Lee; Tobi Delbruck; Paul K. J. Park; Michael Pfeiffer; Chang-Woo Shin; Hyunsurk Ryu; Byung-Chang Kang

This demonstration shows a natural gesture interface for console entertainment devices using as input a stereo pair of dynamic vision sensors. The event-based processing of the sparse sensor output allows fluid interaction at a laptop processor load of less than 3%.


IEEE Transactions on Industrial Electronics | 2015

Proximity Sensing Based on a Dynamic Vision Sensor for Mobile Devices

Jae-Yeon Won; Hyunsurk Ryu; Tobi Delbruck; Jun Haeng Lee; Jiang Hu

A dynamic vision sensor (DVS) detects temporal contrast of brightness and has a much faster response time than conventional frame-based sensors, which measure static brightness once per frame. This fast response enables rapid motion recognition, an attractive capability for consumers, and the sensor's low power consumption, due to its event-based operation, is a key feature for mobile applications. Recent touch-screen smartphones include a proximity sensor to prevent malfunction caused by undesired skin contact during calls; when an object is close to the proximity sensor, the main processor disables the touch screen and turns off the display to minimize power consumption. Given the importance of power consumption and reliable operation, proximity sensing is an essential part of a touch-screen smartphone. In this paper, a proximity-sensing design utilizing a DVS is proposed. It estimates the distance from the DVS to an object by analyzing the spatial information of the reflection of an additional light source, and it applies pattern recognition based on time-domain analysis of the reflection, while the light source is turned on, to avoid false proximity detection caused by noise such as other light sources and motion. The contributions of the proposed design are threefold. First, it calculates an accurate distance in real time using only the spatial information of the reflection. Second, it can reject environmental noise through pattern matching based on time-domain analysis, whereas conventional optical proximity sensors, which are widely used in smartphones, are very sensitive to environmental noise because they integrate the total amount of brightness over fixed periods. Third, it can replace conventional proximity sensors while adding the benefits of a DVS.
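
The distance-from-reflection idea can be illustrated with simple triangulation: a light source mounted at a known baseline from the sensor produces a reflection spot whose image position shifts with object distance. The sketch below assumes a pinhole-camera geometry; the baseline, focal length, and function names are illustrative, not the paper's calibration.

def distance_from_reflection(pixel_offset, baseline_mm, focal_px):
    """Estimate object distance by triangulating the image position of a
    reflected light spot (a common structured-light scheme; numbers and
    geometry here are illustrative).

    pixel_offset: horizontal displacement of the reflection spot in pixels
    baseline_mm:  separation between the light source and the sensor
    focal_px:     focal length expressed in pixels
    """
    if pixel_offset <= 0:
        return float("inf")  # spot at infinity: object out of range
    return focal_px * baseline_mm / pixel_offset

# A spot displaced 40 px with a 10 mm baseline and a 400 px focal length
# places the object roughly 100 mm away.
print(distance_from_reflection(40, 10.0, 400.0))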


international conference on image processing | 2012

Touchless hand gesture UI with instantaneous responses

Jun Haeng Lee; Paul K. J. Park; Chang-Woo Shin; Hyunsurk Ryu; Byung Chang Kang; Tobi Delbruck

In this paper we present a simple technique for a real-time touchless hand gesture user interface (UI) for mobile devices based on a biologically inspired vision sensor, the dynamic vision sensor (DVS). The DVS detects a moving object in a fast and cost-effective way by asynchronously outputting events along the edges of the object. The output events are spatiotemporally correlated using novel event-driven processing algorithms based on leaky integrate-and-fire neurons to track a fingertip or to infer the direction of hand swipe motions. The experimental results show that the proposed technique achieves fingertip tracking at millisecond intervals, suitable for graphical UIs, and accurate hand-swipe detection with negligible latency.
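
As a rough illustration of swipe inference from an event stream (not the paper's LIF pipeline, which the abstract names but does not detail), one can compare the event centroids early and late in the motion:

def swipe_direction(events, window=0.05):
    """Infer a hand-swipe direction from DVS events, sketched as a
    centroid-displacement test. Each event is a (t, x, y) tuple;
    the window length is an illustrative parameter."""
    if not events:
        return None
    t_end = events[-1][0]
    early = [(x, y) for t, x, y in events if t < t_end - window]
    late = [(x, y) for t, x, y in events if t >= t_end - window]
    if not early or not late:
        return None
    cx0 = sum(x for x, _ in early) / len(early)
    cy0 = sum(y for _, y in early) / len(early)
    cx1 = sum(x for x, _ in late) / len(late)
    cy1 = sum(y for _, y in late) / len(late)
    dx, dy = cx1 - cx0, cy1 - cy0
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Events drifting rightward over 100 ms:
evts = [(i * 0.001, 10 + i * 0.5, 50.0) for i in range(100)]
print(swipe_direction(evts))  # right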


consumer communications and networking conference | 2006

End-to-end stream establishment in consumer home networks

Feifei Feng; Hyunsurk Ryu; K. den Hollander

This paper proposes a scheme for end-to-end stream establishment across a layer-2 Residential Ethernet (ResE) network and the upper-layer UPnP stack in consumer home networks. We first introduce a new proposal for a ResE subscription protocol, which is used to set up layer-2 connections with guaranteed QoS between ResE stations. We then propose an extension to the UPnP-AV architecture that enables seamless integration of UPnP-AV applications with ResE layer-2 technology. Our subscription protocol proposal is found to be suitable for this integration. The signaling used to establish end-to-end AV streams in UPnP/ResE networks is described, and an example usage scenario is demonstrated.
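
The subscription protocol itself is not detailed in the abstract, but the admission-control pattern it implies can be sketched as follows: every bridge on the talker-to-listener path must accept a bandwidth reservation before the stream is established. The message fields and names below are assumptions for illustration, not the ResE message format.

from dataclasses import dataclass

@dataclass
class SubscriptionRequest:
    # Field names are illustrative, not the actual ResE message format.
    stream_id: str
    bandwidth_kbps: int
    max_latency_ms: float

def establish_stream(request, path, admit):
    """Admission-controlled setup: every bridge on the talker-to-listener
    path must accept the reservation, or the stream is not established."""
    for bridge in path:
        if not admit(bridge, request):
            # A real protocol would tear down reservations made so far.
            return False
    return True

# Each bridge admits the stream only if it has enough residual bandwidth.
capacity = {"br0": 100000, "br1": 50000, "br2": 100000}
req = SubscriptionRequest("cam-1", bandwidth_kbps=40000, max_latency_ms=2.0)
ok = establish_stream(req, ["br0", "br1", "br2"],
                      lambda b, r: capacity[b] >= r.bandwidth_kbps)
print(ok)  # True: 40 Mb/s fits on every hop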


ieee global conference on consumer electronics | 2012

Four DoF gesture recognition with an event-based image sensor

Kyoobin Lee; Hyunsurk Ryu; Seung-Kwon Park; Jun Haeng Lee; Paul-K Park; Chang-Woo Shin; Jooyeon Woo; Tae-Chan Kim; Byung-Chang Kang

An algorithm to recognize four-degree-of-freedom gestures using an event-based image sensor is developed. The gesture motion includes three translations and one rotation. Each pixel of the event-based image sensor produces an event when the temporal intensity change exceeds a pre-defined value. From the time stamps of the events, a map of pseudo optical flow is calculated, and the proposed algorithm recognizes gestures from this flow, providing not only the direction but also the magnitude of velocity. The algorithm is memory- and computationally efficient because it uses only the current time-stamp map and local computation, which facilitates applications on mobile devices and on-chip implementations.
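
The time-stamp-map flow idea can be sketched as a local gradient computation on the "time surface": when an edge sweeps across the sensor, neighboring pixels fire at slightly different times, and the reciprocal of the spatial gradient of the time-stamp map gives velocity. The kernel below is illustrative; the paper's exact computation is not specified in the abstract.

import numpy as np

def local_flow(timestamps, x, y):
    """Pseudo optical flow from a per-pixel time-stamp map.

    timestamps: 2D array, timestamps[y, x] = time of the latest event (s).
    Returns (vx, vy) in pixels per second at (x, y).
    """
    # Central differences of the time surface: seconds per pixel.
    dt_dx = (timestamps[y, x + 1] - timestamps[y, x - 1]) / 2.0
    dt_dy = (timestamps[y + 1, x] - timestamps[y - 1, x]) / 2.0
    # Velocity is the reciprocal of the temporal gradient along each axis.
    vx = 1.0 / dt_dx if dt_dx != 0 else 0.0
    vy = 1.0 / dt_dy if dt_dy != 0 else 0.0
    return vx, vy

# An edge sweeping rightward at 100 px/s stamps each column 10 ms apart:
ts = np.tile(np.arange(5) * 0.01, (5, 1))
print(local_flow(ts, 2, 2))  # (~100.0, 0.0) px/s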


international symposium on consumer electronics | 2014

Dynamic vision sensor for low power applications

Raphael Berner; Patrick Lichtsteiner; Tobi Delbruck; Jun-Youn Kim; Kyoobin Lee; Junhaeng Lee; Kyung-Bae Park; Tae-Sang Kim; Hyunsurk Ryu

This paper presents the design of a dynamic vision sensor for mobile applications. The sensor features a standby mode with less than 250 µW of power dissipation and switches between standby and normal operation by itself, depending on input activity. Power consumption in normal operation is typically 500 µW and activity dependent. The design is implemented in a 90 nm backside-illumination process with a pixel-array size of 1.44 mm × 1.44 mm.
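
The activity-dependent mode switching can be pictured as a simple hysteresis controller on the event rate. The thresholds below are assumptions for illustration, since the abstract only states that the sensor switches modes by itself based on input activity.

def update_mode(mode, event_rate, wake_rate=1000, sleep_rate=100):
    """Sketch of activity-dependent mode switching with hysteresis.

    event_rate: measured events per second (thresholds are illustrative).
    """
    if mode == "standby" and event_rate > wake_rate:
        return "normal"   # enough activity: wake up (~500 uW)
    if mode == "normal" and event_rate < sleep_rate:
        return "standby"  # scene quiet: drop below 250 uW
    return mode

mode = "standby"
for rate in [50, 1500, 800, 60]:
    mode = update_mode(mode, rate)
    print(rate, mode)  # wakes at 1500 ev/s, returns to standby at 60 ev/s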


high performance computing and communications | 2006

Effect of flow aggregation on the maximum end-to-end delay

Jinoo Joung; Byeongseog Choe; Hongkyu Jeong; Hyunsurk Ryu

We investigate the effect of flow aggregation on the end-to-end delay in large-scale networks. We show that networks with Differentiated Services (DiffServ) architectures, where packets are treated according to the class they belong to, can guarantee the end-to-end delay for packets of the highest-priority class, which are queued and scheduled with strict priority but without preemption. We then analyze networks with arbitrary flow aggregation and deaggregation, and again derive an upper bound on the end-to-end delay. Throughout the paper we use Latency-Rate (LR) server theory.
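
For context, the classical end-to-end delay bound from Latency-Rate server theory (Stiliadis and Varma), on which this style of analysis builds, states that a flow constrained by burst size σ and rate ρ, traversing N LR servers each with latency Θ_i and service rate at least ρ, experiences a delay of at most

\[
  D_{\max} \le \frac{\sigma}{\rho} + \sum_{i=1}^{N} \Theta_i
\]

so flow aggregation affects the bound through the per-hop latencies Θ_i and the burst parameters of the aggregate.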


international solid-state circuits conference | 2017

4.1 A 640×480 dynamic vision sensor with a 9µm pixel and 300Meps address-event representation

Bongki Son; Yunjae Suh; Sungho Kim; Heejae Jung; Jun-Seok Kim; Chang-Woo Shin; Keunju Park; Kyoobin Lee; Jin Man Park; Jooyeon Woo; Yohan J. Roh; Hyunku Lee; Yibing Michelle Wang; Ilia Ovsiannikov; Hyunsurk Ryu

