Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jooyeon Woo is active.

Publication


Featured research published by Jooyeon Woo.


IEEE Global Conference on Consumer Electronics | 2012

Four DoF gesture recognition with an event-based image sensor

Kyoobin Lee; Hyunsurk Ryu; Seung-Kwon Park; Jun Haeng Lee; Paul-K Park; Chang-Woo Shin; Jooyeon Woo; Tae-Chan Kim; Byung-Chang Kang

An algorithm to recognize four-degree-of-freedom gestures using an event-based image sensor is developed. The gesture motion includes three translations and one rotation. Each pixel of the event-based image sensor produces an event when the temporal intensity change exceeds a predefined value. From the time-stamps of these events, a map of pseudo optical flow is calculated, and the proposed algorithm performs gesture recognition based on this flow. It provides not only the directions but also the magnitudes of velocity. The algorithm is efficient in both memory and computation because it uses only the current time-stamp map and local operations. This advantage makes it well suited to mobile devices and on-chip implementation.
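
As a rough illustration of the approach described above, the Python sketch below keeps only the latest event time per pixel and reads a local pseudo optical flow off the spatial gradient of that time-stamp map. The resolution, the 3×3 neighbourhood, and all names are illustrative assumptions, not details taken from the paper.

import numpy as np

H, W = 128, 128                      # hypothetical sensor resolution
timestamps = np.zeros((H, W))        # latest event time seen at each pixel

def on_event(x, y, t):
    # Each incoming event only overwrites one entry of the time-stamp map.
    timestamps[y, x] = t

def local_flow(x, y):
    # Pseudo optical flow at an interior pixel: the time-stamp map rises in
    # the direction of motion, so velocity is roughly 1 / (dT/dx).
    patch = timestamps[y-1:y+2, x-1:x+2]
    dt_dx = (patch[:, 2] - patch[:, 0]).mean() / 2.0   # central difference in x
    dt_dy = (patch[2, :] - patch[0, :]).mean() / 2.0   # central difference in y
    vx = 1.0 / dt_dx if abs(dt_dx) > 1e-6 else 0.0
    vy = 1.0 / dt_dy if abs(dt_dy) > 1e-6 else 0.0
    return vx, vy                    # direction and magnitude of local motion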


International Solid-State Circuits Conference | 2017

4.1 A 640×480 dynamic vision sensor with a 9µm pixel and 300Meps address-event representation

Bongki Son; Yunjae Suh; Sungho Kim; Heejae Jung; Jun-Seok Kim; Chang-Woo Shin; Keunju Park; Kyoobin Lee; Jin Man Park; Jooyeon Woo; Yohan J. Roh; Hyunku Lee; Yibing Michelle Wang; Ilia Ovsiannikov; Hyunsurk Ryu

We report a VGA dynamic vision sensor (DVS) with a 9µm pixel, developed with both digital and analog implementations. DVS systems in the literature have pushed spatial resolution up to QVGA [1–2] and data rates up to 50 million events per second (Meps, as self-reported) [3], but they remain inadequate for high-performance applications such as gesture recognition, drones, and automotive systems. Moreover, the smallest reported pixel of 18.5µm is too large for economical mass production [3]. This paper reports a 640×480 VGA-resolution DVS with a 9µm pixel pitch that supports a data rate of 300Meps, providing sufficient event transfer despite the higher resolution. To maintain acceptable pixel performance, the pixel circuitry is carefully designed and optimized in a BSI CIS process. To acquire pixel events at high speed even at high resolution (e.g., VGA), a fully synthesized word-serial group address-event representation (G-AER) is implemented, which handles massive numbers of events in parallel by binding 8 neighboring pixels into a group. In addition, a 10b programmable bias generator dedicated to the DVS provides easy control of pixel biases and event thresholds.
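
To illustrate the word-serial group AER idea of binding 8 neighbouring pixels into one group, the Python sketch below packs a group address together with an 8-bit fired-pixel bitmap into a single event word. The bit layout is an assumption made for illustration only, not the chip's actual on-wire format.

GROUP_SIZE = 8   # pixels bound into one group, as described in the abstract

def encode_group(group_addr, fired_pixels):
    # Pack one event word: high bits hold the group address, the low 8 bits
    # hold a bitmap of which pixels in the group fired this readout cycle.
    bitmap = 0
    for offset in fired_pixels:      # offsets 0..7 within the group
        bitmap |= 1 << offset
    return (group_addr << GROUP_SIZE) | bitmap

def decode_group(word):
    # Recover the group address and the list of fired pixel offsets.
    group_addr = word >> GROUP_SIZE
    bitmap = word & 0xFF
    return group_addr, [i for i in range(GROUP_SIZE) if bitmap & (1 << i)]

# Example: pixels 0, 3 and 7 of group 42 fired in the same cycle.
word = encode_group(42, [0, 3, 7])
assert decode_group(word) == (42, [0, 3, 7])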


International Conference on Image Processing | 2016

Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique

Paul K. J. Park; Baek Hwan Cho; Jin Man Park; Kyoobin Lee; Ha Young Kim; Hyo A Kang; Hyun Goo Lee; Jooyeon Woo; Yohan J. Roh; Won Jo Lee; Chang-Woo Shin; Qiang Wang; Hyunsurk Ryu

We propose a novel method for the demosaicing of event-based images that offers a substantial performance improvement for far-distance gesture recognition based on a deep Convolutional Neural Network. Unlike conventional demosaicing, which spatially interpolates the colors of a Bayer pattern, our approach exploits the spatiotemporal correlation between pixel arrays, so that time-stamps of high-resolution pixels are efficiently generated in real time from the event data. In this paper, we describe this new method and evaluate its performance on a hand motion recognition task.
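
The general idea of generating missing time-stamps from spatiotemporally correlated neighbours can be sketched as follows in Python; the neighbourhood size, the temporal window, and the borrow rule are illustrative assumptions rather than the paper's actual interpolation scheme.

import numpy as np

def demosaic_timestamps(ts, window=3, max_age=0.05):
    # ts: 2D array of latest event times, 0 where a pixel has seen no event.
    # For each empty pixel, borrow the most recent neighbouring time-stamp,
    # but only if it is recent enough to be temporally correlated.
    H, W = ts.shape
    out = ts.copy()
    t_now = ts.max()
    r = window // 2
    for y in range(H):
        for x in range(W):
            if ts[y, x] > 0:
                continue
            patch = ts[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            neighbours = patch[patch > 0]
            if neighbours.size and t_now - neighbours.max() <= max_age:
                out[y, x] = neighbours.max()
    return out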


International Conference on Image Processing | 2015

Computationally efficient, real-time motion recognition based on bio-inspired visual and cognitive processing

Paul K. J. Park; Kyoobin Lee; Jun Haeng Lee; Byungkon Kang; Chang-Woo Shin; Jooyeon Woo; Jun-Seok Kim; Yunjae Suh; Sungho Kim; Saber Moradi; Ogan Gurel; Hyunsurk Ryu

We propose a novel method for identifying and classifying motions that offers significantly reduced computational cost compared to deep convolutional neural network systems of comparable performance. Our approach is inspired by the information-processing network architecture of biological visual systems, whereby spatial pyramid kernel features are efficiently extracted in real time from temporally differentiated image data. In this paper, we describe this new method and evaluate its performance on a hand motion gesture recognition task.
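
As a hedged sketch of the front end suggested above, the Python code below computes spatial-pyramid features over a temporally differentiated frame; the threshold, pyramid depth, and per-cell statistic are illustrative choices and may differ from the paper's exact kernel features.

import numpy as np

def temporal_difference(prev_frame, frame, thresh=10):
    # Binary motion map: pixels whose intensity changed by more than thresh.
    return np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh

def spatial_pyramid_feature(motion, levels=3):
    # Concatenate per-cell motion densities over a pyramid of grids
    # (1x1, 2x2, 4x4, ...) to get a fixed-length descriptor for a classifier.
    H, W = motion.shape
    feats = []
    for level in range(levels):
        n = 2 ** level
        for i in range(n):
            for j in range(n):
                cell = motion[i * H // n:(i + 1) * H // n,
                              j * W // n:(j + 1) * W // n]
                feats.append(cell.sum() / max(cell.size, 1))
    return np.array(feats)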


International Conference on Image Processing | 2014

Real-time motion estimation based on event-based vision sensor

Jun Haeng Lee; Kyoobin Lee; Hyunsurk Ryu; Paul K. J. Park; Chang-Woo Shin; Jooyeon Woo; Jun-Seok Kim

Fast and efficient motion estimation is essential for a number of applications, including gesture-based user interfaces (UIs) for portable devices such as smartphones. In this paper, we propose a highly efficient method that estimates the four degree-of-freedom (DOF) motion components of a moving object using an event-based vision sensor, the dynamic vision sensor (DVS). The proposed method finds informative events occurring at edges and estimates their velocities for global motion analysis. We also describe a novel method to correct for the aperture problem in the motion estimation.
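
To illustrate the global motion analysis step, the Python sketch below fits a four-DOF model (translation in x and y, rotation, expansion) to per-event local velocities by least squares; this parameterization and estimator are assumptions for illustration, not the paper's exact method.

import numpy as np

def fit_global_motion(xy, v, center):
    # xy: (N, 2) event positions, v: (N, 2) local velocities, center: (2,).
    # Model each local velocity as translation + rotation + expansion about
    # the center: v = t + w * perp(r) + s * r, with r = xy - center.
    r = xy - center
    N = xy.shape[0]
    A = np.zeros((2 * N, 4))
    A[0::2, 0] = 1.0          # tx contributes to vx
    A[1::2, 1] = 1.0          # ty contributes to vy
    A[0::2, 2] = -r[:, 1]     # rotation w: vx += -w * ry
    A[1::2, 2] = r[:, 0]      # rotation w: vy +=  w * rx
    A[0::2, 3] = r[:, 0]      # expansion s: vx += s * rx
    A[1::2, 3] = r[:, 1]      # expansion s: vy += s * ry
    b = v.reshape(-1)         # interleaved [vx0, vy0, vx1, vy1, ...]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params             # (tx, ty, w, s): the four DOF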


Archive | 2014

Method and apparatus for user interface based on gesture

Chang-Woo Shin; Hyun Surk Ryu; Jooyeon Woo


Archive | 2013

Event-based image processing apparatus and method

Kyoobin Lee; Hyun Surk Ryu; Jun Haeng Lee; Keun Joo Park; Chang-Woo Shin; Jooyeon Woo


Archive | 2014

Apparatus and method for processing user input using motion of object

Keun Joo Park; Jun Haeng Lee; Hyun Surk Ryu; Jun-Seok Kim; Chang-Woo Shin; Jooyeon Woo; Kyoobin Lee


Archive | 2015

Method and apparatus for identifying spatial gesture of user

Jooyeon Woo; Eric Hyunsurk Ryu; Chang Woo Shin


Archive | 2014

Method and apparatus for sensing spatial information based on vision sensor

Chang-Woo Shin; Keun Joo Park; Jooyeon Woo; Kyoobin Lee

Collaboration


Dive into Jooyeon Woo's collaborations.

Top Co-Authors

Hyun Surk Ryu

Pohang University of Science and Technology
