Publication


Featured research published by Giyoung Lee.


IEEE Signal Processing Letters | 2015

A Genetic Algorithm-Based Moving Object Detection for Real-time Traffic Surveillance

Giyoung Lee; Rammohan Mallipeddi; Gil-Jin Jang; Minho Lee

Recent developments in vision systems, such as distributed smart cameras, have encouraged researchers to develop advanced computer vision applications suitable for embedded platforms. In embedded surveillance systems, where memory and computing resources are limited, simple and efficient computer vision algorithms are required. In this letter, we present a moving object detection method for real-time traffic surveillance applications. The proposed method is a combination of a genetic dynamic saliency map (GDSM), which is an improved version of the dynamic saliency map (DSM), and background subtraction. The experimental results show the effectiveness of the proposed method in detecting moving objects.
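The background-subtraction component combined with the saliency map above can be illustrated with a minimal running-average model. This is a generic sketch of background subtraction, not the paper's GDSM; the threshold and learning rate are illustrative assumptions.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: bg <- (1 - alpha) * bg + alpha * frame."""
    return (1.0 - alpha) * bg + alpha * frame

def detect_moving(bg, frame, thresh=25.0):
    """Binary foreground mask: pixels that differ strongly from the background."""
    return np.abs(frame.astype(float) - bg) > thresh

# Toy example: an empty background and one bright 2x2 "vehicle".
bg = np.zeros((8, 8))
frame = bg.copy()
frame[2:4, 2:4] = 200.0
mask = detect_moving(bg, frame)    # True exactly on the 2x2 block
bg = update_background(bg, frame)  # background slowly absorbs the scene
```

In a full pipeline, the foreground mask would be gated by the saliency map so that only salient moving regions survive, which keeps the per-frame cost low enough for an embedded platform.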


Computer-Aided Engineering | 2014

Action-perception cycle learning for incremental emotion recognition in a movie clip using 3D fuzzy GIST based on visual and EEG signals

Giyoung Lee; Mingu Kwon; Swathi Kavuri; Minho Lee

Emotions are regarded as complex programs of internal actions triggered by the perception of visual stimuli. To understand human emotions in a more natural situation, we use dynamic stimuli, such as movies, for the analysis. Electroencephalography (EEG) signals evoked while watching the movie clip are also used to understand subject-specific emotions for the movies. To benefit from the integrated way in which humans perceive emotions, this paper proposes a mathematical framework that links the two highly interacting modalities in an action-perception cycle, using incremental concepts to understand complex human emotions over time. An incremental adaptive neuro-fuzzy inference system (ANFIS) is used to autonomously learn new emotional states from the information available over time. The system automatically adjusts or increases the rules for clustering the features in a fuzzy domain based on the interactions. After improving the recognition of the individual sub-systems, the emotional descriptors from both channels are concatenated and used as inputs to the incremental ANFIS in the next stage, in order to classify a movie clip into a positive or negative emotion. Utilizing the action-perception cycle, the system can autonomously develop the ability to recognize complex human emotions through interactions with the environment. The mean opinion score (MOS) is used as ground truth to evaluate the performance of the proposed emotion recognition system.


International Journal of Psychophysiology | 2015

Modulation of resource allocation by intelligent individuals in linguistic, mathematical and visuo-spatial tasks

Giyoung Lee; Amitash Ojha; Jun-Su Kang; Minho Lee

This study investigates two questions. First, how do individuals with high intelligence allocate cognitive resources while solving linguistic, mathematical and visuo-spatial tasks of varying difficulty, compared to individuals with low intelligence? Second, how can high and low intelligent individuals be distinguished by analyzing pupil dilation and eye blinks together? We measured response time and error rates, along with pupil dilation and eye-blink rate, which indicate resource allocation. We divided the whole process into three stages, namely pre-stimulus (5 s prior to stimulus onset), during-stimulus and post-stimulus (until 5 s after the response), for better assessment of preparation and resource allocation strategies. Individuals with high intelligence showed greater task-evoked pupil dilation and decreased eye blinking, with lower response times and error rates, during the during-stimulus (processing) stage of difficult linguistic and visuo-spatial tasks, but not during mathematical tasks. The finding suggests that individuals with high intelligence allocate more resources if the task demands are high, and fewer resources otherwise. Greater pre-stimulus pupil dilation and increased eye blinking of highly intelligent individuals in all tasks indicated their attentiveness and preparedness. The results of our study show that individuals with high intelligence are more attentive and more flexible in altering their resource allocation strategy according to task demand. Eye blinks, along with pupil dilation and other behavioral parameters, can be reliably used to assess the intelligence of an individual, and the analysis of pupil dilation and blink rate at the pre-stimulus stage can be crucial in distinguishing individuals of varying intelligence.


International Conference on Neural Information Processing | 2012

Identification of moving vehicle trajectory using manifold learning

Giyoung Lee; Rammohan Mallipeddi; Minho Lee

We present a method to identify the trajectories of moving vehicles from various viewpoints using manifold learning, to be implemented on an embedded platform for traffic surveillance. We use a robust kernel Isomap to estimate the intrinsic low-dimensional manifold of the input space. During training, the extracted features of the training data are projected onto a 2D manifold, and the features corresponding to each trajectory are clustered into k clusters, each represented as a Gaussian model. During identification, the features of the test data are projected onto the 2D manifold constructed during training, and the Mahalanobis distance between the test data and the Gaussian models of each trajectory is evaluated to identify the trajectory. Experimental results demonstrate the effectiveness of the proposed method in estimating the trajectories of moving vehicles, even though the shapes and sizes of the vehicles change rapidly.
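The identification step described above, per-trajectory Gaussian models in a 2D embedding and assignment by minimum Mahalanobis distance, can be sketched as follows. The synthetic 2D points and the trajectory names stand in for the kernel-Isomap embedding and trained clusters, which are not reproduced here.

```python
import numpy as np

def fit_gaussian(points):
    """Mean and (regularized) covariance of one trajectory cluster in the 2D embedding."""
    mu = points.mean(axis=0)
    cov = np.cov(points.T) + 1e-6 * np.eye(points.shape[1])
    return mu, cov

def mahalanobis(x, mu, cov):
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def identify_trajectory(x, models):
    """Assign x to the trajectory whose Gaussian model is nearest in Mahalanobis distance."""
    return min(models, key=lambda name: mahalanobis(x, *models[name]))

# Synthetic 2D embeddings standing in for two trained trajectory clusters.
rng = np.random.default_rng(0)
models = {
    "left_turn": fit_gaussian(rng.normal([0.0, 0.0], 0.1, size=(50, 2))),
    "straight":  fit_gaussian(rng.normal([5.0, 5.0], 0.1, size=(50, 2))),
}
label = identify_trajectory(np.array([4.9, 5.1]), models)
```

Using the Mahalanobis rather than the Euclidean distance lets each trajectory cluster account for its own spread along the manifold, which matters when clusters are elongated.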


International Conference on Neural Information Processing | 2011

Intelligent Video Surveillance System Using Dynamic Saliency Map and Boosted Gaussian Mixture Model

Wono Lee; Giyoung Lee; Sang-Woo Ban; Ilkyun Jung; Minho Lee

In this paper, we propose an intelligent video camera system for traffic surveillance, which can detect moving objects on the road, recognize the types of objects, and track their moving trajectories. A dynamic saliency map based object detection model is proposed to robustly detect moving objects under changing lighting conditions. A Gaussian mixture model (GMM) integrated with an AdaBoost algorithm is proposed for classifying the detected objects into vehicles, pedestrians and background. The GMM uses C1-like features of the HMAX model as input features, which are robust to image translation and scaling. A local appearance model is also proposed for object tracking. Experimental results demonstrate the excellent performance of the proposed system.


Expert Systems With Applications | 2017

Trajectory-based vehicle tracking at low frame rates

Giyoung Lee; Rammohan Mallipeddi; Minho Lee

Highlights: A new vehicle tracking method is proposed for an embedded traffic surveillance system. The proposed method demonstrates efficient tracking performance at a low frame rate. It employs greedy data association based on appearance and position similarities. To manage abrupt appearance changes, manifold learning is used. To manage abrupt motion changes, trajectory information is used to predict the next probable position.

In smart cities, an intelligent traffic surveillance system plays a crucial role in reducing traffic jams and air pollution, thus improving the quality of life. An intelligent traffic surveillance system should be able to detect and track multiple vehicles in real time using only limited resources. Conventional tracking methods usually run at a high video-sampling rate, assuming that the same vehicles in successive frames are similar and move only slightly. However, in cost-effective embedded surveillance systems (e.g., a distributed wireless network of smart cameras), video frame rates are typically low because of limited system resources. Therefore, conventional tracking methods perform poorly in embedded surveillance systems because of the discontinuity of the moving vehicles in the captured recordings. In this study, we present a fast and light algorithm, suitable for an embedded real-time visual surveillance system, that effectively detects and tracks multiple moving vehicles whose appearance and/or position changes abruptly at a low frame rate. For effective tracking at low frame rates, we propose a new matching criterion based on greedy data association using appearance and position similarities between detections and trackers. To manage abrupt appearance changes, manifold learning is used to calculate appearance similarity. To manage abrupt changes in motion, the next probable centroid area of the tracker is predicted using trajectory information. The position similarity is then calculated based on the predicted next position and progress direction of the tracker. The proposed method demonstrates efficient tracking performance during rapid feature changes and is tested on an embedded platform (an ARM with DSP-based system).
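The greedy data association step between trackers and detections can be sketched as below. The similarity values and the 50/50 fusion of appearance and position similarity are illustrative assumptions, not the paper's learned quantities.

```python
import numpy as np

def greedy_associate(sim, min_sim=0.3):
    """Greedy data association: repeatedly take the highest-similarity
    (tracker, detection) pair, then block that tracker and detection."""
    sim = sim.astype(float).copy()
    matches = []
    while sim.size and sim.max() >= min_sim:
        t, d = np.unravel_index(np.argmax(sim), sim.shape)
        matches.append((int(t), int(d)))
        sim[t, :] = -np.inf  # tracker t is now assigned
        sim[:, d] = -np.inf  # detection d is now assigned
    return matches

# Illustrative appearance/position similarities for 2 trackers x 2 detections,
# fused with an assumed equal weighting.
appearance = np.array([[0.9, 0.2],
                       [0.1, 0.8]])
position   = np.array([[0.8, 0.1],
                       [0.2, 0.9]])
pairs = greedy_associate(0.5 * appearance + 0.5 * position)
```

Greedy matching is O(n^2 log n) at worst and needs no optimization solver, which is what makes it attractive on a resource-limited embedded platform compared with globally optimal assignment.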


Systems, Man and Cybernetics | 2013

Tracking Multiple Moving Vehicles in Low Frame Rate Videos Based on Trajectory Information

Giyoung Lee; Rammohan Mallipeddi; Minho Lee

In this paper, we present a method to track moving vehicles in low frame rate videos, which are common in embedded traffic surveillance systems. In general, an embedded surveillance system has limited memory and computing resources, and thus the frame rate of the video decreases dramatically. Hence, the features of moving vehicles, such as shapes and sizes, vary dramatically, which is difficult to handle using conventional appearance- and/or feature-based methods. In the proposed model, the probability distribution of a tracked vehicle in the next frame is predicted based on a hypothesis constructed by a trajectory identification model using manifold learning. By projecting onto the low-dimensional manifold, the probabilistic similarity between the observed and predicted probability distributions of the tracked vehicles is measured. The probability distribution with maximum similarity among several candidate hypotheses in the trajectory identification models is taken to provide the spatial information used to track a moving vehicle. Experimental results show the effectiveness of the proposed method in tracking moving vehicles, even when their shapes, positions and sizes change rapidly.


International Conference on Neural Information Processing | 2015

Classification of High and Low Intelligent Individuals Using Pupil and Eye Blink

Giyoung Lee; Amitash Ojha; Minho Lee

A commonly used method to determine the intelligence of an individual is a group test, which checks accuracy and response time while individuals solve a series of problems. However, it takes a long time and is often inaccurate if the difficulty level of the problems is high or the number of problems is too small. Therefore, there is an urgent need for an objective, readily available, fast and more reliable method to determine the intelligence level of individuals. In this paper, we propose an alternative method to distinguish between high and low intelligent individuals using pupillary response and eye-blink patterns. Studies have shown that these measures indicate the cognitive state of an individual more accurately and objectively. Our experimental results show that the bio-signals of high and low intelligent individuals differ significantly and that the proposed method performs well.


International Conference on Neural Information Processing | 2015

Autonomous Depth Perception of Humanoid Robot Using Binocular Vision System Through Sensorimotor Interaction with Environment

Yongsik Jin; Mallipeddi Rammohan; Giyoung Lee; Minho Lee

In this paper, we explore how a humanoid robot with two cameras can learn to improve its depth perception by itself. We propose an approach that can autonomously improve the depth estimation of the humanoid robot. This approach tunes the parameters required for the robot's binocular vision system and improves depth perception automatically through interaction with the environment. To set the parameters of the binocular vision system, the robot utilizes sensory invariant driven action (SIDA): actions that give the robot an identical sensory stimulus even though the actions themselves are not the same. These actions are generated autonomously by the humanoid robot, without external control, in order to improve depth perception. From SIDA, the robot can gather training data for tuning the parameters of the binocular vision system. Object size invariance (OSI) is used to examine whether or not the current depth estimate is correct. If the current depth estimate is reliable, the robot again tunes the parameters of the binocular vision system based on OSI. The humanoid robot interacts with the environment to understand the relation between the size of an object and its distance from the robot. Our approach shows that action plays an important role in perception. Experimental results show that the proposed approach can successfully and automatically improve the depth estimation of the humanoid robot.


Human-Agent Interaction | 2015

Concentration Monitoring for Intelligent Tutoring System Based on Pupil and Eye-blink

Giyoung Lee; Amitash Ojha; Minho Lee

Monitoring the concentration level of a learner is important for maximizing the learning effect, giving proper feedback on tasks, and understanding the learner's performance. In this paper, we propose a personal concentration-level monitoring system for a user performing an online task on a computer, which analyzes his/her pupillary response and eye-blinking pattern. We use a low-priced web camera to detect the eye-blinking pattern and a portable eye tracker to detect the pupillary response. Experimental results show the good performance of the proposed concentration-level monitoring system and suggest that it can be used in various real applications, such as intelligent tutoring and e-learning systems.

Collaboration


Dive into Giyoung Lee's collaboration.

Top Co-Authors

Minho Lee (Kyungpook National University)
Rammohan Mallipeddi (Kyungpook National University)
Amitash Ojha (Kyungpook National University)
Jun-Su Kang (Kyungpook National University)
Gil-Jin Jang (Kyungpook National University)
Mallipeddi Rammohan (Kyungpook National University)
Swathi Kavuri Sri (Kyungpook National University)
Swathi Kavuri (Kyungpook National University)