Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Soo Wan Kim is active.

Publication


Featured research published by Soo Wan Kim.


Machine Vision and Applications | 2013

Detection of moving objects with a moving camera using non-panoramic background model

Soo Wan Kim; Kimin Yun; Kwang Moo Yi; Sun Jung Kim; Jin Young Choi

This paper presents a fast and reliable method for moving object detection with moving cameras (including pan–tilt–zoom and hand-held cameras). Instead of building a large panoramic background model as in conventional approaches, we construct a small background model, whose size is the same as the input frame, to decrease computation time and memory usage without loss of detection performance. The small background model is built with the proposed single spatio-temporal distributed Gaussian model, which resolves false detections arising from registration errors and the background adaptation problem under moving backgrounds. In addition to the proposed background model based on spatial and temporal information, several pre- and post-processing methods are adopted and organized systematically to enhance detection performance. We evaluate the proposed method on several video sequences under difficult conditions, such as illumination change, large zoom variation, and fast camera movement, and show that our algorithm delivers superior detection results with fast computation times.
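As a loose illustration of the frame-sized model idea (our own 1-D simplification, not the authors' implementation): when the camera shifts, the background model is resampled in place rather than grown into a panorama, and cells revealed by the shift restart learning from scratch.

```python
import numpy as np

def compensate_background(means, ages, shift):
    """Sketch (1-D, hypothetical): keep a background model the size of one
    frame and, when the camera shifts by `shift` cells, resample it instead
    of growing a panorama. Cells revealed by the shift restart with age 0."""
    n = len(means)
    new_means = np.zeros(n)
    new_ages = np.zeros(n)
    for i in range(n):
        j = i + shift                  # where cell i's content was before the shift
        if 0 <= j < n:
            new_means[i] = means[j]
            new_ages[i] = ages[j]
    return new_means, new_ages
```

However large the camera motion, the model never exceeds one frame of storage; only the freshly exposed cells pay a re-learning cost.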


Computer Vision and Pattern Recognition | 2013

Detection of Moving Objects with Non-stationary Cameras in 5.8ms: Bringing Motion Detection to Your Mobile Device

Kwang Moo Yi; Kimin Yun; Soo Wan Kim; Hyung Jin Chang; Hawook Jeong; Jin Young Choi

Detecting moving objects on mobile cameras in real time is a challenging problem due to computational limits and the motion of the camera. In this paper, we propose a method for moving object detection on non-stationary cameras that runs within 5.8 milliseconds (ms) on a PC, and in real time on mobile devices. To achieve real-time capability with satisfactory performance, the proposed method models the background through a dual-mode single Gaussian model (SGM) with age, and compensates the motion of the camera by mixing neighboring models. Modeling through the dual-mode SGM prevents the background model from being contaminated by foreground pixels, while still allowing the model to adapt to changes of the background. Mixing neighboring models reduces the errors arising from motion compensation, and their influence is further reduced by keeping the age of the model. Also, to decrease the computational load, the proposed method applies one dual-mode SGM to multiple pixels without performance degradation. Experimental results show the computational lightness and real-time capability of our method on a smartphone, with robust detection performance.
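The dual-mode SGM with age can be sketched for a single pixel as follows. This is a hypothetical simplification (scalar intensities, fixed matching threshold, no motion compensation), not the paper's implementation: the "apparent" mode holds the confirmed background, while the "candidate" mode buffers new intensities so foreground cannot contaminate the background, and is promoted only once it is better supported.

```python
class DualModeSGM:
    """Minimal sketch of a dual-mode single Gaussian model with age
    (hypothetical per-pixel simplification of the paper's background model)."""

    def __init__(self, init, var=20.0, match_thresh=3.0):
        self.mean = [float(init), 0.0]   # apparent mean, candidate mean
        self.var = [var, var]
        self.age = [1.0, 0.0]
        self.th = match_thresh

    def _matches(self, m, x):
        return (x - self.mean[m]) ** 2 <= self.th ** 2 * self.var[m]

    def _update(self, m, x):
        self.age[m] += 1.0
        rho = 1.0 / self.age[m]          # learning rate decays with age
        d = x - self.mean[m]
        self.mean[m] += rho * d
        self.var[m] += rho * (d * d - self.var[m])

    def update(self, x):
        """Return True if x looks like foreground under the apparent model."""
        if self._matches(0, x):
            self._update(0, x)
            return False
        if self.age[1] > 0 and self._matches(1, x):
            self._update(1, x)
            # promote the candidate once it is better supported than the apparent
            if self.age[1] > self.age[0]:
                self.mean[0], self.mean[1] = self.mean[1], self.mean[0]
                self.var[0], self.var[1] = self.var[1], self.var[0]
                self.age[0], self.age[1] = self.age[1], self.age[0]
        else:
            self.mean[1], self.var[1], self.age[1] = x, 20.0, 1.0
        return True
```

A brief occluder is reported as foreground without disturbing the apparent background, while a sustained new intensity eventually takes over through the candidate mode.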


International Conference on Pattern Recognition | 2014

Motion Interaction Field for Accident Detection in Traffic Surveillance Video

Kimin Yun; Hawook Jeong; Kwang Moo Yi; Soo Wan Kim; Jin Young Choi

This paper presents a novel method for modeling interactions among multiple moving objects to detect traffic accidents. The proposed interaction model is motivated by the motion of water waves responding to objects moving on a water surface. The shape of the water surface is modeled in field form using Gaussian kernels, which is referred to as the Motion Interaction Field (MIF). By utilizing the symmetric properties of the MIF, we detect and localize traffic accidents without solving complex vehicle tracking problems. Experimental results show that our method outperforms existing methods in detecting and localizing traffic accidents.
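The water-surface analogy can be sketched as a superposition of speed-weighted Gaussian kernels. The grid size, kernel width, and weighting below are our own illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def motion_interaction_field(positions, velocities, grid, sigma=2.0):
    """Loose sketch of an MIF-style field (hypothetical simplification):
    each moving object contributes a Gaussian kernel, scaled by its speed
    and centered at its position -- like ripples raised on a water surface."""
    field = np.zeros(grid[0].shape)
    for p, v in zip(positions, velocities):
        speed = np.linalg.norm(v)
        d2 = (grid[0] - p[0]) ** 2 + (grid[1] - p[1]) ** 2
        field += speed * np.exp(-d2 / (2 * sigma ** 2))
    return field

# Two vehicles approaching head-on raise a strong, concentrated field
# between them, whereas a lone vehicle leaves the same zone nearly flat.
ys, xs = np.mgrid[0:20, 0:20]
grid = (xs.astype(float), ys.astype(float))
two = motion_interaction_field([(8.0, 10.0), (12.0, 10.0)],
                               [(1.0, 0.0), (-1.0, 0.0)], grid)
one = motion_interaction_field([(8.0, 10.0)], [(1.0, 0.0)], grid)
```

Sudden asymmetry or concentration of such a field between objects is the kind of cue the symmetry analysis exploits.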


Image and Vision Computing New Zealand | 2012

Visual tracking with dual modeling

Kwang Moo Yi; Hawook Jeong; Soo Wan Kim; Jin Young Choi

In this paper, a new visual tracking method with dual modeling is proposed. The proposed method aims to solve the problems of occlusions, background clutter, and drifting simultaneously with the proposed dual model. The dual model consists of single Gaussian models for the foreground and the background. Both models are combined to form a likelihood, which is then efficiently maximized for visual tracking through random sampling and mean-shift. Through dual modeling, the proposed method becomes robust to occlusions and background clutter by excluding non-target information during maximization of the likelihood. Also, non-target information is unlearned from the foreground model to prevent drifting. The performance of the proposed method is extensively tested against six representative trackers on nine test sequences, including two long-term sequences. The experimental results show that our method outperforms all other compared trackers.
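The combination of foreground and background Gaussians into a likelihood can be sketched as a log-ratio score. The scalar-intensity models below are a hypothetical simplification of the paper's dual model:

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def dual_likelihood(pixels, fg, bg):
    """Hypothetical sketch: score a candidate region by how much better the
    foreground Gaussian (mean, var) explains its pixels than the background
    Gaussian does, so background-like pixels (clutter, occluders) contribute
    little or negatively rather than pulling the tracker off target."""
    p_fg = gaussian_pdf(pixels, *fg)
    p_bg = gaussian_pdf(pixels, *bg)
    return float(np.sum(np.log(p_fg + 1e-12) - np.log(p_bg + 1e-12)))
```

A region of target-like pixels scores positively, a region of clutter negatively, which is what makes maximizing this likelihood robust to background confusion.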


Computer Vision and Image Understanding | 2014

Spatio-temporal weighting in local patches for direct estimation of camera motion in video stabilization

Soo Wan Kim; Shimin Yin; Kimin Yun; Jin Young Choi

This paper presents a robust video stabilization method based on a novel formulation for camera motion estimation. We introduce spatio-temporal weighting on local patches in the optimization formulation, which enables one-step direct estimation without the outlier elimination adopted in most existing methods. The spatio-temporal weighting represents the reliability of a local region for estimating camera motion. The weighting emphasizes regions whose motion is similar to the camera motion, such as the background, and reduces the influence of unimportant regions, such as moving objects. We develop a formula to determine the spatio-temporal weights considering the age, edges, saliency, and distribution information of local patches. The proposed scheme reduces the computational load by eliminating the integration of local motions and decreases the accumulation of fitting errors present in existing two-step estimation methods. Through numerical experiments on several unstable videos, we verify that the proposed method gives better performance in camera motion estimation and in stabilizing jittery video sequences.
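The one-step weighted estimation can be illustrated with a translation-only toy version. The weights here are supplied directly rather than derived from age, edge, saliency, and distribution cues as in the paper:

```python
import numpy as np

def weighted_camera_motion(patch_motions, weights):
    """Sketch of the one-step idea: estimate the global translation as a
    weighted least-squares fit over local patch motions, so low-weight
    patches (e.g. moving objects) barely influence the estimate and no
    explicit outlier-rejection pass is needed."""
    w = np.asarray(weights, dtype=float)
    m = np.asarray(patch_motions, dtype=float)   # shape (N, 2): (dx, dy)
    return (w[:, None] * m).sum(axis=0) / w.sum()
```

With nine background patches moving by (0.5, 0) and one down-weighted moving-object patch at (10, 5), the estimate stays close to the true camera shift instead of being dragged toward the outlier.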


Digital Image Computing: Techniques and Applications | 2011

PIL-EYE: Integrated System for Sustainable Development of Intelligent Visual Surveillance Algorithms

Hyung Jin Chang; Kwang Moo Yi; Shimin Yin; Soo Wan Kim; Young Min Baek; Ho Seok Ahn; Jin Young Choi

In this paper, we introduce a new platform for integrated development of visual surveillance algorithms, named the PIL-EYE system. In our system, functional modules and algorithms can be added or removed without affecting other modules. Functional flow can be designed by simply scheduling the order of modules. Algorithm optimization becomes easy by checking computational load in real time. Furthermore, commercialization can be achieved easily by packaging modules. The effectiveness of the proposed modular architecture is shown through several experiments with the implemented system.


Image and Vision Computing | 2015

Visual tracking of non-rigid objects with partial occlusion through elastic structure of local patches and hierarchical diffusion

Kwang Moo Yi; Hawook Jeong; Soo Wan Kim; Shimin Yin; Songhwai Oh; Jin Young Choi

In this paper, a tracking method based on sequential Bayesian inference is proposed. The proposed method focuses on solving both the problem of tracking under partial occlusions and the problem of non-rigid object tracking in real time on a desktop personal computer (PC). The proposed method is mainly composed of two parts: (1) modeling the target object using an elastic structure of local patches for robust performance; and (2) an efficient hierarchical diffusion method to perform the tracking procedure in real time. The elastic structure of local patches allows the proposed method to handle partial occlusions and non-rigid deformations through the relationships among neighboring patches. The proposed hierarchical diffusion method generates samples from the region where the posterior is concentrated, reducing computation time. The method is extensively tested on a number of challenging image sequences with occlusion and non-rigid deformation. The experimental results show the real-time capability and the robustness of the proposed method under various situations. Highlights: we propose a tracking method to handle both partial occlusions and non-rigid deformations in real time; the target object is modeled through an elastic structure of local patches for robust performance; a hierarchical diffusion method is proposed to obtain an acceptable solution in real time; extensive evaluation shows that the proposed method outperforms the state of the art.
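The diffusion idea can be caricatured as coarse-to-fine sampling around the current best state. The generic hill-climbing sketch below is our own stand-in with an arbitrary score surface, not the paper's sampler; it only illustrates how a shrinking sampling radius concentrates samples where the posterior mass is:

```python
import numpy as np

def hierarchical_diffusion(score, init, radius=8.0, samples=32, levels=4, rng=None):
    """Sketch of coarse-to-fine sampling search: draw samples around the
    current best state, keep any improvement, and halve the search radius
    at each level so sampling concentrates near the posterior mode."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best = np.asarray(init, dtype=float)
    best_score = score(best)
    for level in range(levels):
        r = radius / (2 ** level)
        for c in best + rng.normal(0.0, r, size=(samples, best.size)):
            s = score(c)
            if s > best_score:
                best, best_score = c, s
    return best

# Usage: home in on the peak of a synthetic score surface centered at (30, 40),
# starting from a nearby initial guess.
peak = np.array([30.0, 40.0])
found = hierarchical_diffusion(lambda x: -np.sum((x - peak) ** 2),
                               np.array([25.0, 35.0]))
```

Because the best state is only ever replaced by an improvement, the returned estimate can never be farther from the peak than the initial guess.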


Pattern Recognition Letters | 2014

View invariant action recognition using generalized 4D features

Sun Jung Kim; Soo Wan Kim; Tushar Sandhan; Jin Young Choi

Highlights: we recognize actions independently of viewpoint using generalized 4D features; we develop new 4D-STIPs with 3D space volumes by extending the widely used 3D-STIPs; arbitrary viewpoints can be generated by projecting the 3D space volumes and 4D-STIPs; we propose non-motion features to encode the stationary part of an action; the proposed method outperforms others when the test view is not in the training set.

In this paper, we propose a method to recognize human actions independently of viewpoint by developing 4D space-time features which generalize the information from a finite number of views in the training phase so as to show satisfactory performance in arbitrary testing views. These 4D space-time interest points (4D-STIPs, in (x, y, z, t)) are extracted using 3D space volumes reconstructed from images of a finite number of different views. Since the proposed features are constructed using volumetric information, the features for an arbitrary 2D viewpoint in testing can be generated by projecting the 3D space volumes and 4D-STIPs onto the corresponding test image planes. This enables action recognition from any camera viewpoint even after training with images from only a finite number of views. We also propose a variant of 3D space-time interest points which takes into account the simultaneous gradient variation in all three dimensions, to focus on the motion of important spatial corner points. The 3D space volumes and 4D-STIPs can be projected to arbitrary viewpoints when training each action, giving the classifier its generalization capability. With these projected features, we construct motion history images and non-motion history images, which encode the moving and non-moving parts of an action respectively. After reducing the feature dimension, the final features are learned by the support vector data description method. In experiments, we train the models using the IXMAS dataset constructed from five views and test them with a new SNU dataset made for evaluating generalization performance on arbitrary-view videos.


International Conference on Pattern Recognition | 2010

Recovery Video Stabilization Using MRF-MAP Optimization

Soo Wan Kim; Kwang Moo Yi; Songhwai Oh; Jin Young Choi

In this paper, we propose a novel approach for video stabilization using Markov random field (MRF) modeling and maximum a posteriori (MAP) optimization. We build an MRF model describing a sequence of unstable images and find joint pixel matchings over the whole image sequence with MAP optimization via Gibbs sampling. The resulting displacements of matched pixels in consecutive frames indicate the camera motion between frames and can be used to remove that motion and stabilize the image sequence. The proposed method shows robust performance even when a scene contains moving foreground objects, and yields more accurate stabilization results. The performance of our algorithm is evaluated on outdoor scenes.
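The MRF-MAP machinery can be illustrated on a toy 1-D chain, a generic stand-in rather than the paper's pixel-matching model: each node's label is resampled conditioned on its neighbors, trading a data cost against a smoothness penalty, and the lowest-energy state visited is kept as the MAP estimate.

```python
import numpy as np

def gibbs_map(data_cost, smooth_weight=1.0, iters=50, rng=None):
    """Toy sketch of MRF-MAP via Gibbs sampling on a 1-D chain.
    data_cost has shape (n_nodes, n_labels); the energy adds a per-node
    data cost and a smoothness penalty |l_i - l_{i+1}| between neighbors."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, k = data_cost.shape
    labels = rng.integers(0, k, size=n)

    def energy(lab):
        return (data_cost[np.arange(n), lab].sum()
                + smooth_weight * np.abs(np.diff(lab)).sum())

    best, best_e = labels.copy(), energy(labels)
    for _ in range(iters):
        for i in range(n):
            # conditional energy of each label at node i given its neighbors
            local = data_cost[i].copy()
            if i > 0:
                local += smooth_weight * np.abs(np.arange(k) - labels[i - 1])
            if i < n - 1:
                local += smooth_weight * np.abs(np.arange(k) - labels[i + 1])
            p = np.exp(-(local - local.min()))
            labels[i] = rng.choice(k, p=p / p.sum())
        e = energy(labels)
        if e < best_e:
            best, best_e = labels.copy(), e
    return best

# Usage: six nodes, four labels; every node prefers label 2 except node 3,
# whose noisy preference for label 0 the smoothness prior should override.
data = np.full((6, 4), 3.0)
data[:, 2] = 0.0
data[3] = [0.0, 3.0, 3.0, 3.0]
labels = gibbs_map(data, smooth_weight=2.0)
```

The override of node 3 mirrors how the paper's joint matching stays robust to moving foreground objects: the prior pulls locally attractive but globally inconsistent matches back toward the dominant camera motion.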


International Symposium on Visual Computing | 2014

Learning with Adaptive Rate for Online Detection of Unusual Appearance

Kimin Yun; Jiyun Kim; Soo Wan Kim; Hawook Jeong; Jin Young Choi

Detection of unusual or abnormal events is a popular research topic in the area of event analysis. Unlike conventional methods that focus on motion, we tackle a new problem: detecting unusual appearance in a surveillance video. With appearance features, however, static appearance is so dominant that a biased learning problem can occur. To avoid this problem, we propose a new learning scheme with an adaptive learning rate. Moreover, to reduce noisy detections, we also suggest a spatio-temporal decision scheme. Experimental results show the effectiveness of the proposed method in detecting unusual appearances, both qualitatively and quantitatively.
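The adaptive-rate idea can be sketched with an online appearance histogram whose learning rate decays with time but is floored, so a dominant static appearance cannot freeze the model while rare appearances still register as low-probability. The floor value and histogram form are our own illustrative choices, not the paper's scheme:

```python
import numpy as np

def online_histogram(observations, bins=8, rate_cap=20.0):
    """Sketch (our own simplification): learn an appearance histogram
    online with a learning rate that decays roughly like 1/t but is
    floored at 1/rate_cap, keeping the model adaptive under a dominant
    static appearance."""
    hist = np.full(bins, 1.0 / bins)
    for t, obs in enumerate(observations, start=1):
        rate = 1.0 / min(t + 1, rate_cap)
        onehot = np.zeros(bins)
        onehot[obs] = 1.0
        hist = (1 - rate) * hist + rate * onehot
    return hist

# A long run of one static appearance (bin 0) dominates the model; any
# appearance falling in a low-probability bin would be flagged as unusual.
h = online_histogram([0] * 100)
```

An observation in a bin with probability below some threshold (say 0.01) would then be a candidate unusual appearance, to be confirmed by a spatio-temporal decision step.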

Collaboration


Dive into Soo Wan Kim's collaborations.

Top Co-Authors

Jin Young Choi (Seoul National University)
Kwang Moo Yi (Seoul National University)
Hawook Jeong (Seoul National University)
Kimin Yun (Seoul National University)
Sangdoo Yun (Seoul National University)
Shimin Yin (Seoul National University)
Songhwai Oh (Seoul National University)
Haanju Yoo (Seoul National University)
Moonsub Byeon (Seoul National University)
Sun Jung Kim (Seoul National University)