
Publication


Featured research published by Kimin Yun.


Machine Vision Applications | 2013

Detection of moving objects with a moving camera using non-panoramic background model

Soo Wan Kim; Kimin Yun; Kwang Moo Yi; Sun Jung Kim; Jin Young Choi

This paper presents a fast and reliable method for moving object detection with moving cameras (including pan-tilt-zoom and hand-held cameras). Instead of building a large panoramic background model as conventional approaches do, we construct a small background model, the same size as the input frame, to reduce computation time and memory use without loss of detection performance. The small background model is built with the proposed single spatio-temporal distributed Gaussian model, which resolves false detections arising from registration errors and the background adaptation problem under a moving background. Beyond the proposed background model based on spatial and temporal information, several pre- and post-processing methods are adopted and organized systematically to enhance detection performance. We evaluate the proposed method on several video sequences under difficult conditions, such as illumination change, large zoom variation, and fast camera movement, and show that our algorithm delivers superior detection results with fast computation time.
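The core idea of a frame-sized single-Gaussian background model can be sketched as below. This is a minimal illustration, not the paper's exact procedure: the adaptation rate `alpha` and threshold `k` are assumed illustrative values, and the camera-motion registration step is omitted.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """One step of a frame-sized single-Gaussian background model.

    Each pixel keeps a running mean and variance; a pixel is flagged
    foreground when it deviates from the mean by more than k standard
    deviations (alpha and k are hypothetical values).
    """
    diff = frame - mean
    foreground = diff ** 2 > (k ** 2) * var
    # Update the statistics only at background pixels so that
    # foreground objects do not contaminate the model.
    bg = ~foreground
    mean = np.where(bg, (1 - alpha) * mean + alpha * frame, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * diff ** 2, var)
    return mean, var, foreground
```

Because the model is the size of one frame rather than a full panorama, the per-frame cost stays proportional to the image area regardless of camera motion.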


Computer Vision and Pattern Recognition | 2013

Detection of Moving Objects with Non-stationary Cameras in 5.8ms: Bringing Motion Detection to Your Mobile Device

Kwang Moo Yi; Kimin Yun; Soo Wan Kim; Hyung Jin Chang; Hawook Jeong; Jin Young Choi

Detecting moving objects from mobile cameras in real time is a challenging problem due to computational limits and the motion of the camera. In this paper, we propose a method for moving object detection on non-stationary cameras that runs within 5.8 milliseconds (ms) on a PC, and in real time on mobile devices. To achieve real-time capability with satisfactory performance, the proposed method models the background with a dual-mode single Gaussian model (SGM) with age and compensates for camera motion by mixing neighboring models. Modeling through the dual-mode SGM prevents the background model from being contaminated by foreground pixels, while still allowing it to adapt to changes of the background. Mixing neighboring models reduces the errors arising from motion compensation, and their influence is further reduced by keeping the age of the model. To decrease the computational load, the proposed method also applies one dual-mode SGM to multiple pixels without performance degradation. Experimental results show the computational lightness and real-time capability of our method on a smartphone, with robust detection performance.
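The dual-mode SGM with age described above can be sketched as follows. This is a simplified scalar-per-pixel illustration under stated assumptions: the 1/age learning rate and the candidate-restart rule mirror the general idea, but the thresholds and swap condition are illustrative, not the paper's exact procedure.

```python
def age_weighted_update(mean, var, age, obs, max_age=30):
    """Age-weighted single-Gaussian update: young models adapt quickly
    (rate 1/age) and mature models become stable; capping the age
    bounds the minimum learning rate."""
    age = min(age + 1, max_age)
    rho = 1.0 / age
    mean = (1 - rho) * mean + rho * obs
    var = (1 - rho) * var + rho * (obs - mean) ** 2
    return mean, var, age

def dual_mode_update(apparent, candidate, obs, k=2.5):
    """Dual-mode update for one pixel: the observation trains whichever
    mode it matches; if it matches neither, the candidate restarts.
    The apparent and candidate modes swap once the candidate outlives
    the apparent mode, keeping the background free of transient
    foreground."""
    mean_a, var_a, age_a = apparent
    if (obs - mean_a) ** 2 <= k ** 2 * var_a:
        apparent = age_weighted_update(mean_a, var_a, age_a, obs)
    else:
        mean_c, var_c, age_c = candidate
        if (obs - mean_c) ** 2 <= k ** 2 * var_c:
            candidate = age_weighted_update(mean_c, var_c, age_c, obs)
        else:
            candidate = (obs, var_a, 1)   # restart the candidate mode
    if candidate[2] > apparent[2]:        # candidate has proven itself
        apparent, candidate = candidate, apparent
    return apparent, candidate
```

A persistent new intensity first trains only the candidate mode; only after it has outlived the old background does it become the apparent background, which is what keeps foreground pixels from contaminating the model.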


Computer Vision and Pattern Recognition | 2017

Action-Decision Networks for Visual Tracking with Deep Reinforcement Learning

Sangdoo Yun; Jongwon Choi; Youngjoon Yoo; Kimin Yun; Jin Young Choi

This paper proposes a novel tracker that is controlled by sequentially pursuing actions learned with deep reinforcement learning. In contrast to existing trackers using deep networks, the proposed tracker is designed to achieve light computation as well as satisfactory tracking accuracy in both location and scale. The deep network controlling the actions is pre-trained using various training sequences and fine-tuned during tracking for online adaptation to target and background changes. The pre-training utilizes deep reinforcement learning as well as supervised learning; the use of reinforcement learning enables even partially labeled data to be utilized successfully for semi-supervised learning. In an evaluation on the OTB dataset, the proposed tracker achieves competitive performance while running three times faster than state-of-the-art deep network-based trackers. The fast version of the proposed method, which operates in real time on a GPU, outperforms the state-of-the-art real-time trackers.
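The action-driven control loop can be sketched as below. This is a toy illustration of the idea only: the action set, step size, and the stubbed `policy` are assumptions standing in for the paper's learned deep network.

```python
# Hypothetical discrete action set in the spirit of action-driven
# tracking: each step, a (learned) policy picks one action that
# nudges the bounding box until it emits "stop".
ACTIONS = {
    "left": (-1, 0, 1.0), "right": (1, 0, 1.0),
    "up": (0, -1, 1.0), "down": (0, 1, 1.0),
    "scale_up": (0, 0, 1.1), "scale_down": (0, 0, 0.9),
    "stop": (0, 0, 1.0),
}

def apply_action(box, action, step=2.0):
    """box = (cx, cy, w, h); translate by `step` pixels or rescale."""
    dx, dy, s = ACTIONS[action]
    cx, cy, w, h = box
    return (cx + dx * step, cy + dy * step, w * s, h * s)

def track_step(box, policy, max_actions=10):
    """Apply actions sequentially until the policy says 'stop'
    (or a safety cap is reached)."""
    for _ in range(max_actions):
        action = policy(box)
        if action == "stop":
            break
        box = apply_action(box, action)
    return box
```

Because each frame needs only a handful of cheap action decisions rather than dense candidate scoring, this style of control is what enables the light computation claimed above.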


Computer Vision and Pattern Recognition | 2016

Visual Path Prediction in Complex Scenes with Crowded Moving Objects

Young Joon Yoo; Kimin Yun; Sangdoo Yun; Jonghee Hong; Hawook Jeong; Jin Young Choi

This paper proposes a novel path prediction algorithm that goes one step further than existing works focusing on single-target path prediction. We consider the moving dynamics of co-occurring objects for path prediction in scenes that include crowded moving objects. To solve this problem, we first suggest a two-layered probabilistic model to find major movement patterns and their co-occurrence tendencies. Utilizing the unsupervised learning results of the model, we present an algorithm to find the future location of any target object. Through extensive qualitative and quantitative experiments, we show that our algorithm can find a plausible future path in complex scenes with a large number of moving objects.
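The prediction step can be sketched in miniature as below. This is a deliberately simplified stand-in: the learned movement patterns are reduced to constant displacement vectors, whereas the paper's two-layered probabilistic model captures full patterns and their co-occurrence.

```python
import numpy as np

def predict_path(track, patterns, horizon=5):
    """Predict future positions by matching the target's most recent
    displacement to the closest learned movement pattern and rolling
    that pattern forward `horizon` steps.

    `patterns` here are toy constant-displacement vectors, a stand-in
    for the unsupervised patterns learned by the two-layered model.
    """
    recent = np.asarray(track[-1], float) - np.asarray(track[-2], float)
    dists = [np.linalg.norm(recent - np.asarray(p)) for p in patterns]
    best = np.asarray(patterns[int(np.argmin(dists))], dtype=float)
    pos = np.asarray(track[-1], dtype=float)
    path = []
    for _ in range(horizon):
        pos = pos + best          # extrapolate along the matched pattern
        path.append(pos.copy())
    return path
```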


International Conference on Image Processing | 2015

Robust and fast moving object detection in a non-stationary camera via foreground probability based sampling

Kimin Yun; Jin Young Choi

This paper proposes a robust and fast scheme to detect moving objects with a non-stationary camera. State-of-the-art methods still do not give satisfactory performance due to the drastic frame changes of a non-stationary camera. To improve robustness, we additionally use the spatio-temporal properties of moving objects: we build a foreground probability map that reflects these properties, then selectively apply the detection procedure and update the background model only at pixels selected using the foreground probability. The foreground probability is also used to refine the initial detection results into a clean foreground region. We compare our scheme quantitatively and qualitatively to the state-of-the-art methods in detection quality and speed; the experimental results show that our scheme outperforms all compared methods.
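The probability-based sampling can be sketched as below. The sampling rule and `alpha` are illustrative assumptions; the point is that likely-foreground pixels are exempted from the background update.

```python
import numpy as np

def selective_update(mean, frame, fg_prob, alpha=0.05, rng=None):
    """Update the background model only at pixels sampled with
    probability (1 - foreground probability), so likely-foreground
    pixels neither run the update nor pollute the background model."""
    rng = np.random.default_rng(0) if rng is None else rng
    sampled = rng.random(fg_prob.shape) < (1.0 - fg_prob)
    mean = np.where(sampled, (1 - alpha) * mean + alpha * frame, mean)
    return mean, sampled
```

Skipping high-probability foreground pixels is also where the speedup comes from: the per-frame work shrinks with the fraction of the frame occupied by confident foreground.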


International Conference on Pattern Recognition | 2014

Motion Interaction Field for Accident Detection in Traffic Surveillance Video

Kimin Yun; Hawook Jeong; Kwang Moo Yi; Soo Wan Kim; Jin Young Choi

This paper presents a novel method for modeling the interaction among multiple moving objects to detect traffic accidents. The proposed method to model object interactions is motivated by the motion of water waves responding to moving objects on a water surface. The shape of the water surface is modeled in a field form using Gaussian kernels, referred to as the Motion Interaction Field (MIF). By utilizing the symmetric properties of the MIF, we detect and localize traffic accidents without solving complex vehicle tracking problems. Experimental results show that our method outperforms existing works in detecting and localizing traffic accidents.
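The water-wave analogy can be sketched as below. This is a toy analogue only, assuming a simple velocity-projected Gaussian per object; the paper's actual MIF construction and symmetry test are more elaborate.

```python
import numpy as np

def motion_interaction_field(objects, grid, sigma=2.0):
    """Superpose one Gaussian 'ripple' per moving object, weighted by
    how strongly the object moves toward each grid point (a simplified
    analogue of water waves pushed around by objects on the surface).

    objects: iterable of (px, py, vx, vy); grid: (X, Y) meshgrids.
    """
    X, Y = grid
    field = np.zeros_like(X, dtype=float)
    for (px, py, vx, vy) in objects:
        dx, dy = X - px, Y - py
        g = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma ** 2))
        field += g * (dx * vx + dy * vy)  # positive where motion converges
    return field
```

In this toy field, two vehicles converging on the same point reinforce each other there, while vehicles moving apart cancel, which is the kind of asymmetry a collision detector can threshold without tracking individual vehicles.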


Computer Vision and Image Understanding | 2014

Spatio-temporal weighting in local patches for direct estimation of camera motion in video stabilization

Soo Wan Kim; Shimin Yin; Kimin Yun; Jin Young Choi

This paper presents a robust video stabilization method based on a novel formulation of camera motion estimation. We introduce spatio-temporal weighting on local patches in an optimization formulation, which enables one-step direct estimation without the outlier elimination adopted in most existing methods. The spatio-temporal weighting represents the reliability of a local region for estimating camera motion: it emphasizes regions whose motion is similar to the camera motion, such as backgrounds, and reduces the influence of unimportant regions, such as moving objects. We develop a formula to determine the spatio-temporal weights considering the age, edges, saliency, and distribution information of local patches. The proposed scheme reduces the computational load by eliminating the local-motion integration step and decreases the accumulation of fitting errors seen in existing two-step estimation methods. Through numerical experiments on several unstable videos, we verify that the proposed method gives better performance in camera motion estimation and in stabilizing jittering video sequences.
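For a pure-translation camera model, the one-step weighted estimate can be sketched as below. This is a minimal sketch under that assumption; the paper's formulation covers richer motion models and derives the weights from age, edge, saliency, and distribution cues rather than taking them as given.

```python
import numpy as np

def estimate_camera_translation(patch_motions, weights):
    """One-step weighted least-squares estimate of global (camera)
    translation from local patch motion vectors.

    Minimizing sum_i w_i * ||v_i - t||^2 over t gives the weighted
    mean, so background-like patches (high weight) dominate and
    moving-object patches (low weight) are down-weighted instead of
    being rejected in a separate outlier-elimination step."""
    v = np.asarray(patch_motions, dtype=float)   # shape (N, 2)
    w = np.asarray(weights, dtype=float)[:, None]
    return (w * v).sum(axis=0) / w.sum()
```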


Pattern Recognition Letters | 2017

Scene conditional background update for moving object detection in a moving camera

Kimin Yun; Jongin Lim; Jin Young Choi

Highlights: our method estimates three scene condition variables (background motion, foreground motion, and illumination change), builds the background model adaptively according to these variables, and thereby adapts itself to dynamic scene changes, outperforming the state-of-the-art methods.

This paper proposes a moving object detection algorithm that adapts to various scene changes in a moving camera. In a moving camera scene, both the background and the objects are moving, while the illumination level in general varies frequently. To handle these scene changes, we propose a scene conditional background update scheme that adaptively builds the background according to how the scene changes. First, we estimate the three scene condition variables of background motion, foreground motion, and illumination change for awareness of the scene condition. We then compensate for the camera movement and update the background model in different ways according to the scene condition. Lastly, we propose a new foreground decision method with a foreground likelihood map, two thresholds, and a watershed algorithm to generate a spatially connected foreground region. We validate the effectiveness of our method quantitatively and qualitatively on ten videos in various scene conditions. The experimental results show that our method adapts itself to dynamic scene changes and outperforms state-of-the-art methods.
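The conditional update policy can be sketched as below. The thresholds and rate values here are illustrative assumptions, not the paper's; the point is that the adaptation rate is chosen from the three scene condition variables rather than fixed.

```python
def background_update_rate(bg_motion, fg_motion, illum_change,
                           base=0.02, fast=0.2):
    """Pick the background adaptation rate from the three scene
    condition variables. All thresholds and rates are illustrative.

    - Large camera motion or illumination change: the scene itself is
      changing, so track it quickly.
    - Busy foreground: protect the background model from contamination
      by adapting more slowly.
    """
    if illum_change > 0.5 or bg_motion > 0.5:
        return fast
    if fg_motion > 0.5:
        return base / 2
    return base
```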


Advanced Video and Signal Based Surveillance | 2015

Robust pan-tilt-zoom tracking via optimization combining motion features and appearance correlations

Byeongju Lee; Kimin Yun; Jongwon Choi; Jin Young Choi

This paper proposes a new pan-tilt-zoom (PTZ) tracking method that improves robustness against occlusions and appearance changes by using a motion likelihood map and scale change estimation in addition to an appearance correlation filter. For this purpose, we introduce a motion likelihood map constructed from the motion detection result, which is generated by blurring that result and thus yields high probability at the center of the target. To combine the correlation filter and the motion likelihood map, we formulate an optimization problem. In addition, to handle scale changes of the target, we repeat the combining process over various scales of the bounding box. The experiments show that the proposed method outperforms the state-of-the-art methods.
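The fusion of the two cues can be sketched as a weighted combination. This is a minimal sketch: a simple linear blend with an illustrative weight `lam` stands in for the paper's optimization formulation.

```python
import numpy as np

def fuse_responses(corr, motion_map, lam=0.3):
    """Fuse the appearance correlation response with the (blurred)
    motion likelihood map and take the argmax of the combined score as
    the new target position. `lam` balances the two cues and is an
    illustrative value."""
    score = (1 - lam) * corr + lam * motion_map
    idx = np.unravel_index(np.argmax(score), score.shape)
    return idx, score
```

When the appearance filter is ambiguous, e.g. because of a similar-looking distractor, the motion cue breaks the tie, which is what buys robustness against occlusions and appearance changes.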


International Symposium on Visual Computing | 2014

Learning with Adaptive Rate for Online Detection of Unusual Appearance

Kimin Yun; Jiyun Kim; Soo Wan Kim; Hawook Jeong; Jin Young Choi

Detection of unusual or abnormal events is a popular research topic in the area of event analysis. Unlike conventional methods that focus on motion, we tackle the new problem of detecting unusual appearance in a surveillance video. With appearance features, however, the static appearance is so dominant that a biased learning problem can occur. To avoid this problem, we propose a new learning scheme with an adaptive learning rate. Moreover, to reduce noisy detections, we also suggest a spatio-temporal decision scheme. Experimental results show the effectiveness of the proposed method in detecting unusual appearances qualitatively and quantitatively.
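One way the adaptive-rate idea can be sketched is below. The base/(1 + count) decay schedule is an illustrative assumption, not the paper's exact rule; the point is that frequently observed (static) appearance bins get an ever-smaller learning rate, so they cannot bias the model.

```python
import numpy as np

def adaptive_rate_update(model, obs, counts, base=0.1):
    """Update an appearance model (e.g. a histogram) with a per-bin
    learning rate that decays with how often each bin has been
    observed, so the dominant static appearance cannot swamp rare
    appearances. The base/(1 + count) schedule is illustrative."""
    counts = counts + (obs > 0)           # how often each bin fired
    rate = base / (1.0 + counts)          # decaying per-bin rate
    model = (1.0 - rate) * model + rate * obs
    return model, counts
```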

Collaboration


Dive into Kimin Yun's collaborations.

Top Co-Authors

Jin Young Choi, Seoul National University
Soo Wan Kim, Seoul National University
Sangdoo Yun, Seoul National University
Hawook Jeong, Seoul National University
Jongwon Choi, Systems Research Institute
Jongin Lim, Seoul National University
Kwang Moo Yi, Seoul National University
Youngjoon Yoo, Seoul National University
Byeongju Lee, Seoul National University
Young Joon Yoo, Seoul National University