Junghyun Kwon
Seoul National University
Publication
Featured research published by Junghyun Kwon.
Computer Vision and Pattern Recognition | 2009
Junghyun Kwon; Kyoung Mu Lee; Frank C. Park
We propose a geometric method for visual tracking, in which the 2-D affine motion of a given object template is estimated in a video sequence by means of coordinate-invariant particle filtering on the 2-D affine group Aff(2). Tracking performance is further enhanced through a geometrically defined optimal importance function, obtained explicitly via Taylor expansion of a principal component analysis based measurement function on Aff(2). The efficiency of our approach to tracking is demonstrated via comparative experiments.
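The key idea above, propagating particles directly on the group Aff(2) via the exponential map rather than in a flat parameter vector, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Lie-algebra basis ordering, the isotropic Gaussian noise, and the function names are all assumptions.

```python
import numpy as np
from scipy.linalg import expm

def aff2_basis():
    # Basis of the Lie algebra aff(2): four generators for the GL(2)
    # part and two for the translation part, as 3x3 matrices.
    E = np.zeros((6, 3, 3))
    E[0, 0, 0] = E[1, 0, 1] = E[2, 1, 0] = E[3, 1, 1] = 1.0
    E[4, 0, 2] = E[5, 1, 2] = 1.0
    return E

def propagate_particles(particles, sigma, rng):
    """Move each 3x3 Aff(2) particle by exp of Lie-algebra noise,
    so the update is coordinate-invariant and stays on the group."""
    E = aff2_basis()
    out = []
    for X in particles:
        w = rng.normal(0.0, sigma, size=6)   # noise coordinates in aff(2)
        xi = np.tensordot(w, E, axes=1)      # Lie-algebra element (3x3)
        out.append(X @ expm(xi))             # left-invariant group update
    return out
```

Because the noise lives in the Lie algebra and is mapped through `expm`, every propagated particle remains a valid affine transform (last row `[0, 0, 1]`) with no projection step.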
Robotica | 2007
Junghyun Kwon; Minseok Choi; Frank C. Park; Changmook Chun
We address general filtering problems on the Euclidean group SE(3). We first generalize, to stochastic nonlinear systems evolving on SE(3), the particle filter of Liu and West for simultaneous estimation of the state and covariance. The filter is constructed in a coordinate-invariant way, and explicitly takes into account the geometry of SE(3) and P(n), the space of symmetric positive definite matrices. Some basic results for bilinear systems on SE(3) with linear and quadratic measurements are also derived. Three examples—GPS attitude estimation, needle tip location, and vision-based robot end-effector pose estimation—are presented to illustrate the framework.
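As a rough illustration of state propagation on SE(3) as used above, the standard twist/exponential-map update can be sketched like this; the 6-vector ordering (angular then linear velocity) and the left-invariant convention are assumptions for the sketch, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

def se3_hat(xi):
    """Map a 6-vector (wx, wy, wz, vx, vy, vz) to a 4x4 element of se(3)."""
    wx, wy, wz, vx, vy, vz = xi
    return np.array([[0.0, -wz,  wy, vx],
                     [ wz, 0.0, -wx, vy],
                     [-wy,  wx, 0.0, vz],
                     [0.0, 0.0, 0.0, 0.0]])

def propagate_pose(T, xi):
    """Left-invariant update of a 4x4 pose T in SE(3) by a twist xi,
    keeping the result on the group (rotation stays orthonormal)."""
    return T @ expm(se3_hat(xi))
```

A pure-translation twist moves only the position, and any rotational twist yields a rotation block that is exactly orthogonal, which is the point of filtering on the group rather than on raw matrix entries.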
The International Journal of Robotics Research | 2010
Junghyun Kwon; Frank C. Park
We present a particle filtering algorithm for visual tracking, in which the state equations for the object motion evolve on the two-dimensional affine group. We first formulate, in a coordinate-invariant and geometrically meaningful way, particle filtering on the affine group that allows for combined state-covariance estimation. Measurement likelihoods are also calculated from the image covariance descriptors using incremental principal geodesic analysis, a generalization of principal component analysis to curved spaces. Comparative visual tracking studies demonstrate the increased robustness of our tracking algorithm.
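To give a flavor of the image covariance descriptors mentioned above: a common construction stacks a per-pixel feature vector and summarizes a patch by the features' covariance matrix. The particular feature set below (position, intensity, gradient magnitudes) is a typical choice and an assumption of this sketch; the paper's exact features may differ.

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of a grayscale patch: per-pixel features
    (x, y, intensity, |Ix|, |Iy|) summarized by their 5x5 covariance."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    Iy, Ix = np.gradient(patch.astype(float))   # image gradients
    F = np.stack([xs.ravel().astype(float),
                  ys.ravel().astype(float),
                  patch.ravel().astype(float),
                  np.abs(Ix).ravel(),
                  np.abs(Iy).ravel()])
    return np.cov(F)   # symmetric positive semi-definite 5x5 matrix
```

The resulting descriptors live on the manifold of symmetric positive definite matrices, which is why the paper compares them with principal geodesic analysis rather than ordinary PCA.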
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014
Junghyun Kwon; Hee Seok Lee; Frank C. Park; Kyoung Mu Lee
Existing approaches to template-based visual tracking, in which the objective is to continuously estimate the spatial transformation parameters of an object template over video frames, have primarily been based on deterministic optimization, which as is well-known can result in convergence to local optima. To overcome this limitation of the deterministic optimization approach, in this paper we present a novel particle filtering approach to template-based visual tracking. We formulate the problem as a particle filtering problem on matrix Lie groups, specifically the three-dimensional Special Linear group SL(3) and the two-dimensional affine group Aff(2). Computational performance and robustness are enhanced through a number of features: (i) Gaussian importance functions on the groups are iteratively constructed via local linearization; (ii) the inverse formulation of the Jacobian calculation is used; (iii) template resizing is performed; and (iv) parent-child particles are developed and used. Extensive experimental results using challenging video sequences demonstrate the enhanced performance and robustness of our particle filtering-based approach to template-based visual tracking. We also show that our approach outperforms several state-of-the-art template-based visual tracking methods via experiments using the publicly available benchmark data set.
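To make the SL(3) state space above concrete: a homography particle can be perturbed by drawing noise in the Lie algebra sl(3) (traceless 3x3 matrices) and applying the exponential map, which keeps the determinant equal to one. This is a minimal sketch of the state space only, not the paper's iteratively constructed Gaussian importance functions; the noise model and names are assumptions.

```python
import numpy as np
from scipy.linalg import expm

def sample_sl3(H, sigma, rng):
    """Perturb a unit-determinant homography H on SL(3): draw Gaussian
    noise, project it onto sl(3) (trace zero), and move via expm."""
    A = rng.normal(0.0, sigma, size=(3, 3))
    A -= (np.trace(A) / 3.0) * np.eye(3)   # traceless => det(expm(A)) = 1
    return H @ expm(A)
```

Since det(expm(A)) = exp(trace(A)) = 1 for traceless A, every sampled particle remains a valid SL(3) homography without renormalization.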
Computer Vision and Pattern Recognition | 2010
Junghyun Kwon; Kyoung Mu Lee
We propose a novel geometric Rao-Blackwellized particle filtering framework for monocular SLAM with locally planar landmarks. We represent the states for the camera pose and the landmark plane normal as SE(3) and SO(3), respectively, which are both Lie groups. The measurement error is also represented as another Lie group SL(3) corresponding to the space of homography matrices. We then formulate the unscented transformation on Lie groups for optimal importance sampling and landmark estimation via unscented Kalman filter. The feasibility of our framework is demonstrated via various experiments.
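The unscented transformation on Lie groups mentioned above replaces vector-space sigma points with group elements obtained by exponentiating columns of the covariance square root around the mean. Below is a minimal sketch for SO(3) (the landmark-normal state); the scaling parameter and function names are assumptions, and sigma-point weights are omitted for brevity.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def so3_hat(w):
    """Map a 3-vector to the corresponding skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -w[2],  w[1]],
                     [w[2],  0.0, -w[0]],
                     [-w[1], w[0],  0.0]])

def sigma_points_so3(R, P, kappa=0.0):
    """Sigma points on SO(3): perturb the mean rotation R by exp of the
    columns of the scaled covariance square root (2n + 1 points)."""
    n = 3
    S = sqrtm((n + kappa) * P).real
    pts = [R]
    for j in range(n):
        pts.append(R @ expm(so3_hat(S[:, j])))
        pts.append(R @ expm(so3_hat(-S[:, j])))
    return pts
```

Every sigma point is an exact rotation matrix, so the transform respects the geometry of SO(3) instead of linearizing in a local chart.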
Computer Vision and Pattern Recognition | 2017
Dinghuang Ji; Junghyun Kwon; Max E. McFarland; Silvio Savarese
Recently, convolutional neural networks (CNN) have been successfully applied to view synthesis problems. However, such CNN-based methods can suffer from lack of texture details, shape distortions, or high computational complexity. In this paper, we propose a novel CNN architecture for view synthesis called Deep View Morphing that does not suffer from these issues. To synthesize a middle view of two input images, a rectification network first rectifies the two input images. An encoder-decoder network then generates dense correspondences between the rectified images and blending masks to predict the visibility of pixels of the rectified images in the middle view. A view morphing network finally synthesizes the middle view using the dense correspondences and blending masks. We experimentally show the proposed method significantly outperforms the state-of-the-art CNN-based view synthesis method.
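The final morphing step described above, combining two correspondence-aligned images with predicted visibility masks, reduces to a per-pixel weighted blend. The sketch below illustrates only that composition step; the normalization scheme and names are assumptions, and the rectification and correspondence networks are of course not shown.

```python
import numpy as np

def morph_middle_view(warped_a, warped_b, mask_a, mask_b):
    """Blend two warped input views into a middle view using visibility
    masks; pixels visible in only one view fall back to that view."""
    weight = mask_a + mask_b + 1e-8        # avoid division by zero
    return (mask_a * warped_a + mask_b * warped_b) / weight
```

Where both masks agree the result is an average; where one mask is zero (occlusion), the visible view alone supplies the pixel, which is how the blending masks suppress ghosting.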
International Conference on Computer Vision | 2011
Hee Seok Lee; Junghyun Kwon; Kyoung Mu Lee
Handling motion blur is one of the most important issues in visual SLAM. For a fast-moving camera, motion blur is unavoidable and can severely degrade the results of localization and reconstruction. In this paper, we present a unified algorithm to handle motion blur for visual SLAM, comprising a blur-robust data association method and a fast deblurring method. In our framework, camera motion and 3-D point structures are reconstructed by SLAM, and the information from SLAM makes the estimation of motion blur easy and effective. Conversely, estimating motion blur enables robust data association and drift-free localization of SLAM with blurred images. The blurred images are recovered by fast deconvolution using SLAM data, and more features are extracted and registered to the map so that the SLAM procedure can continue even with blurred images. In this way, visual SLAM and deblurring are solved simultaneously and significantly improve each other's results.
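Once SLAM supplies the camera motion, the blur of a feature can be approximated by the image-plane displacement accumulated during the exposure. A minimal sketch of building a linear motion-blur kernel from such a displacement is below; the rasterization, kernel size, and names are assumptions, not the paper's deconvolution method.

```python
import numpy as np

def linear_blur_kernel(dx, dy, size=15):
    """Approximate a motion-blur PSF as a normalized line segment for an
    image-plane displacement (dx, dy) accumulated during the exposure."""
    k = np.zeros((size, size))
    c = size // 2
    steps = max(abs(int(round(dx))), abs(int(round(dy))), 1)
    # Sample the displacement symmetrically around the exposure midpoint.
    for t in np.linspace(-0.5, 0.5, 2 * steps + 1):
        x = int(round(c + t * dx))
        y = int(round(c + t * dy))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] += 1.0
    return k / k.sum()   # normalize so the PSF preserves brightness
```

Such a kernel can then be fed to a standard deconvolution routine; with zero displacement it degenerates to a delta function, i.e. no blur.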
International Conference on Information and Automation | 2008
Junghyun Kwon; Frank C. Park
We propose a particle filtering-based visual tracker in which the state evolves on the affine group. We first develop a general particle filtering algorithm that explicitly takes into account the geometry of the affine group. Tracking performance is further enhanced by a geometric auto-regressive process for the state dynamics, combined state-covariance estimation, and robust measurement likelihood calculation using incremental principal geodesic analysis of the image covariance descriptors. The feasibility of our proposed visual tracker is demonstrated via experimental studies.
Intelligent Robots and Systems | 2006
Junghyun Kwon; Frank C. Park
This paper proposes a hidden Markov model (HMM) based approach to generating human-like movements for humanoid robots. Given human motion capture data for a class of movements, principal components are extracted for each class and used as basis elements that in turn represent more general movements within that class. An HMM is also designed and trained for each movement class using the movement data. Humanoid movement is then generated by selecting the linear combination of basis elements that yields the highest probability for the trained HMM, subject to user-specified movement boundary conditions. The feasibility of our proposed method is demonstrated via case studies of various arm motions.
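The basis-extraction step above, principal components of motion capture data reused as movement basis elements, can be sketched as follows. This shows only the PCA/linear-combination machinery; the HMM scoring and boundary-condition optimization are not reproduced, and all names here are illustrative assumptions.

```python
import numpy as np

def movement_basis(trajectories, k=3):
    """Extract k principal-component basis movements from motion-capture
    data, where each row is a flattened joint-angle trajectory."""
    X = np.asarray(trajectories, dtype=float)
    mean = X.mean(axis=0)
    # Rows of Vt are principal directions in trajectory space.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def synthesize(mean, basis, coeffs):
    """A general movement as the mean plus a linear combination of basis
    elements; the paper searches these coefficients for max HMM score."""
    return mean + np.asarray(coeffs) @ basis
```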
Image and Vision Computing | 2013
Young Ki Baik; Junghyun Kwon; Hee Seok Lee; Kyoung Mu Lee
Conventional particle filtering-based visual ego-motion estimation, or visual odometry, often suffers from large local linearization errors in the case of abrupt camera motion. The main contribution of this paper is a novel particle filtering-based visual ego-motion estimation algorithm that is especially robust to abrupt camera motion. This robustness is achieved by multi-layered importance sampling via particle swarm optimization (PSO), which iteratively moves particles toward higher-likelihood regions without local linearization of the measurement equation. Furthermore, we make the proposed algorithm run in real time by reformulating the conventional vector-space PSO algorithm to account for the geometry of the special Euclidean group SE(3), the Lie group representing the space of 3-D camera poses. The performance of our proposed algorithm is experimentally evaluated and compared with local linearization and unscented particle filter-based visual ego-motion estimation algorithms on both simulated and real data sets.
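A PSO update reformulated on SE(3), as described above, replaces vector differences with log maps and vector addition with the exponential map. The simplified single-step sketch below omits the persistent velocity (inertia) term of full PSO and uses illustrative coefficient values; it is not the paper's multi-layered sampling scheme.

```python
import numpy as np
from scipy.linalg import expm, logm

def pso_step_se3(T, T_pbest, T_gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One simplified PSO move for a pose particle T on SE(3): measure the
    pulls toward the personal and global best poses in the Lie algebra
    via logm, combine them with random weights, and step via expm."""
    if rng is None:
        rng = np.random.default_rng()
    to_pbest = logm(np.linalg.inv(T) @ T_pbest).real   # se(3) direction
    to_gbest = logm(np.linalg.inv(T) @ T_gbest).real
    step = c1 * rng.random() * to_pbest + c2 * rng.random() * to_gbest
    return T @ expm(w * step)
```

Because the combined step is formed in the Lie algebra and mapped back with `expm`, the particle remains a valid rigid-body pose after every iteration; when a particle already coincides with both bests, the step is zero and it stays put.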