Tae-Hyun Oh
KAIST
Publication
Featured research published by Tae-Hyun Oh.
International Conference on Computer Vision | 2013
Tae-Hyun Oh; Hyeongwoo Kim; Yu-Wing Tai; Jean Charles Bazin; In So Kweon
Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering the underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only is it known that the underlying structure of clean data is low-rank, but the exact rank of the clean data is also known. Yet, when applying conventional rank minimization to those problems, the objective function is formulated in a way that does not fully utilize a priori target rank information about the problems. This observation motivates us to investigate whether there is a better alternative solution when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values. The proposed objective function implicitly encourages the target rank constraint in rank minimization. Our experimental analyses show that our approach performs better than conventional rank minimization when the number of samples is deficient, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g., high dynamic range imaging, photometric stereo, and image alignment, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.
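The core proximal step admits a compact illustration. Below is a minimal NumPy sketch (not the authors' released code) of a partial singular value thresholding operator: the largest target-rank singular values are kept untouched and only the remaining ones are soft-thresholded, in contrast to plain SVT, which shrinks all of them. The function name `partial_svt` and the toy data are illustrative assumptions.

```python
import numpy as np

def partial_svt(X, target_rank, tau):
    """Keep the top `target_rank` singular values; soft-threshold the rest by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = s.copy()
    # shrink only the singular values beyond the known target rank
    s_thr[target_rank:] = np.maximum(s_thr[target_rank:] - tau, 0.0)
    return U @ np.diag(s_thr) @ Vt

# toy usage: a rank-2 matrix corrupted by a few large sparse outliers
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
X = L + 10.0 * (rng.random((50, 40)) < 0.05)
L_hat = partial_svt(X, target_rank=2, tau=1.0)
```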
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018
Tae-Hyun Oh; Yasuyuki Matsushita; Yu-Wing Tai; In So Kweon
Rank minimization can be converted into tractable surrogate problems, such as Nuclear Norm Minimization (NNM) and Weighted NNM (WNNM). The problems related to NNM, or WNNM, can be solved iteratively by applying a closed-form proximal operator, called Singular Value Thresholding (SVT), or Weighted SVT, but they suffer from the high computational cost of Singular Value Decomposition (SVD) at each iteration. We propose a fast and accurate approximation method for SVT, which we call fast randomized SVT (FRSVT), with which we avoid direct computation of the SVD. The key idea is to extract an approximate basis for the range of the matrix from its compressed matrix. Given the basis, we compute the partial singular values of the original matrix from the small factored matrix. In addition, by developing a range propagation method, our method further speeds up the extraction of the approximate basis at each iteration. Our theoretical analysis shows the relationship between the approximation bound of the SVD and its effect on NNM via SVT. Along with the analysis, our empirical results quantitatively and qualitatively show that our approximation rarely harms the convergence of the host algorithms. We assess the efficiency and accuracy of the proposed method on various computer vision problems, e.g., subspace clustering, weather artifact removal, and simultaneous multi-image alignment and rectification.
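To make the range-basis idea concrete, here is a hedged NumPy sketch of a randomized SVT step in the spirit described above (not the paper's FRSVT implementation; the function name `randomized_svt`, the sketch size, and the power-iteration count are illustrative assumptions): a Gaussian test matrix compresses the input, a QR factorization yields an approximate range basis, and the SVD plus soft-thresholding is carried out on the small factored matrix.

```python
import numpy as np

def randomized_svt(A, tau, sketch_size, n_power_iter=1, rng=None):
    """Approximate soft-thresholding of singular values via a random range sketch."""
    rng = np.random.default_rng() if rng is None else rng
    # 1) approximate basis for the range of A from its compressed matrix
    omega = rng.standard_normal((A.shape[1], sketch_size))
    Y = A @ omega
    for _ in range(n_power_iter):            # power iterations sharpen the basis
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # 2) SVD of the small factored matrix B = Q^T A
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    # 3) soft-threshold the approximate singular values and lift back with Q
    s_thr = np.maximum(s - tau, 0.0)
    return (Q @ Ub) @ np.diag(s_thr) @ Vt
```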
European Conference on Computer Vision | 2012
Jaesik Park; Tae-Hyun Oh; Jiyoung Jung; Yu-Wing Tai; In So Kweon
We introduce a framework to estimate and refine 3D scene flow, which connects 3D structures of a scene across different frames. In contrast to previous approaches, which compute 3D scene flow that connects depth maps from a stereo image sequence or from a depth camera, our approach takes advantage of full 3D reconstruction and computes the 3D scene flow that connects 3D point clouds from a multi-view stereo system. Our approach uses standard multi-view stereo and optical flow algorithms to compute the initial 3D scene flow. A unique two-stage refinement process regularizes the scene flow direction and magnitude sequentially. The scene flow direction is refined by utilizing 3D neighbor smoothness defined by tensor voting. The magnitude of the scene flow is refined by connecting the implicit surfaces across consecutive 3D point clouds. Our estimated scene flow is temporally consistent. Our approach is efficient and model-free, and it is effective for error correction and outlier rejection. We tested our approach on both synthetic and real-world datasets. Our experimental results show that our approach outperforms previous algorithms quantitatively on the synthetic dataset and improves the reconstructed 3D model from the refined 3D point cloud on the real-world dataset.
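As a rough illustration of the direction-refinement stage only, the sketch below substitutes a simple k-nearest-neighbor averaging of unit flow directions for the paper's tensor-voting regularizer; the function name, the choice of k, and the use of SciPy's KD-tree are all assumptions for illustration, not the authors' method.

```python
import numpy as np
from scipy.spatial import cKDTree

def smooth_flow_directions(points, flow, k=8):
    """points: (N, 3) 3D positions; flow: (N, 3) initial scene flow vectors."""
    magnitudes = np.linalg.norm(flow, axis=1, keepdims=True)
    directions = flow / (magnitudes + 1e-12)
    _, idx = cKDTree(points).query(points, k=k)      # indices of k nearest neighbors
    # average each point's unit direction with its spatial neighbors, then renormalize
    avg_dir = directions[idx].mean(axis=1)
    avg_dir /= np.linalg.norm(avg_dir, axis=1, keepdims=True) + 1e-12
    return avg_dir * magnitudes                      # magnitudes are refined in a separate stage
```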
Intelligent Robots and Systems | 2012
Dong-Geol Choi; Inwook Shim; Yunsu Bok; Tae-Hyun Oh; In So Kweon
Building maps of unknown environments is a critical factor for autonomous navigation and homing, and this problem is especially challenging in large-scale environments. Recently, sensor fusion systems, such as combinations of cameras and laser sensors, have become popular in the effort to ensure a general level of performance in this task. In this paper, we present a new homing method for large-scale environments using a laser-camera fusion system. Instead of fusing data to form a single map builder, we adaptively select sensor data to handle environments that contain ambiguity. For autonomous homing, we propose a new mapping strategy for building a hybrid map and a return strategy for efficiently selecting the next target waypoints. The experimental results demonstrate that the proposed algorithm enables the autonomous homing of a robot in large-scale indoor environments in real time.
IEEE Signal Processing Letters | 2018
Hyowon Ha; Tae-Hyun Oh; In So Kweon
The introduction of small-motion techniques, such as the small-angle rotation approximation, has enabled three-dimensional reconstruction from a small motion of a camera, so-called structure from small motion (SfSM). In this letter, we propose a closed-form solution dedicated to the rotation estimation problem in SfSM. We show that our method works with a minimal set of two points and has mild conditions for producing a unique optimal solution in practice. We also introduce a three-step SfSM pipeline with better convergence and faster speed compared to state-of-the-art SfSM approaches. The key to this improvement is the separate estimation of the rotation with the proposed two-point method, in order to handle the bas-relief ambiguity that affects the convergence of the bundle adjustment. We demonstrate the effectiveness of our two-point minimal solution and the three-step SfSM approach in synthetic and real-world experiments under the small-motion regime.
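To see why a small rotation can be recovered from so few correspondences, the sketch below sets up the generic first-order linearization R ≈ I + [w]_x, under which bearing vectors satisfy x2 − x1 ≈ w × x1, a system linear in w. This is a hedged least-squares illustration of the small-angle approximation, not the paper's closed-form two-point solver; the function names and toy data are assumptions.

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def small_rotation_from_bearings(x1, x2):
    """x1, x2: (N, 3) unit bearing vectors before/after a small rotation, N >= 2."""
    A = np.vstack([-skew(x) for x in x1])        # stack 3x3 blocks -> (3N, 3)
    b = (x2 - x1).reshape(-1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares rotation vector
    return np.eye(3) + skew(w)                   # first-order rotation estimate

# toy check with a rotation of roughly one degree about a random axis
rng = np.random.default_rng(1)
w_true = 0.02 * rng.standard_normal(3)
R_true = np.eye(3) + skew(w_true)                # small-angle ground truth
x1 = rng.standard_normal((2, 3))
x1 /= np.linalg.norm(x1, axis=1, keepdims=True)
x2 = x1 @ R_true.T
R_est = small_rotation_from_bearings(x1, x2)
```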
Computer Vision and Pattern Recognition | 2018
Arda Senocak; Tae-Hyun Oh; Jun-Sik Kim; Ming-Hsuan Yang; In So Kweon
International Conference on Computer Vision | 2017
Donghyeon Cho; Jinsun Park; Tae-Hyun Oh; Yu-Wing Tai; In So Kweon
Neural Information Processing Systems | 2016
Tae-Hyun Oh; Yasuyuki Matsushita; In So Kweon; David P. Wipf
Archive | 2014
Ji Ho Kim; Hyun Nam Lee; Soon Min Bae; Soon Min Hwang; Jong Won Choi; Joon Young Lee; Tae-Hyun Oh; In So Kweon
Computer Vision and Pattern Recognition | 2018
Arda Senocak; Tae-Hyun Oh; Jun-Sik Kim; In So Kweon