Publication


Featured research published by Kyungdon Joo.


International Conference on Computer Vision | 2015

High Quality Structure from Small Motion for Rolling Shutter Cameras

Sunghoon Im; Hyowon Ha; Gyeongmin Choe; Hae-Gon Jeon; Kyungdon Joo; In So Kweon

We present a practical 3D reconstruction method to obtain a high-quality dense depth map from narrow-baseline image sequences captured by commercial digital cameras, such as DSLRs or mobile phones. Depth estimation from small motion has gained interest as a means of photographic editing, but it suffers from depth uncertainty due to the narrow baseline and from rolling-shutter distortion. To address these problems, we introduce a novel 3D reconstruction method for narrow-baseline image sequences that effectively handles the rolling-shutter effects present in most commercial digital cameras. Additionally, we present a depth propagation method that fills the holes associated with unknown pixels based on our novel geometric guidance model. Both qualitative and quantitative experimental results show that our algorithm consistently generates better 3D depth maps than the state-of-the-art method.
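
Depth from small motion is commonly formulated as a plane sweep over depth hypotheses. The toy Python sketch below is not the authors' implementation and omits the rolling-shutter correction that is the paper's contribution; it assumes a purely translational, rectified pair so each depth hypothesis maps to a horizontal pixel shift.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def plane_sweep_depth(ref, src, f, baseline, depths, patch=5):
        """Score each depth hypothesis by the photometric error after
        shifting the second image by the disparity that depth implies."""
        costs = []
        for d in depths:
            disp = int(round(f * baseline / d))    # disparity implied by depth d
            shifted = np.roll(src, -disp, axis=1)  # toy warp (wraps at border)
            costs.append(uniform_filter(np.abs(ref - shifted), patch))
        # per pixel, keep the depth with the lowest aggregated matching cost
        return np.asarray(depths)[np.argmin(np.stack(costs), axis=0)]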


Journal of Field Robotics | 2017

Robot System of DRC-HUBO+ and Control Strategy of Team KAIST in DARPA Robotics Challenge Finals

Jeongsoo Lim; In-Ho Lee; Inwook Shim; Hyobin Jung; Hyun Min Joe; Hyoin Bae; Okkee Sim; Jaesung Oh; Taejin Jung; Seunghak Shin; Kyungdon Joo; Mingeuk Kim; Kangkyu Lee; Yunsu Bok; Dong-Geol Choi; Buyoun Cho; Sungwoo Kim; Jung-Woo Heo; Inhyeok Kim; Jungho Lee; In So Kweon; Jun-Ho Oh

This paper summarizes how Team KAIST prepared for the DARPA Robotics Challenge (DRC) Finals, especially in terms of the robot system and control strategy. To simulate the Fukushima nuclear disaster situation, the DRC comprised a total of eight tasks performed under degraded communication conditions. The competition demanded various robotic technologies such as manipulation, mobility, telemetry, autonomy, and localization; their systematic integration and the overall system robustness were also important issues in completing the challenge. In this sense, this paper presents the hardware and software system of the DRC-HUBO+, the humanoid robot used for the DRC; it also presents control methods such as inverse kinematics, compliance control, a walking algorithm, and a vision algorithm, all of which were implemented to accomplish the tasks. The strategies and operations for each task are briefly explained along with the vision algorithms, and we summarize the lessons learned from the DRC before concluding. Twenty-five international teams participated in the competition with their various robot platforms; we competed using the DRC-HUBO+ and won first place.


IEEE Transactions on Visualization and Computer Graphics | 2016

A Real-Time Augmented Reality System to See-Through Cars

Francois Rameau; Hyowon Ha; Kyungdon Joo; Jinsoo Choi; Kibaek Park; In So Kweon

One of the most hazardous driving scenarios is overtaking a slower vehicle: the front vehicle (being overtaken) can occlude a large part of the field of view of the rear vehicle's driver, and this lack of visibility is the most probable cause of accidents in this context. Recent research suggests that augmented reality applied to assisted driving can significantly reduce the risk of accidents. In this paper, we present a real-time, marker-less system to see through cars. For this purpose, two cars are equipped with cameras and an appropriate wireless communication system. The stereo vision system mounted on the front car creates a sparse 3D map of the environment in which the rear car can be localized. Using this inter-car pose estimation, a synthetic image is generated to overcome the occlusion and create a seamless see-through effect that preserves the structure of the scene.
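
To illustrate just the compositing step, here is a minimal Python/OpenCV sketch that warps the front car's view into the rear camera frame and blends it over the occluded region. The homography H and the occlusion mask are assumed given; in the paper they come from the sparse 3D map and inter-car pose estimation, not from this code.

    import cv2
    import numpy as np

    def see_through_composite(rear_img, front_img, H, car_mask, alpha=0.6):
        """Blend the front car's (warped) view over the occluding car.
        H        : 3x3 homography mapping front-car pixels to rear-car pixels
                   (assumed known here)
        car_mask : uint8 mask of the occluding car in the rear image"""
        h, w = rear_img.shape[:2]
        warped = cv2.warpPerspective(front_img, H, (w, h))
        # semi-transparent blend restricted to the masked (occluded) region
        m = (car_mask > 0)[..., None].astype(np.float32) * alpha
        return (rear_img * (1 - m) + warped * m).astype(np.uint8)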


International Conference on Image Processing | 2015

Line meets as-projective-as-possible image stitching with moving DLT

Kyungdon Joo; Namil Kim; Tae-Hyun Oh; In So Kweon

We propose a spatially varying stitching method that uses line correspondences. We are motivated by the observation that point features can be spatially biased or unmatched in practice, e.g., in repeated textures or homogeneous regions of man-made structures. In such scenarios, line matches provide strong correspondences as well as supplementary cues, such as a structure-preserving property. With these advantages, we adopt a feature fusion method that combines point and line correspondences into a unified framework for spatially varying stitching. We then estimate the balancing parameter between the point and line terms using geometric error. Our experiments show accurate alignment for challenging but common cases.
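
The underlying moving DLT machinery (Zaragoza et al.) estimates one homography per grid cell by distance-weighting the correspondences. A points-only NumPy sketch follows; the line term this paper adds to the weighted system is omitted, and sigma/gamma are illustrative parameters.

    import numpy as np

    def moving_dlt(src_pts, dst_pts, cell_centers, sigma=12.0, gamma=0.05):
        """One homography per grid cell, estimated from correspondences
        weighted by their distance to the cell center (points only)."""
        src_pts = np.asarray(src_pts, float)
        # standard DLT: two rows per point correspondence (x' cross Hx = 0)
        A = []
        for (x, y), (u, v) in zip(src_pts, np.asarray(dst_pts, float)):
            A.append([0, 0, 0, -x, -y, -1,  v*x,  v*y,  v])
            A.append([x, y, 1,  0,  0,  0, -u*x, -u*y, -u])
        A = np.asarray(A)
        Hs = []
        for c in cell_centers:
            d2 = np.sum((src_pts - np.asarray(c, float))**2, axis=1)
            w = np.maximum(np.exp(-d2 / sigma**2), gamma)  # offset Gaussian
            _, _, Vt = np.linalg.svd(np.repeat(w, 2)[:, None] * A)
            Hs.append(Vt[-1].reshape(3, 3))  # null vector = local homography
        return Hs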


International Conference on Computer Vision | 2015

Accurate Camera Calibration Robust to Defocus Using a Smartphone

Hyowon Ha; Yunsu Bok; Kyungdon Joo; Jiyoung Jung; In So Kweon

We propose a novel camera calibration method for defocused images using a smartphone, under the assumption that the defocus blur is modeled as a convolution of a sharp image with a Gaussian point spread function (PSF). In contrast to existing calibration approaches, which require well-focused images, the proposed method achieves accurate camera calibration even with severely defocused images. This robustness to defocus is due to the proposed set of unidirectional binary patterns, which simplifies 2D Gaussian deconvolution to a 1D Gaussian deconvolution problem with multiple observations. By capturing the set of patterns consecutively displayed on a smartphone, we formulate feature extraction as a deconvolution problem that estimates feature point locations with sub-pixel accuracy along with the blur kernel at each location. We also compensate for the error in camera parameters caused by refraction through the glass panel of the display device. We evaluate the performance of the proposed method on synthetic and real data; even under severe defocus, our method produces accurate camera calibration results.
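
To give a feel for the 1D simplification, the sketch below brute-forces the Gaussian blur width of a known unidirectional binary pattern and extracts a sub-pixel edge location. The paper instead solves a proper multi-observation deconvolution; this is only a simplified stand-in, and the search grid is an assumption.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def estimate_sigma_1d(observed, pattern, sigmas=np.linspace(0.5, 20, 40)):
        """Pick the Gaussian sigma whose blur of the known binary pattern
        best matches the observed scanline (brute-force search)."""
        errs = [np.sum((gaussian_filter1d(pattern.astype(float), s)
                        - observed)**2) for s in sigmas]
        return float(sigmas[int(np.argmin(errs))])

    def subpixel_edge(scanline):
        """Sub-pixel edge location as the centroid of the gradient magnitude."""
        g = np.abs(np.gradient(scanline.astype(float)))
        return float((np.arange(len(g)) * g).sum() / g.sum())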


IEEE-RAS International Conference on Humanoid Robots | 2015

Control strategies for a humanoid robot to drive and then egress a utility vehicle for remote approach

Hyobin Jeong; Jaesung Oh; Mingeuk Kim; Kyungdon Joo; In So Kweon; Jun-Ho Oh

This paper proposes strategies for driving and egressing a vehicle with a humanoid robot. To drive the vehicle, the RANSAC method was used to detect obstacles, and a wagon model was used to control the steering and velocity of the vehicle with only the limited number of sensors installed on the humanoid robot. Additionally, a manual tele-operation method was used with a lane projection technique. For the egress motion, gain override and a Cartesian position/force control technique were used to interact with the vehicle structure. To overcome the disadvantages of a highly geared manipulator, a special technique was used that included modelled friction compensation and a non-complementary switching mode. DRC-HUBO+ used the proposed method to perform the vehicle driving and egress tasks in the 2015 DRC Finals.
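
The obstacle detection step can be approximated by a standard RANSAC ground-plane fit, as in the generic Python sketch below; the threshold and iteration count are illustrative, not the paper's tuned values.

    import numpy as np

    def ransac_ground_plane(points, n_iter=200, thresh=0.05, rng=None):
        """Fit a dominant plane to an (N,3) point cloud with RANSAC; points
        far from the plane are treated as obstacle candidates."""
        rng = np.random.default_rng(rng)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iter):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue                      # degenerate (collinear) sample
            n /= norm
            dist = np.abs((points - p0) @ n)  # point-to-plane distances
            inliers = dist < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        obstacles = points[~best_inliers]     # everything off the plane
        return best_inliers, obstacles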


International Conference on Robotics and Automation | 2016

Vision system and depth processing for DRC-HUBO+

Inwook Shim; Seunghak Shin; Yunsu Bok; Kyungdon Joo; Dong-Geol Choi; Joon-Young Lee; Jaesik Park; Jun-Ho Oh; In So Kweon

This paper presents the vision system and depth processing algorithm of DRC-HUBO+, the winner of the 2015 DRC Finals. Our system is designed to reliably capture 3D information of a scene and objects and to be robust to challenging environmental conditions. We also propose a depth-map upsampling method that produces an outlier-free depth map by explicitly handling depth outliers. Our system is suitable for robotic applications in which a robot interacts with the real world and requires accurate object detection and pose estimation. We evaluate our depth processing algorithm against state-of-the-art algorithms on several synthetic and real-world datasets.
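
As a rough illustration of outlier-aware upsampling, the sketch below applies joint bilateral upsampling while skipping invalid depths. It is a generic baseline rather than the paper's method, and it assumes the guide image is exactly `scale` times the depth-map resolution.

    import numpy as np

    def joint_bilateral_upsample(depth_lo, guide, scale,
                                 sigma_s=2.0, sigma_r=10.0):
        """Upsample a low-res depth map guided by a high-res intensity
        image; invalid depths (<= 0) are skipped so outliers and holes
        do not bleed into the output."""
        H, W = guide.shape
        hl, wl = depth_lo.shape
        out = np.zeros((H, W))
        r = int(2 * sigma_s)
        for y in range(H):
            for x in range(W):
                acc, wsum = 0.0, 0.0
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        yh, xh = y + dy, x + dx
                        if not (0 <= yh < H and 0 <= xh < W):
                            continue
                        d = depth_lo[min(yh // scale, hl - 1),
                                     min(xh // scale, wl - 1)]
                        if d <= 0:
                            continue          # hole or rejected outlier
                        # spatial weight x range weight from the guide image
                        w = np.exp(-(dx*dx + dy*dy) / (2 * sigma_s**2)
                                   - (float(guide[y, x])
                                      - float(guide[yh, xh]))**2
                                   / (2 * sigma_r**2))
                        acc += w * d
                        wsum += w
                out[y, x] = acc / wsum if wsum > 0 else 0.0
        return out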


Computer Vision and Pattern Recognition | 2016

Globally Optimal Manhattan Frame Estimation in Real-Time

Kyungdon Joo; Tae-Hyun Oh; Jun-Sik Kim; In So Kweon

Given a set of surface normals, we pose Manhattan Frame (MF) estimation as a consensus set maximization that maximizes the number of inliers over the rotation search space. We solve this problem through a branch-and-bound framework, which mathematically guarantees a globally optimal solution. However, the computational time of conventional branch-and-bound algorithms is intractable for real-time performance. In this paper, we propose a novel bound computation method within an efficient measurement domain for MF estimation, i.e., the extended Gaussian image (EGI). By relaxing the original problem, we compute the bounds in real time while preserving global optimality. Furthermore, we quantitatively and qualitatively demonstrate the performance of the proposed method on synthetic and real-world data. We also show the versatility of our approach through two applications: extension to multiple MF estimation and video stabilization.
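
The bound structure of such a branch-and-bound search is easiest to see in one dimension. The toy sketch below maximizes consensus over a single rotation angle; the paper's actual search is over SO(3) with EGI-based bounds, so this only illustrates the prune/split pattern.

    import heapq
    import numpy as np

    def bnb_rotation_1d(theta_obs, theta_model=0.0, eps=0.05, tol=1e-3):
        """Branch-and-bound consensus maximization over one rotation angle:
        find the rotation aligning noisy observed directions with a model
        direction while maximizing the inlier count."""
        def residuals(t):  # angular distance, wrapped to [-pi, pi)
            return np.abs((theta_obs - theta_model - t + np.pi)
                          % (2 * np.pi) - np.pi)
        def lower(t):            # inliers at the interval center
            return int(np.sum(residuals(t) < eps))
        def upper(t, half):      # relax the threshold by the interval radius
            return int(np.sum(residuals(t) < eps + half))
        # max-heap on the upper bound: (-ub, center, half-width)
        heap = [(-upper(0.0, np.pi), 0.0, np.pi)]
        best_lb, best_t = -1, 0.0
        while heap:
            neg_ub, t, half = heapq.heappop(heap)
            if -neg_ub <= best_lb:
                continue         # cannot beat the incumbent: prune
            lb = lower(t)
            if lb > best_lb:
                best_lb, best_t = lb, t
            if half > tol:       # split the interval and recurse
                for c in (t - half / 2, t + half / 2):
                    heapq.heappush(heap, (-upper(c, half / 2), c, half / 2))
        return best_t, best_lb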


Pattern Recognition Letters | 2018

Efficient adaptive non-maximal suppression algorithms for homogeneous spatial keypoint distribution

Oleksandr Bailo; Francois Rameau; Kyungdon Joo; Jinsun Park; Oleksandr Bogdan; In So Kweon

Keypoint detection usually results in a large number of keypoints that are mostly clustered, redundant, and noisy. These keypoints often require special processing, such as Adaptive Non-Maximal Suppression (ANMS), to retain only the most relevant ones. In this paper, we present three new, efficient ANMS approaches that ensure a fast and homogeneous spatial distribution of the keypoints in the image. For this purpose, we propose a square approximation of the search range used to suppress irrelevant points, which reduces the computational complexity of ANMS. To further speed up the proposed approaches, we also introduce a novel strategy to initialize the search range based on the image dimensions, which leads to faster convergence. An exhaustive survey and comparison with existing methods highlight the effectiveness and scalability of our methods and of the initialization strategy.
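
The square-approximation idea can be caricatured with a grid: accept keypoints strongest-first and suppress any later keypoint that lands in an already occupied cell. This fixed-cell sketch omits the paper's binary search over the suppression range, and the cell-size heuristic is an assumption.

    import numpy as np

    def grid_anms(keypoints, responses, img_w, img_h, num_ret=500):
        """Grid-based ANMS in the spirit of the square approximation:
        keep the strongest keypoint per occupied square cell."""
        order = np.argsort(-np.asarray(responses))            # strongest first
        cell = max(1, int(np.sqrt(img_w * img_h / num_ret)))  # crude range
        occupied, kept = set(), []
        for i in order:
            x, y = keypoints[i]
            key = (int(x) // cell, int(y) // cell)
            if key in occupied:
                continue              # suppressed by a stronger neighbor
            occupied.add(key)
            kept.append(i)
            if len(kept) == num_ret:
                break
        return kept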


International Conference on Ubiquitous Robots and Ambient Intelligence | 2016

Human body part classification from optical flow

Jun-Sik Kim; Kyungdon Joo; Tae-Hyun Oh; In So Kweon

Body part (BP) classification is important in human image analysis. In this paper, we propose an optical-flow-based, pixel-wise BP classifier using a random forest (RF) in a monocular video. Running the BP classifier on the optical flow of each frame generates noisy misclassifications, so we integrate the classifier with a human detector and temporal voting to remove most of the misclassified pixels. The proposed method is evaluated quantitatively on a real-world dataset and shows improved performance.
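
A minimal version of such a pipeline, using per-pixel flow features and scikit-learn's random forest, might look like the sketch below; the feature design (flow vector plus offset-flow differences) and the hyperparameters are assumptions, not the paper's.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def flow_features(flow, offsets=((0, 5), (5, 0), (0, -5), (-5, 0))):
        """Per-pixel features from an (H, W, 2) flow field: the flow vector
        and its difference to the flow at a few fixed pixel offsets."""
        h, w, _ = flow.shape
        feats = [flow]
        for dy, dx in offsets:
            shifted = np.roll(flow, (dy, dx), axis=(0, 1))
            feats.append(flow - shifted)
        return np.concatenate(feats, axis=2).reshape(h * w, -1)

    def train_bp_classifier(flows, label_maps):
        """Fit a pixel-wise RF on flow fields and per-pixel BP label maps."""
        X = np.vstack([flow_features(f) for f in flows])
        y = np.concatenate([l.ravel() for l in label_maps])
        clf = RandomForestClassifier(n_estimators=50, max_depth=15, n_jobs=-1)
        clf.fit(X, y)
        return clf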
