
Publication


Featured research published by Hyon Lim.


IEEE Robotics & Automation Magazine | 2012

Build Your Own Quadrotor: Open-Source Projects on Unmanned Aerial Vehicles

Hyon Lim; Jaemann Park; Daewon Lee; Hak-Ju Kim

This article presents a survey of publicly available open-source projects (OSPs) on quadrotor unmanned aerial vehicles (UAVs). Recently, there has been increasing interest in quadrotor UAVs. Exciting videos published on the Internet by many research groups have attracted much attention from the public [1]–[7]. The relatively simple structure of quadrotors has promoted interest from academia, UAV industries, and radio-control (RC) hobbyists alike. Unlike conventional helicopters, quadrotors do not require swashplates, which are prone to failure without constant maintenance. Furthermore, the diameter of the individual rotors can be reduced as a result of the presence of four actuators [8].


Computer Vision and Pattern Recognition | 2012

Real-time image-based 6-DOF localization in large-scale environments

Hyon Lim; Sudipta N. Sinha; Michael F. Cohen; Matthew Uyttendaele

We present a real-time approach for image-based localization within large scenes that have been reconstructed offline using structure from motion (SfM). From monocular video, our method continuously computes a precise 6-DOF camera pose, by efficiently tracking natural features and matching them to 3D points in the SfM point cloud. Our main contribution lies in efficiently interleaving a fast keypoint tracker that uses inexpensive binary feature descriptors with a new approach for direct 2D-to-3D matching. The 2D-to-3D matching avoids the need for online extraction of scale-invariant features. Instead, offline we construct an indexed database containing multiple DAISY descriptors per 3D point extracted at multiple scales. The key to the efficiency of our method lies in invoking DAISY descriptor extraction and matching sparingly during localization, and in distributing this computation over a window of successive frames. This enables the algorithm to run in real-time, without fluctuations in the latency over long durations. We evaluate the method in large indoor and outdoor scenes. Our algorithm runs at over 30 Hz on a laptop and at 12 Hz on a low-power, mobile computer suitable for onboard computation on a quadrotor micro aerial vehicle.
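The frame-distribution idea above, where expensive DAISY extraction and matching is spread over a window of successive frames to keep per-frame latency bounded, can be sketched as follows. This is a minimal illustration under assumed names and a simple round-robin policy, not the paper's actual scheduler:

```python
def amortized_buckets(keypoint_ids, window=4):
    """Partition keypoints into `window` round-robin buckets so that
    only one bucket's descriptors are extracted/matched per frame."""
    buckets = [[] for _ in range(window)]
    for i, kp in enumerate(keypoint_ids):
        buckets[i % window].append(kp)
    return buckets


def keypoints_for_frame(buckets, frame_idx):
    """Bucket scheduled for this frame; the full keypoint set is
    refreshed every `window` frames, so per-frame cost stays roughly
    constant instead of spiking."""
    return buckets[frame_idx % len(buckets)]
```

For example, with 10 keypoints and a window of 4, each frame touches at most 3 descriptors instead of all 10, at the cost of each descriptor being refreshed only every fourth frame.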


IEEE/ASME Transactions on Mechatronics | 2013

Fully Autonomous Vision-Based Net-Recovery Landing System for a Fixed-Wing UAV

H. Jin Kim; Mingu Kim; Hyon Lim; Chul-Woo Park; Seungho Yoon; Daewon Lee; Hyun Jin Choi; Gyeongtaek Oh; Jong-Ho Park; Youdan Kim

This paper presents an autonomous vision-based net-recovery system for small fixed-wing unmanned aerial vehicles (UAVs). A fixed-wing UAV platform is constructed using various avionic sensors and integrated with a flight control system and a vision system. The ground operation system consists of a vision station and a ground control station that provide operation commands and monitor the UAV status. The vision algorithm that detects the recovery net and provides the bearing angle to the guidance algorithm is explained, along with a discussion of the techniques employed to improve the reliability of visual detection. The system identification process and controller are described, which enable the UAV to track given waypoints and to approach the detected net under the pursuit guidance law. Experimental results demonstrate the autonomous capabilities, including take-off, waypoint following, and vision-based net recovery. The proposed technique can be an effective solution for recovering fixed-wing UAVs without resorting to a complicated structure such as an instrument landing system or expensive sensors such as differential GPS.
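The pursuit guidance step, in which the vision system's bearing angle to the net drives the lateral channel, might look like the following proportional sketch. The gain, saturation limit, and function name are illustrative assumptions, not values from the paper:

```python
import math


def pursuit_roll_command(bearing_rad, k_p=0.8,
                         roll_limit=math.radians(30.0)):
    """Bank toward the detected net: a roll command proportional to
    the net's bearing relative to the UAV heading, saturated to keep
    the aircraft within a safe bank-angle envelope."""
    cmd = k_p * bearing_rad
    return max(-roll_limit, min(roll_limit, cmd))
```

With zero bearing the command is zero (flying straight at the net), and large bearings saturate at the bank limit rather than commanding an unsafe roll.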


Journal of Guidance, Control, and Dynamics | 2012

Adaptive Image-Based Visual Servoing for an Underactuated Quadrotor System

Daewon Lee; Hyon Lim; H. Jin Kim; Youdan Kim; Kie Jeong Seong

This paper presents an adaptive image-based visual servoing (IBVS) integrated with adaptive sliding-mode control for a vision-based operation of a quadrotor unmanned aerial vehicle (UAV). For a seamless integration with underactuated quadrotor dynamics, roll and pitch channels are decoupled from the other channels using virtual features. This allows a simple and accurate algorithm for estimating depth information and successful application of the proposed guidance and control algorithm. By employing an adaptive gain in the IBVS control method, the chance of image feature loss is reduced, and performance and stability of the vision-guided UAV control system are improved. The overall setup allows image features to be placed at the desired position in the image plane of a camera mounted on the quadrotor UAV. Stability of the IBVS system with the controller is proved using Lyapunov stability analysis. Performance of the overall approach is validated by numerical simulation, vision-integrated hardware-in-the-loop simulation, and experiments. The results confirm that the target image is successfully placed at the desired position of the image plane and the quadrotor state variables are properly regulated, showing robustness in the presence of sensor noise, parametric uncertainty, and vibration from motors.
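The core IBVS law, paired with an adaptive gain that tempers commands when the feature error is large (reducing the chance of features leaving the image), can be sketched as below. The exponential gain schedule is a common choice in the visual-servoing literature, assumed here for illustration rather than taken from the paper:

```python
import numpy as np


def adaptive_gain(err_norm, lam0=1.0, lam_inf=0.2, k=2.0):
    """Gain that decays from lam0 (small error) toward lam_inf
    (large error), so large errors produce gentler camera motions."""
    return lam_inf + (lam0 - lam_inf) * np.exp(-k * err_norm)


def ibvs_velocity(L, e):
    """Classic IBVS: v = -lambda * pinv(L) @ e, where L is the
    interaction matrix (image Jacobian) and e the feature error."""
    lam = adaptive_gain(np.linalg.norm(e))
    return -lam * np.linalg.pinv(L) @ e
```

The pseudo-inverse handles the usual case where the stacked interaction matrix is non-square (2 rows per feature, 6 velocity components).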


Intelligent Robots and Systems | 2012

Onboard flight control of a micro quadrotor using single strapdown optical flow sensor

Hyon Lim; Hyeonbeom Lee; H. Jin Kim

This paper considers autonomous onboard hovering flight control of a micro quadrotor using a strapdown optical flow sensor of the kind conventionally used in desktop mice. The vehicle considered in this paper can carry only a few dozen grams of payload, so conventional camera-based optical flow methods are not applicable. We present autonomous hovering flight control of the micro quadrotor using a single-chip optical flow sensor, implemented on an 8-bit microprocessor without external positioning sensors. A detailed description of all the system components is provided, along with an evaluation of their accuracy. Experimental results from flight tests are validated against ground-truth data provided by a high-accuracy motion capture system.
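The measurement model at the heart of such a system, converting raw optical flow into a metric velocity estimate by derotating with a gyro and scaling by height above ground, reduces to one line over flat ground. This is a simplified single-axis sketch with illustrative names, not the paper's implementation:

```python
def flow_to_velocity(flow_px_s, height_m, focal_px, gyro_rad_s=0.0):
    """Translational velocity over flat ground from a downward-facing
    flow sensor: v ~= h * (flow / f - omega). The gyro term removes
    the flow induced by vehicle rotation rather than translation."""
    return height_m * (flow_px_s / focal_px - gyro_rad_s)
```

For example, 100 px/s of flow through a 500 px focal length at 1 m altitude corresponds to 0.2 m/s; if the gyro reports 0.1 rad/s of rotation about the same axis, half of that flow is rotational and the velocity estimate drops to 0.1 m/s.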


International Conference on Robotics and Automation | 2014

Real-time 6-DOF monocular visual SLAM in a large-scale environment

Hyon Lim; Jongwoo Lim; H. Jin Kim

A real-time approach for monocular visual simultaneous localization and mapping (SLAM) in a large-scale environment is proposed. From a monocular video sequence, the proposed method continuously computes the current 6-DOF camera pose and 3D landmark positions. The proposed method successfully builds consistent maps from challenging outdoor sequences using a monocular camera as the only sensor, whereas existing approaches have relied on additional structural information such as the camera height from the ground. By using a binary descriptor and metric-topological mapping, the system achieves real-time performance in a large-scale outdoor environment without utilizing GPUs or reducing the input image size. The effectiveness of the proposed method is demonstrated on various challenging video sequences, including the KITTI dataset and indoor video captured on a micro aerial vehicle.
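Binary descriptors of the kind mentioned above are matched by Hamming distance, one XOR plus a popcount, which is what makes large-scale matching cheap enough for real time on a CPU. A minimal brute-force matcher might look like this (illustrative sketch with descriptors packed as integers, not the paper's implementation):

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as
    ints: count the bits that differ."""
    return bin(d1 ^ d2).count("1")


def match(query, database, max_dist=64):
    """Index of the nearest database descriptor, or None if even the
    best match exceeds max_dist (a simple outlier gate)."""
    best_i, best_d = None, max_dist + 1
    for i, d in enumerate(database):
        dist = hamming(query, d)
        if dist < best_d:
            best_i, best_d = i, dist
    return best_i if best_d <= max_dist else None
```

Real systems replace the linear scan with an index (e.g., a vocabulary tree or hash buckets), but the per-pair cost is exactly this cheap.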


International Conference on Robotics and Automation | 2014

Aerodynamic power control for multirotor aerial vehicles

Moses Bangura; Hyon Lim; H. Jin Kim; Robert E. Mahony

In this paper, a new motor control input and controller for small-scale electrically powered multirotor aerial vehicles are proposed. The proposed scheme is based on controlling aerodynamic power as opposed to the rotor speed of each motor-rotor system. Electrical properties of the brushless direct current motor are used to both estimate and control the mechanical power of the motor system, which is coupled with aerodynamic power using momentum theory analysis. In comparison to current state-of-the-art motor control for multirotor aerial vehicles, the proposed approach is robust to unmodelled aerodynamic effects such as wind disturbances and ground effects. Theory and experimental results are presented to illustrate the performance of the proposed motor control.
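The momentum-theory coupling that links motor electrical quantities to aerodynamic power can be sketched as follows. This covers ideal induced power only (no profile drag); the first-order BLDC resistance model is a standard approximation, and all parameter values are illustrative:

```python
import math


def induced_power(thrust_n, rho=1.225, rotor_radius_m=0.1):
    """Ideal induced power from momentum theory:
    P = T^(3/2) / sqrt(2 * rho * A), with A the rotor disc area."""
    disc_area = math.pi * rotor_radius_m ** 2
    return thrust_n ** 1.5 / math.sqrt(2.0 * rho * disc_area)


def mech_power_estimate(voltage_v, current_a, winding_res_ohm):
    """Mechanical power of a BLDC motor estimated from electrical
    measurements: input power minus the resistive (copper) loss."""
    return (voltage_v - current_a * winding_res_ohm) * current_a
```

Equating the two expressions is what lets a power-based controller close the loop on electrical measurements alone, without a rotor-speed setpoint.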


International Conference on Robotics and Automation | 2015

Monocular Localization of a Moving Person Onboard a Quadrotor MAV

Hyon Lim; Sudipta N. Sinha

In this paper, we propose a novel method to recover the 3D trajectory of a moving person from a monocular camera mounted on a quadrotor micro aerial vehicle (MAV). The key contribution is an integrated approach that simultaneously performs visual odometry (VO) and persistent tracking of a person automatically detected in the scene. All computation pertaining to VO, detection, and tracking runs onboard the MAV from a front-facing monocular RGB camera. Given the gravity direction from an inertial sensor and knowledge of the individual's height, a complete 3D trajectory of the person within the reconstructed scene can be estimated. When the ground plane is detected from the triangulated 3D points, the absolute metric scale of the trajectory and the 3D map is also recovered. Our extensive indoor and outdoor experiments show that the system can localize a person moving naturally within a large area. The system runs at 17 frames per second on the onboard computer. A walking person was successfully tracked for two minutes, and an accurate trajectory was recovered over a distance of 140 meters with our system running onboard.
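Given the person's known height, the metric scale of the otherwise up-to-scale monocular reconstruction falls out of a single ratio; a minimal sketch with assumed names:

```python
def metric_scale(known_height_m, recon_height_units):
    """Scale factor taking the up-to-scale VO/SfM frame to metres,
    using the tracked person's known physical height."""
    return known_height_m / recon_height_units


def to_metric(trajectory, scale):
    """Apply the recovered scale to an up-to-scale 3D trajectory."""
    return [(scale * x, scale * y, scale * z) for x, y, z in trajectory]
```

This is the classic monocular scale-ambiguity fix: any known physical length in the scene (here, the person's height) pins down the one free scale factor.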


The International Journal of Robotics Research | 2015

Real-time monocular image-based 6-DoF localization

Hyon Lim; Sudipta N. Sinha; Michael F. Cohen; Matt Uyttendaele; H. Jin Kim

In this paper we present a new real-time image-based localization method for scenes that have been reconstructed offline using structure from motion. From input video, our method continuously computes six-degree-of-freedom camera pose estimates by efficiently tracking natural features and matching them to 3D points reconstructed by structure from motion. Our main contribution lies in efficiently interleaving a fast keypoint tracker that uses inexpensive binary feature descriptors with a new approach for direct 2D-to-3D matching. Our 2D-to-3D matching scheme avoids the need for online extraction of scale-invariant features. Instead, offline we construct an indexed database containing multiple DAISY descriptors per 3D point extracted at multiple scales. The key to the efficiency of our method is invoking DAISY descriptor extraction and matching sparingly during localization, and in distributing this computation over a temporal window of successive frames. This enables the system to run in real-time and achieve low per-frame latency over long durations. Our algorithm runs at over 30 Hz on a laptop and at 12 Hz on a low-power computer suitable for onboard computation on a mobile robot such as a micro-aerial vehicle. We have evaluated our method using ground truth and present results on several challenging indoor and outdoor sequences.


Conference on Decision and Control | 2011

Obstacle avoidance using image-based visual servoing integrated with nonlinear model predictive control

Daewon Lee; Hyon Lim; H. Jin Kim

This paper proposes a vision-based obstacle avoidance strategy in a dynamic environment for a fixed-wing unmanned aerial vehicle (UAV). In order to apply a nonlinear model predictive control (NMPC) framework to image-based visual servoing (IBVS), a dynamic model from the UAV control input to the image features is derived. From these dynamics, a visual-information-based obstacle avoidance strategy in an unknown environment is proposed. When a vision system is employed on a UAV, it is easy to lose visibility of the target in the image plane due to the vehicle's maneuvering. To address this issue, a visibility constraint is considered in the NMPC framework. The advantage of the proposed method is that constraints (e.g., visibility maintenance, actuator saturation) can be modeled and solved in a unified framework. Numerical simulations on a UAV model show satisfactory results in reference tracking and obstacle avoidance maneuvers under the constraints.
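The visibility constraint folded into the NMPC amounts to a bound on the predicted feature coordinates at every step of the horizon; a minimal check of that constraint (image size, margin, and names are illustrative assumptions):

```python
def visible(feature_uv, width=640, height=480, margin=20):
    """True if a predicted image feature stays inside the frame with
    a safety margin; in the NMPC this becomes an inequality
    constraint on every predicted feature position."""
    u, v = feature_uv
    return margin <= u <= width - margin and margin <= v <= height - margin


def horizon_visible(predicted_features):
    """The constraint enforced over the whole prediction horizon:
    a candidate control sequence is feasible only if every predicted
    feature remains visible."""
    return all(visible(f) for f in predicted_features)
```

A full NMPC would pose this as a hard constraint inside the optimizer; the sketch only shows what the constraint tests.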

Collaboration


Dive into Hyon Lim's collaborations.

Top Co-Authors

H. Jin Kim | Seoul National University
Daewon Lee | Seoul National University Hospital
Pyojin Kim | Seoul National University
Hyeonbeom Lee | Seoul National University
Youdan Kim | Seoul National University
Moses Bangura | Australian National University
Robert E. Mahony | Australian National University