
Publication


Featured research published by Sungwook Cho.


Journal of Intelligent and Robotic Systems | 2013

Vision-Based Detection and Tracking of Airborne Obstacles in a Cluttered Environment

Sungwook Cho; Sungsik Huh; David Hyunchul Shim; Hyoung Sik Choi

This paper proposes an image processing algorithm for short-range, low-altitude 'sense-and-avoid' of aerial vehicles and presents flight experiment results. Because it suppresses the negative effects of a cluttered image background, such as the ground seen during low-altitude flight and the sensitivity of the threshold value, the proposed algorithm achieves better collision-avoidance performance. Furthermore, it outperforms simple color-based detection and tracking methods because it takes the characteristics of vehicle dynamics in the image plane into account. The performance of the proposed algorithm is validated both by post-processing video clips taken from flight tests and by an actual flight test with a simple avoidance maneuver.
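The clutter-suppression idea can be illustrated with a toy ego-motion-compensation sketch (not the paper's actual pipeline; the frame size, shift, and threshold below are invented): warping the previous frame by the camera's inter-frame motion before differencing cancels the static ground texture, so only independently moving objects survive the threshold.

```python
import numpy as np

def compensated_difference(prev_frame, cur_frame, shift, thresh=30):
    """Ego-motion-compensated frame differencing.

    Camera ego-motion between frames is modeled here as a pure pixel
    translation `shift` (a special case of the image homography); the
    previous frame is warped onto the current one before differencing,
    so static background such as the ground cancels out.
    """
    dy, dx = shift
    warped = np.roll(prev_frame, (dy, dx), axis=(0, 1))
    diff = np.abs(cur_frame.astype(int) - warped.astype(int))
    return diff > thresh

# Synthetic example: textured "ground" translating by (0, 2) px between
# frames, plus a small bright object that moves independently.
rng = np.random.default_rng(0)
ground = (rng.random((60, 80)) * 100).astype(np.uint8)
prev_frame = ground.copy()
cur_frame = np.roll(ground, (0, 2), axis=(0, 1))
cur_frame[20:24, 40:44] = 255          # independently moving object

naive = np.abs(cur_frame.astype(int) - prev_frame.astype(int)) > 30
compensated = compensated_difference(prev_frame, cur_frame, (0, 2))
```

The naive difference fires across the whole textured ground, while the compensated mask isolates the object — the same reason a fixed threshold becomes far less sensitive once clutter is removed.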


AIAA Guidance, Navigation, and Control (GNC) Conference | 2013

A Comprehensive Flight Control Design and Experiment of a Tail-Sitter UAV

Yeondeuk Jung; Sungwook Cho; David Hyunchul Shim

There has been ongoing interest in aircraft capable of vertical takeoff and landing (VTOL) for greater operability, combined with high-speed horizontal flight for maximal mission range. A possible solution is the tail-sitter, which takes off vertically and transitions into horizontal flight. Over its entire mission from take-off to landing, a tail-sitter passes through widely varying dynamic characteristics. In this paper, we propose a set of controllers for the horizontal, vertical, and transition flight regimes. For the transition in particular, we use L1 adaptive control in conjunction with conventional multi-loop feedback to supplement the linear controllers. The proposed controllers were first validated with simulation models and then in actual flight tests, successfully demonstrating their capability to control the vehicle over the entire operating range.
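The L1 augmentation idea can be sketched on a toy scalar plant (this is an illustration of the structure, not the paper's tail-sitter controller; the plant model, gains, and disturbance values are all invented): a baseline linear loop is supplemented by a state predictor, a fast adaptation law, and a low-pass filter on the adaptive command, which together cancel an unknown disturbance.

```python
import numpy as np

# Toy plant: xdot = a*x + u + sigma, with sigma an unknown constant
# disturbance standing in for the model uncertainty seen in transition.
a, sigma = -2.0, 0.8
x_ref, dt = 1.0, 0.001
k_p, gamma, omega_c = 4.0, 100.0, 20.0   # baseline gain, adaptation rate, filter bandwidth

x = x_hat = sigma_hat = u_ad = 0.0
for _ in range(int(5.0 / dt)):                  # 5 s of simulated flight
    u_lin = k_p * (x_ref - x) - a * x_ref       # baseline loop + feedforward
    u = u_lin + u_ad                            # L1-augmented command
    x += dt * (a * x + u + sigma)               # true plant (Euler step)
    x_hat += dt * (a * x_hat + u + sigma_hat)   # state predictor
    x_tilde = x_hat - x                         # prediction error
    sigma_hat += dt * (-gamma * x_tilde)        # fast adaptation law
    u_ad += dt * omega_c * (-sigma_hat - u_ad)  # low-pass filter C(s)
```

The low-pass filter is what distinguishes L1 from plain model-reference adaptive control: adaptation can be fast while only the low-frequency part of the adaptive signal reaches the actuators, so the baseline loop's margins are preserved.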


IEEE Conference on Decision and Control | 2011

An image processing algorithm for detection and tracking of aerial vehicles

Sungwook Cho; Sungsik Huh; Hyong Sik Choi; David Hyunchul Shim

This paper proposes an image processing algorithm for the detection and tracking of aerial vehicles in sight. The proposed algorithm detects moving objects using the image homography calculated from a video stream taken by the onboard camera, and determines whether the detected objects are approaching aerial vehicles using the Probabilistic Multi-Hypothesis Tracking (PMHT) method. The algorithm performs well especially when approaching aircraft must be detected against a cluttered background. Furthermore, it is suitable for real flight applications because it is less sensitive to lighting conditions or color variations. The performance of the proposed algorithm is validated by applying it to onboard video clips taken during actual flights using two unmanned aerial vehicles.
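The homography step can be sketched with a minimal Direct Linear Transform (DLT) estimator — a textbook stand-in for whatever estimator the paper actually uses, with made-up point correspondences: background features matched between consecutive frames determine the camera-induced homography, and pixels that disagree with it are motion candidates.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: estimate the 3x3 homography H mapping src to dst points
    (both Nx2, N >= 4) by solving A h = 0 with the SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)          # right singular vector of smallest sigma
    return H / H[2, 2]                # fix the projective scale

def apply_homography(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Camera ego-motion between frames (invented, for illustration): zoom + shift.
H_true = np.array([[1.2, 0.0, 5.0], [0.0, 1.2, -3.0], [0.0, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [0, 80], [100, 80]], float)
dst = np.array([apply_homography(H_true, p) for p in src])
H = estimate_homography(src, dst)
```

A feature that does not follow `H` between frames (a large reprojection residual) is exactly the kind of independently moving object the tracker is then asked to confirm.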


Journal of The Korean Society for Aeronautical & Space Sciences | 2011

An Image Processing Algorithm for Detection and Tracking of Aerial Vehicles in Short-Range

Sungwook Cho; Sungsik Huh; Hyunchul Shim; Hyoung-Sik Choi

This paper proposes an image processing algorithm for detection and tracking of aerial vehicles at short range. The proposed algorithm detects moving objects using the image homography calculated from consecutive video frames and determines whether the detected objects are approaching aerial vehicles with the Probabilistic Multi-Hypothesis Tracking (PMHT) method. It performs better than simple color-based detection methods because it can detect moving objects against a complex background, such as the ground seen during low-altitude flight, and because it considers the characteristics of vehicle dynamics. Furthermore, it is effective in flight tests owing to its reduced thresholding sensitivity to external factors. The performance of the proposed algorithm is verified by applying it to onboard video obtained from a flight test.
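The PMHT step can be sketched as a soft-assignment weight computation (one simplified E-step; the track positions, measurements, and noise variance below are invented): each measurement is weighted against every track hypothesis by a Gaussian likelihood instead of being hard-assigned to its nearest neighbour, which is what keeps the method robust in clutter.

```python
import numpy as np

def pmht_weights(tracks, meas, var=4.0, priors=None):
    """Simplified PMHT E-step: posterior weight of track t for
    measurement s, proportional to prior * Gaussian likelihood."""
    tracks, meas = np.asarray(tracks, float), np.asarray(meas, float)
    if priors is None:
        priors = np.full(len(tracks), 1.0 / len(tracks))
    d2 = ((meas[:, None, :] - tracks[None, :, :]) ** 2).sum(-1)
    lik = priors * np.exp(-0.5 * d2 / var)       # (num_meas, num_tracks)
    return lik / lik.sum(axis=1, keepdims=True)  # normalize per measurement

tracks = [[10.0, 10.0], [40.0, 10.0]]            # predicted track positions (px)
meas = [[11.0, 9.0], [39.0, 11.0], [25.0, 10.0]] # last one is clutter-like
W = pmht_weights(tracks, meas)
```

The first two measurements attach almost entirely to their nearby tracks, while the ambiguous clutter-like point splits its weight evenly — in a full PMHT these weights would feed a weighted track-state update rather than a hard decision.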


International Conference on Unmanned Aircraft Systems | 2015

A trajectory-tracking controller design using L1 adaptive control for multi-rotor UAVs

Yeundeuk Jung; Sungwook Cho; David Hyunchul Shim

This paper presents a trajectory-tracking controller for multi-rotor UAVs to improve their flight performance in the presence of various uncertainties. The proposed tracking system consists of a velocity guidance law based on the relative distance and an L1 adaptive augmentation loop for tracking the velocity commands. In the proposed structure, the desired velocity generated by the guidance law is the reference value for the adaptive controller, enabling accurate path tracking. In the guidance law, the desired acceleration is generated from the relative distance and its derivatives, and the velocity command of the inner control loop is then calculated. The L1 augmentation loop compensates the linear controller to guarantee flight performance, such as tracking accuracy, in the presence of uncertainties including aerodynamic disturbances, modeling error, outdoor environmental factors, and changes in flight dynamics. The proposed controller was validated in actual flight tests on a quad-rotor UAV, successfully demonstrating its capabilities.
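The guidance-law structure — desired acceleration from the relative distance and its derivative, integrated into a velocity command for the inner loop — can be sketched as a PD-type outer loop (gains, reference trajectory, and the assumption of an ideal inner loop are all illustrative, not the paper's design):

```python
import numpy as np

def velocity_command(p, v, p_ref, v_ref, k_p=2.0, k_d=2.0, dt=0.02):
    """Outer guidance loop: desired acceleration from relative distance
    and its derivative, integrated into the inner-loop velocity command."""
    a_des = k_p * (p_ref - p) + k_d * (v_ref - v)
    return v + a_des * dt

# Hypothetical scenario: track a reference point moving along x at 1 m/s.
p, v = np.zeros(2), np.zeros(2)
for k in range(500):                              # 10 s at 50 Hz
    t = k * 0.02
    p_ref, v_ref = np.array([t, 0.0]), np.array([1.0, 0.0])
    v = velocity_command(p, v, p_ref, v_ref)      # guidance loop
    p = p + v * 0.02                              # ideal inner loop assumed
```

In the paper's architecture the line marked "ideal inner loop" is where the L1 adaptive augmentation sits: it makes the real vehicle track the velocity command closely despite uncertainties, so the simple outer loop remains valid.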


Journal of Field Robotics | 2018

A direct visual servoing-based framework for the 2016 IROS Autonomous Drone Racing Challenge

Sunggoo Jung; Sungwook Cho; Dasol Lee; Hanseob Lee; David Hyunchul Shim

This paper presents a framework for navigating obstacle-dense environments as posed in the 2016 International Conference on Intelligent Robots and Systems (IROS) Autonomous Drone Racing Challenge. Our framework is based on direct visual servoing and leg-by-leg planning to navigate a complex environment filled with many similar frame-shaped obstacles to fly through. Our indoor navigation method relies on velocity measurements from an optical flow sensor, since position measurements from GPS or external cameras are not available. For precise navigation through a sequence of obstacles, a center point–matching method is used with depth information from the onboard stereo camera. The guidance points are generated directly in three-dimensional space from the two-dimensional image data, avoiding the accumulated error of sensor drift. The proposed framework is implemented on a quadrotor-based aerial vehicle, which carries an onboard vision-processing computer for self-contained operation. Using the proposed method, our drone finished in first place in the world-premiere IROS Autonomous Drone Racing Challenge.
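Lifting a matched 2-D center point plus stereo depth straight to a 3-D guidance point is standard pinhole back-projection; a minimal sketch follows (the intrinsics `fx, fy, cx, cy`, the pixel match, and the depth are made-up values, not the vehicle's calibration):

```python
import numpy as np

# Assumed pinhole intrinsics for illustration only.
fx, fy, cx, cy = 400.0, 400.0, 320.0, 240.0

def backproject(u, v, depth):
    """Map a pixel (u, v) with stereo depth (metres along the optical
    axis) to a 3-D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

gate_center_px = (420.0, 200.0)        # matched centre of the next gate
guidance_point = backproject(*gate_center_px, depth=3.2)
```

Because each guidance point is measured fresh in the camera frame, no drifting odometry has to be integrated between gates — the accumulation-of-error problem the abstract mentions.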


IEEE Transactions on Aerospace and Electronic Systems | 2015

Vision-based sense-and-avoid framework for unmanned aerial vehicles

Sungsik Huh; Sungwook Cho; Yeondeuk Jung; David Hyunchul Shim

This paper describes a vision-based sense-and-avoid framework to detect approaching aircraft, especially those observed against a cluttered background. The proposed framework consists of a vision system with a camera that processes the incoming images using a series of real-time algorithms to isolate moving aerial objects on the image plane and classify them using a particle filter. Once an approaching aerial object has been detected on a potential collision course, the aircraft performs an evasive maneuver. The performance of the proposed sense-and-avoid algorithm is validated in a series of test flights using two unmanned aerial vehicles.
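The particle-filter stage can be sketched with a 1-D bootstrap filter (a simplified stand-in for the paper's image-plane classifier; the image width, motion model, and noise levels are invented): particles hypothesize an object's pixel position, and detections that move consistently concentrate the weights, which is the cue separating an approaching aircraft from clutter.

```python
import numpy as np

rng = np.random.default_rng(1)

n, x_true = 500, 100.0
particles = rng.uniform(0.0, 640.0, n)           # spread over image width
for _ in range(30):                               # 30 video frames
    x_true += 2.0                                 # object drifts 2 px/frame
    z = x_true + rng.normal(0.0, 3.0)             # noisy detection
    particles += 2.0 + rng.normal(0.0, 2.0, n)    # propagate motion model
    w = np.exp(-0.5 * ((z - particles) / 3.0) ** 2)
    w /= w.sum()                                  # normalized likelihood weights
    particles = particles[rng.choice(n, n, p=w)]  # resample

estimate = particles.mean()
```

Clutter produces detections with no consistent motion, so its particle clouds never tighten; a persistent, concentrating cloud like this one is classified as a genuine moving object.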


AIAA Science and Technology Forum and Exposition (SciTech) 2017 | 2017

Gaussian Process-based Visual Servoing Framework for an Aerial Parallel Manipulator

Sungwook Cho; David Hyunchul Shim; Jinwhan Kim

This paper proposes a Gaussian process-based visual servoing framework that not only overcomes the weaknesses of the standard image-based visual servoing (IBVS) scheme but also improves vision-enabled, real-time control performance. In particular, the proposed framework provides a Gaussian process-based sampled image path, consisting of a set of references between the initial and desired positions of a stationary or moving target in the image plane with respect to height, under large translation and rotation errors, in order to overcome the weaknesses of the standard IBVS scheme. Furthermore, we applied the proposed framework to an aerial parallel manipulator during a picking-and-replacing mission. The developed vehicle has two vision systems: a gimbal-stabilized pinhole camera on the host vehicle, and a fisheye camera with a one-dimensional light detection and ranging (LiDAR) sensor fixed on the end-effector of the parallel manipulator. The proposed framework can generate feasible control inputs according to each sensor system's features. In this paper, preliminary results and analysis are presented. Results of simulations and flight tests conducted to verify the performance of the proposed framework indicate that it can return path points that converge to the desired position whether the target is moving or stationary, even with a large scale difference.
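The sampled-image-path idea can be sketched with plain GP regression (a generic RBF-kernel posterior mean, not the paper's model; the anchor features and hyperparameters are invented): a few anchor features between the initial and desired target positions are smoothed into a dense sequence of image-plane references.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D input sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_path(s_train, y_train, s_query, noise=1e-6):
    """GP posterior mean: a smooth image-plane path through the anchors."""
    K = rbf(s_train, s_train) + noise * np.eye(len(s_train))
    return rbf(s_query, s_train) @ np.linalg.solve(K, y_train)

# Anchor features (pixels) along a path parameter s in [0, 1];
# values are made up for illustration.
s_train = np.array([0.0, 0.5, 1.0])
u_train = np.array([50.0, 180.0, 320.0])    # horizontal pixel coordinate
s_query = np.linspace(0.0, 1.0, 11)
u_path = gp_path(s_train, u_train, s_query)
```

Servoing on the nearest point of such a path, rather than directly on a distant desired feature, keeps each error (and hence each control input) small even under large initial translation and rotation offsets.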


Medical Physics | 2016

SU-F-J-114: On-Treatment Image Reconstruction Using Transit Images of Treatment Beams Through Patient and Those Through Planning CT Images

Hyunsuk Lee; K Cheong; J Jung; Sungwook Cho; Sun-tae Jung; J. Kim; I Yeo

PURPOSE To reconstruct patient images at the time of radiation delivery using measured transit images of treatment beams through the patient and calculated transit images through planning CT images. METHODS We hypothesize that the ratio of the measured transit images to the calculated images may provide the amount of change in the patient image between the times of planning CT and treatment. To test this, we devised lung phantoms with a tumor object (3-cm diameter) placed at the iso-center (simulating planning CT) and off-center by 1 cm (simulating treatment). CT images of the two phantoms were acquired; the image of the off-centered phantom, unavailable clinically, represents the reference on-treatment image at the image quality of the planning CT. Cine-transit images through the two phantoms were also acquired with an EPID from a non-modulated 6 MV beam as the gantry was rotated 360 degrees; the image through the centered phantom simulates the calculated image. While the current study is a feasibility study, in reality our computational EPID model can be applied to provide accurate transit images from MC simulation. Changed MV HU values were reconstructed from the ratio between the two EPID projection data sets, converted to kV HU values, and added to the planning CT, thereby reconstructing the on-treatment image of the patient limited to the irradiated region of the phantom. RESULTS The reconstructed image was compared with the reference image. Except for local HU differences of > 200 as a maximum, excellent agreement was found. The average difference across the entire image was 16.2 HU. CONCLUSION We have demonstrated the feasibility of a method for reconstructing on-treatment images of a patient using EPID images and planning CT images. Further studies will include resolving the local HU differences and investigating the dosimetric impact of the reconstructed image.
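The core ratio idea can be illustrated with a 1-D Beer–Lambert toy model (monoenergetic, no scatter — far simpler than the real EPID/MC setup, and all attenuation coefficients and path lengths below are invented): the log-ratio of measured to calculated transit intensity recovers the change in the attenuation line integral between planning and treatment, which is the quantity that gets mapped back into HU changes.

```python
import numpy as np

mu_water, mu_tumor = 0.005, 0.009        # 1/mm, illustrative values
I0 = 1.0                                  # incident intensity

def transit(path_water_mm, path_tumor_mm):
    """Transit intensity along one ray, Beer-Lambert attenuation."""
    return I0 * np.exp(-(mu_water * path_water_mm + mu_tumor * path_tumor_mm))

# Planning: a 30 mm tumour segment on this ray through 200 mm of water.
# Treatment: the tumour has shifted, so the ray now crosses only 10 mm of it.
I_calc = transit(170.0, 30.0)   # calculated, through the planning CT
I_meas = transit(190.0, 10.0)   # measured, through the shifted patient

# Change in the attenuation line integral between the two time points.
delta_path = -np.log(I_meas / I_calc)
```

In the abstract's method this per-ray change, reconstructed over all gantry angles, is what is converted from MV to kV HU and added back onto the planning CT within the irradiated region.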


AIAA Guidance, Navigation, and Control Conference | 2015

Image-based Visual Servoing Framework for a Multirotor UAV using Sampling-based Path Planning

Sungwook Cho; Dasol Lee; David Hyunchul Shim

This paper proposes a novel image-based visual servoing (IBVS) optimal path-planning framework and reference feature tracking scheme that not only overcome the deficiencies of traditional image-based visual servoing approaches but also improve control performance and guarantee optimality. In particular, the proposed framework provides optimal path points between the initial and desired positions of a target, called the references, by adapting the rapidly-exploring random tree (RRT*) algorithm in order to overcome the drawbacks inherent in conventional methods. In contrast to existing techniques that generate control input using the exponentially decreasing error task function depending on the initial and desired features in the image plane only, our proposed framework generates control input using several reference features located between the initial and desired features generated by the optimal path-planning results. Consequently, it can produce relatively small and bounded control inputs that facilitate better performance in large pose difference environments. One of the major advantages of our proposed framework over existing methods is that it can generate feasible maneuvers from the results of the optimal feature path planning. It can also prevent singularities and local minima because it can maintain a small value for errors using a set of reference features. The results of simulations conducted to verify the performance of the proposed framework indicate that it can return the path points for the convergence of the initial position with the desired position, even with pixel position error at the target and relative position misalignment.
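The benefit of servoing through intermediate reference features can be sketched with a translation-only IBVS toy problem (rotation is omitted, the scene and gains are invented, and a straight line in feature space stands in for the RRT*-planned path): the camera velocity is the classic `v = -lambda * pinv(L) @ e`, applied to each reference in turn rather than to the distant goal directly.

```python
import numpy as np

def interaction_matrix(features, Z):
    """Translation-only interaction matrix for normalized point features."""
    rows = []
    for (x, y), z in zip(features, Z):
        rows.append([-1.0 / z, 0.0, x / z])
        rows.append([0.0, -1.0 / z, y / z])
    return np.array(rows)

def project(points_w, t):
    """Pinhole projection for a translating (non-rotating) camera at t."""
    pc = points_w - t
    return pc[:, :2] / pc[:, 2:3], pc[:, 2]

points_w = np.array([[-0.2, 0.0, 2.0], [0.3, 0.1, 2.0]])  # scene points
t = np.array([0.4, -0.2, -0.5])                 # initial camera position
s_goal, _ = project(points_w, np.zeros(3))      # desired camera at origin
s0, _ = project(points_w, t)

# Sequence of intermediate reference features between s0 and s_goal
# (a straight-line stand-in for the planned feature path).
refs = [s0 + a * (s_goal - s0) for a in np.linspace(0.1, 1.0, 10)]

lam, dt = 2.0, 0.05
for s_ref in refs:
    for _ in range(40):                         # servo onto this reference
        s, Z = project(points_w, t)
        e = (s - s_ref).ravel()                 # small, bounded feature error
        v = -lam * np.linalg.pinv(interaction_matrix(s, Z)) @ e
        t = t + v * dt                          # camera integrates velocity
```

Because each reference keeps the task-function error small, the commanded velocities stay bounded throughout; servoing straight onto `s_goal` from a large initial offset is exactly the situation where plain IBVS produces large inputs and can fall into singularities or local minima.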

Collaboration


Dive into Sungwook Cho's collaborations.

Top Co-Authors


Hyoung Sik Choi

Korea Aerospace Research Institute


Heemin Shin

Korea Aerospace Research Institute
