
Publication


Featured research published by Ayoung Kim.


The International Journal of Robotics Research | 2012

Advanced perception, navigation and planning for autonomous in-water ship hull inspection

Franz S. Hover; Ryan M. Eustice; Ayoung Kim; Brendan J. Englot; Hordur Johannsson; Michael Kaess; John J. Leonard

Inspection of ship hulls and marine structures using autonomous underwater vehicles has emerged as a unique and challenging application of robotics. The problem poses rich questions in physical design and operation, perception and navigation, and planning, driven by difficulties arising from the acoustic environment, poor water quality and the highly complex structures to be inspected. In this paper, we develop and apply algorithms for the central navigation and planning problems on ship hulls. These divide into two classes, suitable for the open, forward parts of a typical monohull, and for the complex areas around the shafting, propellers and rudders. On the open hull, we have integrated acoustic and visual mapping processes to achieve closed-loop control relative to features such as weld-lines and biofouling. In the complex area, we implemented new large-scale planning routines so as to achieve full imaging coverage of all the structures, at a high resolution. We demonstrate our approaches in recent operations on naval ships.


IEEE Transactions on Robotics | 2013

Real-Time Visual SLAM for Autonomous Underwater Hull Inspection Using Visual Saliency

Ayoung Kim; Ryan M. Eustice

This paper reports a real-time monocular visual simultaneous localization and mapping (SLAM) algorithm and results for its application in the area of autonomous underwater ship hull inspection. The proposed algorithm overcomes some of the specific challenges associated with underwater visual SLAM, namely, limited field-of-view imagery and feature-poor regions. It does so by exploiting our SLAM navigation prior within the image registration pipeline and by being selective about which imagery is considered informative in terms of our visual SLAM map. Novel online bag-of-words measures for intra- and inter-image saliency are introduced and shown to be useful for image key-frame selection, information-gain-based link hypothesis, and novelty detection. Results from three real-world hull inspection experiments evaluate the overall approach, including one survey comprising a 3.4-h/2.7-km-long trajectory.
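The intra-image saliency idea can be sketched as the normalized entropy of a bag-of-words histogram: imagery dominated by a few repeated visual words (e.g., uniform hull plating) scores low, while texture-rich imagery scores high. This is a simplified illustration of the concept, not the paper's exact formulation:

```python
import numpy as np

def local_saliency(word_histogram):
    """Intra-image saliency as the normalized entropy of a bag-of-words
    histogram (simplified sketch; the function name is illustrative)."""
    h = np.asarray(word_histogram, dtype=float)
    p = h / h.sum()
    p = p[p > 0]                        # drop empty bins
    entropy = -np.sum(p * np.log2(p))
    return entropy / np.log2(len(h))    # normalize to [0, 1]
```

A peaked histogram such as [90, 2, 2, 2, 2, 2] (repetitive texture) scores far lower than a near-uniform one, which is what makes a measure like this useful for key-frame selection and novelty detection.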


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2009

Pose-graph visual SLAM with geometric model selection for autonomous underwater ship hull inspection

Ayoung Kim; Ryan M. Eustice

This paper reports the application of vision-based simultaneous localization and mapping (SLAM) to the problem of autonomous ship hull inspection by an underwater vehicle. The goal of this work is to automatically map and navigate the underwater surface area of a ship hull for foreign object detection and maintenance inspection tasks. For this purpose we employ a pose-graph SLAM algorithm using an extended information filter for inference. For perception, we use a calibrated monocular camera system mounted on a tilt actuator so that the camera approximately maintains a nadir view to the hull. A combination of SIFT and Harris feature detectors is used within a pairwise image registration framework to provide camera-derived relative-pose constraints (modulo scale). Because the ship hull surface can vary from being locally planar to highly three-dimensional (e.g., screws, rudder), we employ a geometric model selection framework to appropriately choose either an essential matrix or homography registration model during image registration. This allows the image registration engine to exploit geometry information at the early stages of estimation, which results in better navigation and structure reconstruction via more accurate and robust camera constraints. Preliminary results are reported for mapping a 1,300-image data set covering a 30 m by 5 m section of the hull of a USS aircraft carrier. The post-processed result validates the algorithm's potential to provide in-situ navigation in the underwater environment for trajectory control, while generating a texture-mapped 3D model of the ship hull as a byproduct for inspection.
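The homography-versus-essential-matrix decision can be illustrated with a GRIC-style score that sums robustly capped residuals plus penalties for model dimension and parameter count, then picks the model with the lower score. This is a simplified sketch with illustrative constants, not the exact criterion used in the paper:

```python
import numpy as np

def gric_score(residuals, sigma, model_dim, n_params, r=4, lam1=2.0, lam2=4.0):
    """Simplified GRIC-style score (lower is better).
    r: dimension of the data (two 2D points); lam1, lam2: penalty weights."""
    e2 = (np.asarray(residuals, dtype=float) / sigma) ** 2
    rho = np.minimum(e2, lam1 * (r - model_dim))   # robustly capped residual
    n = len(rho)
    return rho.sum() + lam1 * model_dim * n + lam2 * n_params

def select_model(h_residuals, e_residuals, sigma=1.0):
    """Choose homography (locally planar scene) or essential matrix (3D scene)."""
    g_h = gric_score(h_residuals, sigma, model_dim=2, n_params=8)
    g_e = gric_score(e_residuals, sigma, model_dim=3, n_params=5)
    return "homography" if g_h <= g_e else "essential"
```

On a locally planar hull patch both models fit and the lower-dimensional homography wins on parsimony; near 3D structure such as screws or the rudder, the homography residuals grow and the essential matrix is selected instead.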


The International Journal of Robotics Research | 2015

Active visual SLAM for robotic area coverage: Theory and experiment

Ayoung Kim; Ryan M. Eustice

This paper reports on an integrated navigation algorithm for the visual simultaneous localization and mapping (SLAM) robotic area coverage problem. In the robotic area coverage problem, the goal is to explore and map a given target area within a reasonable amount of time. This goal necessitates the use of minimally redundant overlap trajectories for coverage efficiency; however, visual SLAM’s navigation estimate will inevitably drift over time in the absence of loop closures. Therefore, efficient area coverage and good SLAM navigation performance represent competing objectives. To solve this decision-making problem, we introduce perception-driven navigation, an integrated navigation algorithm that automatically balances between exploration and revisitation using a reward framework. This framework accounts for SLAM localization uncertainty, area coverage performance, and the identification of good candidate regions in the environment for visual perception. Results are shown for both a hybrid simulation and real-world demonstration of a visual SLAM system for autonomous underwater ship hull inspection.
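The exploration-versus-revisitation trade-off can be sketched as a scalar reward over candidate actions; the field names and weights below are illustrative stand-ins for the paper's reward framework, not its actual notation:

```python
def pdn_reward(action, alpha=1.0, beta=1.0):
    """Toy perception-driven navigation reward: coverage gained minus a
    weighted penalty for expected localization uncertainty."""
    return alpha * action["coverage_gain"] - beta * action["uncertainty"]

def choose_action(candidates, **weights):
    """Pick the candidate action with the highest reward."""
    return max(candidates, key=lambda a: pdn_reward(a, **weights))

candidates = [
    {"name": "explore", "coverage_gain": 0.9, "uncertainty": 0.6},
    {"name": "revisit", "coverage_gain": 0.1, "uncertainty": 0.1},
]
```

With a mild uncertainty penalty the planner keeps exploring; once drift makes uncertainty expensive (a larger beta), the revisit action wins, closing a loop before coverage resumes.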


IEEE International Conference on Robotics and Automation (ICRA) | 2013

Perception-driven navigation: Active visual SLAM for robotic area coverage

Ayoung Kim; Ryan M. Eustice

This paper reports on an integrated navigation algorithm for the visual simultaneous localization and mapping (SLAM) robotic area coverage problem. In the robotic area coverage problem, the goal is to explore and map a given target area in a reasonable amount of time. This goal necessitates the use of minimally redundant overlap trajectories for coverage efficiency; however, visual SLAM's navigation estimate will inevitably drift over time in the absence of loop-closures. Therefore, efficient area coverage and good SLAM navigation performance represent competing objectives. To solve this decision-making problem, we introduce perception-driven navigation (PDN), an integrated navigation algorithm that automatically balances between exploration and revisitation using a reward framework. This framework accounts for vehicle localization uncertainty, area coverage performance, and the identification of good candidate regions in the environment for loop-closure. Results are shown for a hybrid simulation using synthetic and real imagery from an autonomous underwater ship hull inspection application.


Marine Technology Society Journal | 2009

An Overview of Autonomous Underwater Vehicle Research and Testbed at PeRL

Hunter C. Brown; Ayoung Kim; Ryan M. Eustice

This article provides a general overview of the autonomous underwater vehicle (AUV) research thrusts being pursued within the Perceptual Robotics Laboratory (PeRL) at the University of Michigan. Founded in 2007, PeRL’s research centers on improving AUV autonomy via algorithmic advancements in environmentally based perceptual feedback for real-time mapping, navigation, and control. Our three major research areas are (1) real-time visual simultaneous localization and mapping (SLAM), (2) cooperative multi-vehicle navigation, and (3) perception-driven control. Pursuant to these research objectives, PeRL has developed a new multi-AUV SLAM testbed based upon a modified Ocean-Server Iver2 AUV platform. PeRL upgraded the vehicles with additional navigation and perceptual sensors for underwater SLAM research. In this article, we detail our testbed development, provide an overview of our major research thrusts, and put into context how our modified AUV testbed enables experimental real-world validation of these algorithms.


Journal of Field Robotics | 2016

Long-term Mapping Techniques for Ship Hull Inspection and Surveillance using an Autonomous Underwater Vehicle

Paul Ozog; Nicholas Carlevaris-Bianco; Ayoung Kim; Ryan M. Eustice

This paper reports on a system for an autonomous underwater vehicle to perform in situ, multiple-session hull inspection using long-term simultaneous localization and mapping (SLAM). Our method assumes very little a priori knowledge, and it does not require the aid of acoustic beacons for navigation, which is a typical mode of navigation in this type of application. Our system combines recent techniques in underwater saliency-informed visual SLAM and a method for representing the ship hull surface as a collection of many locally planar surface features. This methodology produces accurate maps that can be constructed in real-time on consumer-grade computing hardware. A single-session SLAM result is initially used as a prior map for later sessions, where the robot automatically merges the multiple surveys into a common hull-relative reference frame. To perform the relocalization step, we use a particle filter that leverages the locally planar representation of the ship hull surface, and a fast visual descriptor matching algorithm. Finally, we apply the recently developed graph sparsification tool, generic linear constraints, as a way to manage the computational complexity of the SLAM system as the robot accumulates information across multiple sessions. We show results for 20 SLAM sessions for two large vessels over the course of days, months, and even up to three years, with a total path length of approximately 10.2 km.
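The relocalization step can be sketched as one importance-weighting and resampling pass of a particle filter: each particle is a candidate hull-relative pose, weighted by how well a descriptor predicted at that pose matches the live observation. `predict_descriptor` below is a hypothetical stand-in for the paper's planar-patch and visual-descriptor models:

```python
import numpy as np

def relocalize(particles, predict_descriptor, observed, sigma=0.5, seed=0):
    """One weight-and-resample step of a particle filter (sketch)."""
    rng = np.random.default_rng(seed)
    dist = np.array([np.linalg.norm(predict_descriptor(p) - observed)
                     for p in particles])
    w = np.exp(-0.5 * (dist / sigma) ** 2)     # Gaussian likelihood
    w /= w.sum()                               # normalize weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]                      # resampled particle set
```

After a few such updates the particle cloud collapses onto poses whose predicted appearance matches the imagery, seeding the merge of the new survey into the prior map's reference frame.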


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2014

Opportunistic sampling-based planning for active visual SLAM

Stephen M. Chaves; Ayoung Kim; Ryan M. Eustice

This paper reports on an active visual SLAM path planning algorithm that plans loop-closure paths in order to decrease visual navigation uncertainty. Loop-closing revisit actions bound the robot's uncertainty but also contribute to redundant area coverage and increased path length. We propose an opportunistic path planner that leverages sampling-based techniques and information filtering for planning revisit paths that are coverage efficient. Our algorithm employs Gaussian Process regression for modeling the prediction of camera registrations and uses a two-step optimization for selecting revisit actions. We show that the proposed method outperforms existing solutions for bounding navigation uncertainty with a hybrid simulation experiment using a real-world dataset collected by a ship hull inspection robot.
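The Gaussian Process ingredient can be sketched as standard GP regression with a squared-exponential kernel, here predicting a camera-registration success score along a 1D survey coordinate; the paper's actual inputs and outputs are richer, so treat this as a generic illustration:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential (RBF) kernel between 1D input sets."""
    d = np.asarray(a, dtype=float)[:, None] - np.asarray(b, dtype=float)[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """GP posterior mean at x_test given noisy training observations."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, np.asarray(y_train))
```

Training on past registration outcomes (1 = success, 0 = failure) lets a planner of this kind score candidate revisit paths by how likely camera registrations are to succeed along them.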


OCEANS Conference | 2008

Development of a multi-AUV SLAM testbed at the University of Michigan

Hunter C. Brown; Ayoung Kim; Ryan M. Eustice

This paper reports the modifications involved in preparing two commercial Ocean-Server AUV systems for simultaneous localization and mapping (SLAM) research at the University of Michigan (UMich). The UMich Perceptual Robotics Laboratory (PeRL) upgraded the vehicles with additional navigation and perceptual sensors including 12-bit stereo down-looking Prosilica cameras, a Teledyne 600 kHz RDI Explorer DVL for 3-axis bottom-lock velocity measurements, a KVH single-axis fiber-optic gyroscope for yaw rate, and a WHOI Micromodem for communication, along with other supporting sensor packages. To accommodate the additional sensor payload, a new Delrin nose cone was designed and fabricated. Additional 32-bit embedded CPU hardware was added for data-logging, real-time control, and in-situ real-time SLAM algorithm testing and validation. Details of the design modifications, and the related research enabled by this integration effort, are discussed herein.


Sensors | 2016

Accurate Mobile Urban Mapping via Digital Map-Based SLAM

Hyun Chul Roh; Jinyong Jeong; Younggun Cho; Ayoung Kim

This paper presents accurate urban map generation using digital map-based Simultaneous Localization and Mapping (SLAM). Our main objective throughout this work is to generate a 3D map and a lane map with sub-meter accuracy. In conventional mapping approaches, extremely high accuracy was achieved either by (i) exploiting costly airborne sensors or (ii) surveying with a static mapping system on a stationary platform. Mobile scanning systems have recently gained popularity but are mostly limited by the availability of the Global Positioning System (GPS). We focus on the fact that the availability of GPS and urban structures are both sporadic but complementary. By modeling both GPS and digital map data as measurements and integrating them with other sensor measurements, we leverage SLAM for an accurate mobile mapping system. Our proposed algorithm builds an efficient graph SLAM framework that runs in real time and targets sub-meter accuracy on a mobile platform. Integrated with the SLAM framework, we implement a motion-adaptive model for Inverse Perspective Mapping (IPM). Using motion estimates derived from SLAM, the experimental results show that the proposed approaches provide stable bird’s-eye view images, even with significant motion during the drive. Our real-time map generation framework is validated via a long-distance urban test and evaluated at randomly sampled points using Real-Time Kinematic (RTK)-GPS.
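The Inverse Perspective Mapping step can be sketched as intersecting each pixel's viewing ray with a flat ground plane; the motion-adaptive model in the paper essentially keeps the `pitch` argument below updated online from the SLAM motion estimate. A textbook sketch with illustrative parameters:

```python
import numpy as np

def ipm_point(u, v, fx, fy, cu, cv, h, pitch):
    """Map pixel (u, v) to ground-plane coordinates (x lateral, z forward).
    fx, fy, cu, cv: pinhole intrinsics; h: camera height [m];
    pitch: downward camera tilt [rad].  Assumes a flat ground plane."""
    ray = np.array([(u - cu) / fx, (v - cv) / fy, 1.0])  # camera frame, y down
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],     # rotate by pitch about the x-axis
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])
    r = R @ ray
    if r[1] <= 0:
        raise ValueError("ray does not intersect the ground")
    t = h / r[1]                       # scale so the ray reaches y = h
    return t * r[0], t * r[2]          # (lateral, forward) in meters
```

The principal ray lands at the classic h / tan(pitch) forward distance; small pitch jitter shifts that distance sharply at range, which is why feeding a per-frame pitch from the motion estimate stabilizes the bird's-eye view.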

Collaboration


Dive into Ayoung Kim's collaborations.

Top Co-Authors


Hyun-Taek Choi

Seoul National University
