Katherine A. Skinner
University of Michigan
Publications
Featured research published by Katherine A. Skinner.
International Conference on Robotics and Automation (ICRA) | 2018
Jie Li; Katherine A. Skinner; Ryan M. Eustice; Matthew Johnson-Roberson
This letter reports on WaterGAN, a generative adversarial network (GAN) for generating realistic underwater images from in-air image and depth pairings in an unsupervised pipeline used for color correction of monocular underwater images. Cameras onboard autonomous and remotely operated vehicles can capture high-resolution images to map the seafloor; however, underwater image formation is subject to the complex process of light propagation through the water column. The raw images retrieved are characteristically different than images taken in air due to effects, such as absorption and scattering, which cause attenuation of light at different rates for different wavelengths. While this physical process is well described theoretically, the model depends on many parameters intrinsic to the water column as well as the structure of the scene. These factors make recovery of these parameters difficult without simplifying assumptions or field calibration; hence, restoration of underwater images is a nontrivial problem. Deep learning has demonstrated great success in modeling complex nonlinear systems but requires a large amount of training data, which is difficult to compile in deep sea environments. Using WaterGAN, we generate a large training dataset of corresponding depth, in-air color images, and realistic underwater images. These data serve as input to a two-stage network for color correction of monocular underwater images. Our proposed pipeline is validated with testing on real data collected from both a pure water test tank and from underwater surveys collected in the field. Source code, sample datasets, and pretrained models are made publicly available.
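The wavelength-dependent attenuation the abstract describes can be illustrated with a minimal sketch (this is the simplified physical image-formation model, not WaterGAN itself): an in-air RGB-D pair is turned into a synthetic underwater image by attenuating the direct signal per channel and adding a backscatter veil. The coefficients below are illustrative assumptions, not calibrated values.

```python
import numpy as np

# Wavelength-dependent attenuation coefficients (1/m) for R, G, B.
# Illustrative, uncalibrated values: red attenuates fastest underwater.
BETA = np.array([0.40, 0.10, 0.05])
VEIL = np.array([0.05, 0.25, 0.35])  # hypothetical veiling (backscatter) color

def simulate_underwater(in_air_rgb, depth_m):
    """Apply a simplified image-formation model to an in-air RGB-D pair.

    in_air_rgb: (H, W, 3) floats in [0, 1]; depth_m: (H, W) range in meters.
    The direct signal decays as exp(-beta * d) per channel, and backscatter
    fills in as the complement, producing the hazy blue-green cast of raw
    underwater images.
    """
    d = depth_m[..., None]               # (H, W, 1) for channel broadcasting
    transmission = np.exp(-BETA * d)     # per-channel decay with range
    return in_air_rgb * transmission + VEIL * (1.0 - transmission)
```

Inverting this model is exactly the color-correction problem: the parameters depend on the water column and scene structure, which is what motivates learning the mapping from data instead.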
International Conference on Intelligent Robots and Systems (IROS) | 2016
Katherine A. Skinner; Matthew Johnson-Roberson
Achieving real-time perception is critical to developing a fully autonomous system that can sense, navigate, and interact with its environment. Perception tasks such as online 3D reconstruction and mapping have been intensely studied for terrestrial robotics applications. However, characteristics of the underwater domain such as light attenuation and light scattering violate the brightness constancy constraint, which is an underlying assumption in methods developed for land-based applications. Furthermore, the complex nature of light propagation underwater limits or even prevents subsea use of real-time depth sensors used in state-of-the-art terrestrial mapping techniques. There have been recent advances in the development of plenoptic (also known as light field) cameras, which use an array of micro lenses capturing both intensity and ray direction to enable color and depth measurement from a single passive sensor. This paper presents an end-to-end system to harness these cameras to produce real-time 3D reconstructions underwater. Our system builds upon the state-of-the-art in online terrestrial 3D reconstruction, transferring these approaches to the underwater domain by gathering real-time color and depth (RGB-D) data underwater using a plenoptic camera, and performing dense 3D reconstruction while compensating for attenuation effects of the underwater environment simultaneously, using a graphics processing unit (GPU) to achieve real-time performance. Results are presented for data gathered in a water tank and the proposed technique is validated quantitatively through comparison with a ground truth 3D model gathered in air to demonstrate that the proposed approach can generate accurate 3D models of objects underwater in real-time.
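A common building block of the online terrestrial 3D reconstruction the paper builds on is KinectFusion-style truncated signed distance function (TSDF) fusion. The sketch below fuses a single depth image into a voxel grid, with the camera assumed fixed at the origin and no pose tracking or attenuation compensation; it is a toy stand-in, not the paper's GPU pipeline.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth_img, voxel_size, trunc, fx, fy, cx, cy):
    """One KinectFusion-style TSDF update from a single depth image.

    tsdf, weights: (X, Y, Z) voxel grids; depth_img: (H, W) depths in meters.
    The camera sits at the origin looking down +z (no pose, for brevity).
    """
    X, Y, Z = tsdf.shape
    ix, iy, iz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    # Voxel centers in camera coordinates, grid centered on the optical axis.
    px = (ix - X / 2) * voxel_size
    py = (iy - Y / 2) * voxel_size
    pz = iz * voxel_size + 1e-6
    u = np.round(fx * px / pz + cx).astype(int)   # project into the image
    v = np.round(fy * py / pz + cy).astype(int)
    H, W = depth_img.shape
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(valid, depth_img[v.clip(0, H - 1), u.clip(0, W - 1)], 0.0)
    sdf = d - pz                                   # signed distance along the ray
    keep = valid & (d > 0) & (sdf > -trunc)        # ignore voxels far behind surface
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    # Weighted running average fuses the new measurement into the grid.
    tsdf[keep] = (tsdf[keep] * weights[keep] + tsdf_new[keep]) / (weights[keep] + 1)
    weights[keep] += 1
    return tsdf, weights
```

The zero crossing of the fused TSDF is the reconstructed surface; a plenoptic camera supplies the RGB-D input that terrestrial systems would get from an active depth sensor.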
OCEANS Conference | 2015
Katherine A. Skinner; Matthew Johnson-Roberson
This paper proposes a method for automating detection and segmentation of archaeological structures in underwater environments. Underwater archaeologists have recently taken advantage of robotic or diver-operated stereo-vision platforms to survey and map submerged archaeological sites. From the acquired stereo images, 3D reconstruction can be performed to produce high-resolution photo-mosaic maps that are metrically accurate and contain information about depth. Archaeologists can then use these maps to manually outline or sketch features of interest, such as building plans of a submerged city. These features often contain large rocks that serve as the foundation to buildings and are arranged in patterns and geometric shapes that are characteristic of human-made structures. Our proposed method first detects these large rocks based on texture and depth information. Next, we exploit the characteristic geometry of human-made structures to identify foundation rocks arranged along lines to form walls. Then we propose to optimize the outlines of these walls by using the gradient of depth to seek the local minimum of the height from the seafloor to identify the ground plane at the base of the rocks. Finally, we output contours as geo-referenced layers for geographic information system (GIS) and architectural planning software. Experiments are based on a 2010 stereo reconstruction survey of Pavlopetri, a submerged city off the coast of Greece. The results provide a proof-of-concept for automating extraction of archaeological structure in underwater environments to produce geo-referenced contours for further analysis by underwater archaeologists.
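The step that groups foundation rocks arranged along lines into walls can be sketched with RANSAC line fitting on rock centroids. This is a toy illustration, not the paper's algorithm; the iteration count and inlier threshold below are arbitrary assumptions.

```python
import numpy as np

def ransac_wall_line(points, n_iters=200, inlier_dist=0.3, seed=0):
    """Group rock centroids (N, 2) into one wall by robust line fitting.

    Returns (point_on_line, unit_direction, inlier_mask). Repeatedly samples
    two centroids, fits the line through them, and keeps the line with the
    most centroids within inlier_dist of it.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue                                # degenerate sample
        d = d / norm
        normal = np.array([-d[1], d[0]])            # perpendicular to the line
        dist = np.abs((points - points[i]) @ normal)
        inliers = dist < inlier_dist
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (points[i], d)
    return best_model[0], best_model[1], best_inliers
```

In a full system one would run this repeatedly, removing each wall's inliers to find the next wall, before refining the outlines against the depth gradient as the abstract describes.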
International Conference on Robotics and Automation (ICRA) | 2017
Katherine A. Skinner; Eduardo Iscar; Matthew Johnson-Roberson
Mapping of underwater environments is a critical task for a range of activities from monitoring coral reef habitats to surveying submerged archaeological sites. While recent advances in methods for terrestrial mapping can achieve dense 3D reconstructions of scenes in real-time, there remains the challenge of transferring these methods to the underwater domain due to characteristic effects on propagation of light through the water column that violate the brightness constancy constraint used in terrestrial techniques. Current state-of-the-art methods for underwater 3D reconstruction exploit a physical model of light propagation underwater to account for such range-dependent effects as scattering and attenuation; however, these methods necessitate careful calibration of attenuation coefficients required by the physical model, or rely on rough estimates of these coefficients from prior lab experiments. The main contribution of this paper is to develop a novel method to achieve simultaneous estimation of attenuation coefficients for color correction during structure recovery of an underwater scene by integrating this estimation directly into the bundle adjustment step, which performs non-linear optimization. To validate the proposed method, an artificial scene is submerged in a pure water tank and surveyed with a stereo camera platform to simulate an underwater robotic survey in a controlled environment. The target structure is imaged in air with an RGB-D sensor to provide ground truth structure and color, and a color calibration board is placed in the scene for further reference. Results show that the proposed method can automatically estimate a water-column-aware model for color correction of underwater images simultaneously with 3D reconstruction of the submerged scene.
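The core idea of estimating attenuation coefficients inside the optimization can be shown with a much smaller toy problem (the actual method folds this into bundle adjustment; the model here omits backscatter). Under I = J * exp(-beta * d), taking logs makes the problem linear in (log J, beta), so observing the same patches at several known ranges lets one ordinary least-squares solve recover both jointly.

```python
import numpy as np

def fit_attenuation(obs, ranges):
    """Jointly recover patch intensities J and attenuation beta, per channel.

    obs: (n_patches, n_views) observed intensities of the same patches seen
    at known ranges (n_views,). Model: I = J * exp(-beta * d), hence
    log I = log J - beta * d, linear in the unknowns (log J_1..log J_n, beta).
    """
    n_p, n_v = obs.shape
    rows, b = [], []
    for p in range(n_p):
        for v in range(n_v):
            row = np.zeros(n_p + 1)
            row[p] = 1.0             # coefficient of log J_p
            row[n_p] = -ranges[v]    # coefficient of the shared beta
            rows.append(row)
            b.append(np.log(obs[p, v]))
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
    return np.exp(x[:n_p]), x[n_p]
```

Because beta is shared across all observations, range diversity in the survey is what makes it identifiable, which is why integrating the estimate into structure recovery (where ranges are being solved for anyway) is attractive.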
OCEANS Conference | 2015
Vittorio Bichucher; Jeffrey M. Walls; Paul Ozog; Katherine A. Skinner; Ryan M. Eustice
This paper reports on a factor graph simultaneous localization and mapping framework for autonomous underwater vehicle localization based on terrain-aided navigation. The method requires no prior bathymetric map and only assumes that the autonomous underwater vehicle has the ability to sparsely sense the local water column depth, such as with a bottom-looking Doppler velocity log. Since dead-reckoned navigation is accurate in short time windows, the vehicle accumulates several water column depth point clouds, or submaps, during the course of its survey. We propose an xy-alignment procedure between these submaps in order to enforce consistent bathymetric structure over time, and thereby attempt to bound long-term navigation drift. We evaluate the submap alignment method in simulation and present performance results from multiple autonomous underwater vehicle field trials.
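The xy-alignment idea can be sketched as an exhaustive search over integer grid shifts that minimizes squared depth differences on the overlap. This toy version assumes the submaps have already been rasterized onto dense grids where sub[i, j] covers the same seafloor cell as ref[i + dx, j + dy]; the paper's method operates on the sparse DVL point clouds themselves.

```python
import numpy as np

def best_xy_shift(ref, sub, max_shift=3):
    """Find the integer (dx, dy) shift aligning two gridded bathymetry submaps.

    Scores each candidate shift by mean squared depth difference over the
    overlapping window and returns the minimizer.
    """
    best, best_err = (0, 0), np.inf
    H, W = ref.shape
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            # Overlapping windows of ref and the shifted submap.
            rx0, ry0 = max(dx, 0), max(dy, 0)
            sx0, sy0 = max(-dx, 0), max(-dy, 0)
            h, w = H - abs(dx), W - abs(dy)
            a = ref[rx0:rx0 + h, ry0:ry0 + w]
            c = sub[sx0:sx0 + h, sy0:sy0 + w]
            err = np.mean((a - c) ** 2)
            if err < best_err:
                best, best_err = (dx, dy), err
    return best
```

Each recovered shift would then enter the factor graph as a relative constraint between the dead-reckoned poses at which the two submaps were collected.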
Computer Vision and Pattern Recognition (CVPR) | 2017
Katherine A. Skinner; Matthew Johnson-Roberson
Underwater vision is subject to effects of underwater light propagation that act to absorb, scatter, and attenuate light rays between the scene and the imaging platform. Backscattering has been shown to have a strong impact on underwater images. As light interacts with particulate matter in the water column, it is scattered back towards the imaging sensor, resulting in a hazy effect across the image. A similar effect occurs in terrestrial applications in images of foggy scenes due to interaction with the atmosphere. Prior work on multi-image dehazing has relied on multiple cameras, polarization filters, or moving light sources. Single image dehazing is an ill-posed problem; proposed solutions rely on strong priors of the scene. This paper presents a novel method for underwater image dehazing using a light field camera to capture both the spatial and angular distribution of light across a scene. First, a 2D dehazing method is applied to each sub-aperture image. These dehazed images are then combined to produce a smoothed central view. Lastly, the smoothed central view is used as a reference to perform guided image filtering, resulting in a 4D dehazed underwater light field image. The developed method is validated on real light field data collected in a controlled in-lab water tank, with images taken in air for reference. This dataset is made publicly available.
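The per-sub-aperture "2D dehazing" step can be illustrated with a minimal dark-channel-prior dehazer in the style of He et al.'s single-image method; the abstract does not pin down the exact 2D method, so treat this as a stand-in, with conventional rather than paper-specific parameter values and no guided-filter refinement.

```python
import numpy as np

def dehaze_dark_channel(img, patch=7, omega=0.95, t_min=0.1):
    """Minimal single-image dehazing via the dark-channel prior.

    img: (H, W, 3) floats in [0, 1]. Estimates airlight A from the haziest
    pixels, estimates transmission t from the normalized dark channel, and
    recovers radiance as J = (I - A) / t + A.
    """
    H, W, _ = img.shape
    pad = patch // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    dark = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            # Dark channel: min over the patch and over all three channels.
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    # Airlight: color of the haziest pixels (brightest in the dark channel).
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission from the dark channel of the airlight-normalized image.
    padded_n = np.pad(img / np.maximum(A, 1e-6),
                      ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    norm_dark = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            norm_dark[i, j] = padded_n[i:i + patch, j:j + patch].min()
    t = np.clip(1.0 - omega * norm_dark, t_min, 1.0)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```

In the paper's pipeline this kind of 2D result per sub-aperture view is then fused into a smoothed central view, which serves as the guidance image for filtering the full 4D light field.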
Cancer Immunology, Immunotherapy | 2016
David D. Stenehjem; Michael Toole; Joseph Merriman; Kinjal Parikh; Stephanie Daignault; Sarah Scarlett; Peg Esper; Katherine A. Skinner; Aaron M. Udager; Srinivas K. Tantravahi; David Michael Gill; Alli M. Straubhar; Archana M. Agarwal; Kenneth F. Grossmann; Wolfram E. Samlowski; Bruce G. Redman; Neeraj Agarwal; Ajjai Alva
arXiv: Computer Vision and Pattern Recognition | 2018
Alexandra Carlson; Katherine A. Skinner; Ram Vasudevan; Matthew Johnson-Roberson
arXiv: Computer Vision and Pattern Recognition | 2018
Junming Zhang; Katherine A. Skinner; Ram Vasudevan; Matthew Johnson-Roberson
arXiv: Computer Vision and Pattern Recognition | 2018
Alexandra Carlson; Katherine A. Skinner; Ram Vasudevan; Matthew Johnson-Roberson