Michael Krainin
University of Washington
Publications
Featured research published by Michael Krainin.
The International Journal of Robotics Research | 2012
Peter Henry; Michael Krainin; Evan Herbst; Xiaofeng Ren; Dieter Fox
RGB-D cameras (such as the Microsoft Kinect) are novel sensing systems that capture RGB images along with per-pixel depth information. In this paper we investigate how such cameras can be used for building dense 3D maps of indoor environments. Such maps have applications in robot navigation, manipulation, semantic mapping, and telepresence. We present RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment. Visual and depth information are also combined for view-based loop-closure detection, followed by pose optimization to achieve globally consistent maps. We evaluate RGB-D Mapping on two large indoor environments, and show that it effectively combines the visual and shape information available from RGB-D cameras.
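The alignment described above combines sparse visual feature correspondences with dense shape-based refinement. The following is a minimal sketch of that two-stage idea under simplifying assumptions (matched 3D feature points and dense point clouds are given; the paper's actual system uses a joint optimization with RANSAC-filtered matches rather than a strict sequence):

```python
# Minimal two-stage alignment sketch, not the paper's implementation:
# a closed-form rigid transform from matched 3D feature points,
# followed by a few ICP-style refinement iterations over dense points.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def refine_icp(src_cloud, dst_cloud, R, t, iters=10):
    """Refine (R, t) by re-associating each source point with its nearest
    destination point (brute force here; a k-d tree would be used in
    practice) and re-solving the rigid transform."""
    for _ in range(iters):
        moved = src_cloud @ R.T + t
        d2 = ((moved[:, None, :] - dst_cloud[None, :, :]) ** 2).sum(-1)
        matches = dst_cloud[d2.argmin(axis=1)]
        R, t = rigid_transform(src_cloud, matches)
    return R, t
```

The feature-based estimate seeds the shape-based refinement; in RGB-D Mapping the two error terms are combined in a single joint objective instead of being run strictly in sequence.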
International Symposium on Experimental Robotics | 2014
Peter Henry; Michael Krainin; Evan Herbst; Xiaofeng Ren; Dieter Fox
RGB-D cameras are novel sensing systems that capture RGB images along with per-pixel depth information. In this paper we investigate how such cameras can be used in the context of robotics, specifically for building dense 3D maps of indoor environments. Such maps have applications in robot navigation, manipulation, semantic mapping, and telepresence. We present RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment. Visual and depth information are also combined for view-based loop closure detection, followed by pose optimization to achieve globally consistent maps. We evaluate RGB-D Mapping on two large indoor environments, and show that it effectively combines the visual and shape information available from RGB-D cameras.
International Symposium on Robotics | 2017
Albert S. Huang; Abraham Bachrach; Peter Henry; Michael Krainin; Daniel Maturana; Dieter Fox; Nicholas Roy
RGB-D cameras provide both a color image and per-pixel depth estimates. The richness of their data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on unreliable wireless links. We evaluate the effectiveness of our system for stabilizing and controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
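Since all sensing and computation for local position control run onboard, the pose estimate produced by visual odometry ultimately feeds a position control loop on the vehicle. The sketch below shows one generic form such a loop can take; the gains, hover thrust, and sign conventions are illustrative placeholders, not the controller used in the paper:

```python
# Generic PD position controller sketch (illustrative only): converts the
# estimated position/velocity from visual odometry into desired roll,
# pitch, and thrust setpoints for a quadrotor.
import numpy as np

KP, KD = 1.2, 0.8       # illustrative proportional and derivative gains
HOVER_THRUST = 0.5      # assumed normalized thrust that balances gravity

def position_controller(pos, vel, pos_ref, yaw):
    """pos, vel, pos_ref are 3-vectors in the world frame; yaw in radians."""
    err = pos_ref - pos
    acc_cmd = KP * err - KD * vel                 # desired world-frame acceleration
    # rotate the horizontal command into the body frame using the current yaw
    c, s = np.cos(yaw), np.sin(yaw)
    ax_b = c * acc_cmd[0] + s * acc_cmd[1]
    ay_b = -s * acc_cmd[0] + c * acc_cmd[1]
    # small-angle mapping from desired acceleration to attitude setpoints
    pitch_cmd = np.clip(ax_b, -0.3, 0.3)
    roll_cmd = np.clip(-ay_b, -0.3, 0.3)
    thrust_cmd = np.clip(HOVER_THRUST + 0.1 * acc_cmd[2], 0.0, 1.0)
    return roll_cmd, pitch_cmd, thrust_cmd
```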
The International Journal of Robotics Research | 2011
Michael Krainin; Peter Henry; Xiaofeng Ren; Dieter Fox
Recognizing and manipulating objects is an important task for mobile robots performing useful services in everyday environments. While existing techniques for object recognition related to manipulation provide very good results even for noisy and incomplete data, they are typically trained using data generated in an offline process. As a result, they do not enable a robot to acquire new object models as it operates in an environment. In this paper we develop an approach to building 3D models of unknown objects based on a depth camera observing the robot’s hand while moving an object. The approach integrates both shape and appearance information into an articulated Iterative Closest Point approach to track the robot’s manipulator and the object. Objects are modeled by sets of surfels, which are small patches providing occlusion and appearance information. Experiments show that our approach provides very good 3D models even when the object is highly symmetric and lacks visual features, and when the manipulator motion is noisy. Autonomous object modeling represents a step toward improved semantic understanding, which will eventually enable robots to reason about their environments in terms of objects and their relations rather than through raw sensor data.
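A hedged sketch of the surfel representation mentioned above is given below: each surfel is a small oriented patch carrying both geometry (for occlusion reasoning) and appearance, and a new measurement that lands on an existing surfel is fused into it. The running-average update is an assumption for illustration, not the authors' exact update rule:

```python
# Sketch of a surfel-based object model: small oriented patches with
# position, normal, radius, color, and a confidence count.
from dataclasses import dataclass
import numpy as np

@dataclass
class Surfel:
    position: np.ndarray   # 3D center of the patch
    normal: np.ndarray     # unit surface normal
    radius: float          # patch extent, e.g. derived from depth and pixel size
    color: np.ndarray      # RGB appearance
    confidence: float = 1.0

    def update(self, position, normal, color):
        """Fuse a new measurement associated with this surfel (running average)."""
        w = self.confidence
        self.position = (w * self.position + position) / (w + 1.0)
        blended = w * self.normal + normal
        self.normal = blended / np.linalg.norm(blended)
        self.color = (w * self.color + color) / (w + 1.0)
        self.confidence = w + 1.0
```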
The International Journal of Robotics Research | 2012
Abraham Bachrach; Sam Prentice; Ruijie He; Peter Henry; Albert S. Huang; Michael Krainin; Daniel Maturana; Dieter Fox; Nicholas Roy
RGB-D cameras provide both color images and per-pixel depth estimates. The richness of this data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on an unreliable wireless link to a ground station. However, even with accurate 3D sensing and position estimation, some parts of the environment have more perceptual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localize itself along that path, it runs the risk of becoming lost or worse. We show how the belief roadmap algorithm of Prentice and Roy (2009), a belief space extension of the probabilistic roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effectiveness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
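The planning idea can be illustrated with a toy covariance propagation: along each candidate path, process noise inflates the state covariance, measurements available on each edge (which depend on the local perceptual structure visible to the RGB-D camera) add information back, and the planner prefers the path with the least expected uncertainty. The sketch below is a simplification of the belief roadmap's factored covariance transfer, with placeholder noise and information matrices:

```python
# Toy belief-space path comparison: propagate covariance along each
# candidate path and pick the path with the smallest final uncertainty.
import numpy as np

def propagate_edge(Sigma, process_noise, info_gain):
    """EKF-style covariance update for one roadmap edge."""
    Sigma = Sigma + process_noise              # prediction: uncertainty grows
    info = np.linalg.inv(Sigma) + info_gain    # measurement: information adds
    return np.linalg.inv(info)

def path_uncertainty(Sigma0, edges):
    """edges: list of (process_noise, info_gain) matrices along one path."""
    Sigma = Sigma0
    for Q, J in edges:
        Sigma = propagate_edge(Sigma, Q, J)
    return np.trace(Sigma)

def best_path(Sigma0, candidate_paths):
    """Choose the candidate path with the least expected final uncertainty."""
    return min(candidate_paths, key=lambda edges: path_uncertainty(Sigma0, edges))
```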
International Conference on Robotics and Automation | 2011
Michael Krainin; Brian Curless; Dieter Fox
Recognizing and manipulating objects is an important task for mobile robots performing useful services in everyday environments. In this paper, we develop a system that enables a robot to grasp an object and to move it in front of its depth camera so as to build a 3D surface model of the object. We derive an information gain based variant of the next best view algorithm in order to determine how the manipulator should move the object in front of the camera. By considering occlusions caused by the robot manipulator, our technique also determines when and how the robot should re-grasp the object in order to build a complete model.
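A minimal sketch of the information-gain view selection loop follows. The voxel occupancy probabilities, per-view visibility masks, and manipulator-occlusion masks are assumed to be computed elsewhere, and the gain measure (summed binary entropy of the voxels a view would observe) is a common simplification rather than the paper's exact criterion:

```python
# Next-best-view selection sketch: score candidate views by the entropy
# of the unknown voxels they would observe, excluding voxels hidden by
# the manipulator, and pick the highest-scoring view.
import numpy as np

def voxel_entropy(p):
    """Binary entropy of per-voxel occupancy probabilities."""
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def expected_gain(occupancy, visible_mask, arm_occlusion_mask):
    """Approximate information gain of one candidate view."""
    observable = visible_mask & ~arm_occlusion_mask
    return voxel_entropy(occupancy[observable]).sum()

def next_best_view(occupancy, candidate_views):
    """candidate_views: list of (view_id, visible_mask, arm_occlusion_mask)."""
    return max(candidate_views,
               key=lambda v: expected_gain(occupancy, v[1], v[2]))[0]
```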
IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2007
Michael Krainin; Bo An; Victor R. Lesser
Through automated negotiation we aim to improve task allocation in a distributed sensor network. In particular, we consider a type of adaptive weather-sensing radar that can focus its scanning on certain regions of the atmosphere. Because of the complexity of this decision making, current control systems can computationally handle only a small number of radars. One solution is to partition the radars into smaller, independent sets, but this can lead to redundant scanning of tasks and a loss of cooperative scanning capability. With negotiation we can reduce these effects, helping to ensure that the right radars scan tasks according to the overall social welfare. We develop a distributed negotiation model in which the overall system utility improves or remains constant on each cycle. Experimental results show that, compared to the centralized task allocation mechanism, the proposed distributed mechanism achieves almost the same level of social welfare with a significantly reduced computational load.
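The monotone-improvement property (overall system utility improves or remains constant on each cycle) can be sketched as an accept-only-if-better rule on proposed task reallocations between a pair of radars. The utility function and task representation below are placeholders, not the paper's negotiation protocol:

```python
# One negotiation cycle between two radars: consider moving each of
# radar_a's tasks to radar_b and adopt the best strictly improving
# reallocation, so social welfare never decreases across cycles.
def negotiate_cycle(allocation, utility, radar_a, radar_b):
    """allocation: dict radar -> set of tasks; utility(allocation) -> float."""
    best, best_u = allocation, utility(allocation)
    for task in allocation[radar_a]:
        proposal = {r: set(ts) for r, ts in allocation.items()}
        proposal[radar_a].discard(task)
        proposal[radar_b].add(task)
        u = utility(proposal)
        if u > best_u:               # accept only if overall utility improves
            best, best_u = proposal, u
    return best
```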
Web Intelligence | 2011
Yoonheui Kim; Michael Krainin; Victor R. Lesser
Solving a coordination problem in a decentralized environment requires a large amount of resources; to solve such problems in a computationally effective manner, the innate system structure and external information must be exploited as much as possible. This work proposes new techniques for saving communication and computational resources when solving distributed constraint optimization problems with the Max-Sum algorithm in an environment where system hardware resources are clustered. These techniques facilitate effective problem solving through the use of a pre-computed policy and two-phase propagation in the Max-Sum algorithm, one phase inside each resource cluster and one among clusters. This approach achieves the same solution quality as the standard Max-Sum algorithm while reducing communication requirements by 50% on average and computational resources by 5 to 30%, depending on the specific problem instance. These experiments were performed in a realistic setting involving the scheduling of a network of as many as 192 radars in 48 clusters.
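For reference, a compact generic Max-Sum implementation for pairwise factors is sketched below. This is the baseline message-passing scheme that the clustered, two-phase variant builds on, not the clustered algorithm itself; variable domains and utility tables are assumed small and given:

```python
# Generic Max-Sum on a pairwise factor graph (baseline, not the paper's
# clustered two-phase variant).
import numpy as np

def max_sum(domains, factors, n_iters=20):
    """domains: {var: domain size}; factors: {(u, v): utility table of shape
    (|u|, |v|)}. Returns an assignment {var: value index}."""
    # factor -> variable messages, initialized to zero
    msg = {(f, x): np.zeros(domains[x]) for f in factors for x in f}
    for _ in range(n_iters):
        new_msg = {}
        for (u, v), table in factors.items():
            # variable -> factor messages: sum of messages from all *other* factors
            q_u = sum((msg[(g, u)] for g in factors if u in g and g != (u, v)),
                      np.zeros(domains[u]))
            q_v = sum((msg[(g, v)] for g in factors if v in g and g != (u, v)),
                      np.zeros(domains[v]))
            # factor -> variable messages: maximize out the other variable
            new_msg[((u, v), v)] = (table + q_u[:, None]).max(axis=0)
            new_msg[((u, v), u)] = (table + q_v[None, :]).max(axis=1)
        msg = new_msg
    # each variable picks the value with the highest summed incoming message
    beliefs = {x: sum((msg[(f, x)] for f in factors if x in f),
                      np.zeros(domains[x])) for x in domains}
    return {x: int(np.argmax(b)) for x, b in beliefs.items()}
```

In the clustered setting described above, such message updates would first be iterated among agents hosted on the same hardware cluster and only then exchanged across cluster boundaries, which is where the communication savings come from.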
International Conference on Robotics and Automation | 2012
Michael Krainin; Kurt Konolige; Dieter Fox
While Iterative Closest Point (ICP) algorithms have been successful at aligning 3D point clouds, they do not take into account constraints arising from sensor viewpoints. More recent beam-based models take into account sensor noise and viewpoint, but problems still remain. In particular, good optimization strategies are still lacking for the beam-based model. In situations of occlusion and clutter, both beam-based and ICP approaches can fail to find good solutions. In this paper, we present both an optimization method for beam-based models and a novel framework for modeling observation dependencies in beam-based models using over-segmentations. This technique enables reasoning about object extents and works well in heavy clutter. We also make available a ground-truth 3D dataset for testing algorithms in this area.
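One way to see why over-segmentation helps is sketched below (an illustrative formulation, not necessarily the paper's): instead of each depth beam independently choosing between "explained by the hypothesized object" and "explained by clutter or an occluder", all beams in a segment share a single label, which couples their observations:

```python
# Comparing an independent per-beam mixture with a segment-level model:
# in the latter, every beam within a segment must share one explanation.
import numpy as np

def independent_score(log_obj, log_clutter):
    """Each beam independently picks its better explanation."""
    return np.maximum(log_obj, log_clutter).sum()

def segment_score(log_obj, log_clutter, segment_ids):
    """Beams in the same over-segmentation segment share a label."""
    total = 0.0
    for s in np.unique(segment_ids):
        mask = segment_ids == s
        total += max(log_obj[mask].sum(), log_clutter[mask].sum())
    return total
```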
Adaptive Agents and Multi-Agent Systems | 2011
Yoonheui Kim; Michael Krainin; Victor R. Lesser