Publications


Featured research published by Karol Hausman.


International Conference on Robotics and Automation | 2013

Tracking-based interactive segmentation of textureless objects

Karol Hausman; Ferenc Balint-Benczedi; Dejan Pangercic; Zoltan-Csaba Marton; Ryohei Ueda; Kei Okada; Michael Beetz

This paper describes a textureless object segmentation approach for autonomous service robots acting in human living environments. The proposed system allows a robot to effectively segment textureless objects in cluttered scenes by leveraging its manipulation capabilities. In our pipeline, the cluttered scene is first segmented statically using a state-of-the-art classification algorithm, and interactive segmentation is then deployed to resolve the possibly ambiguous static segmentation. In the second step, sparse RGBD (RGB + Depth) features, estimated on the RGBD point cloud from the Kinect sensor, are extracted and tracked while motion is induced into the scene. Using the resulting feature poses, the features are then assigned to their corresponding objects by means of a graph-based clustering algorithm. In the final step, we reconstruct dense models of the objects from the previously clustered sparse RGBD features. We evaluated the approach on a set of scenes consisting of various textureless flat (e.g. box-like) and round (e.g. cylinder-like) objects and combinations thereof.
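
The graph-based clustering step lends itself to a short illustration. The sketch below is a simplified stand-in rather than the authors' implementation: it assumes the tracked sparse features are available as 3D trajectories (a num_features x num_frames x 3 array) and uses a hypothetical dist_tol threshold; features whose pairwise distance stays nearly constant during the induced motion are connected in a graph, and each connected component becomes one object hypothesis.

    import numpy as np

    def cluster_tracked_features(trajectories, dist_tol=0.01):
        """Group tracked 3D feature trajectories into rigid clusters.

        trajectories: array of shape (num_features, num_frames, 3) holding the
        estimated 3D position of each sparse feature in every frame. Features
        whose pairwise distance stays (nearly) constant while the scene is
        pushed are assumed to move rigidly together.
        """
        n = trajectories.shape[0]
        adjacency = np.zeros((n, n), dtype=bool)
        for i in range(n):
            for j in range(i + 1, n):
                # Distance between feature i and feature j in every frame.
                d = np.linalg.norm(trajectories[i] - trajectories[j], axis=-1)
                # Rigidly connected features keep an (almost) constant distance.
                if d.max() - d.min() < dist_tol:
                    adjacency[i, j] = adjacency[j, i] = True

        # Connected components via an iterative graph traversal (flood fill).
        labels = -np.ones(n, dtype=int)
        current = 0
        for seed in range(n):
            if labels[seed] != -1:
                continue
            stack = [seed]
            labels[seed] = current
            while stack:
                k = stack.pop()
                for m in np.flatnonzero(adjacency[k]):
                    if labels[m] == -1:
                        labels[m] = current
                        stack.append(m)
            current += 1
        return labels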


IEEE Transactions on Robotics | 2017

Interactive Perception: Leveraging Action in Perception and Perception in Action

Jeannette Bohg; Karol Hausman; Bharath Sankaran; Oliver Brock; Danica Kragic; Stefan Schaal; Gaurav S. Sukhatme

Recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment. These approaches are subsumed under the term Interactive Perception (IP). This view of perception provides the following benefits. First, interaction with the environment creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal. In this survey, we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of IP. We close this survey by discussing remaining open questions. With this survey, we hope to help define the field of Interactive Perception and to provide a valuable resource for future research.


IEEE-RAS International Conference on Humanoid Robots | 2015

Force estimation and slip detection/classification for grip control using a biomimetic tactile sensor

Zhe Su; Karol Hausman; Yevgen Chebotar; Artem Molchanov; Gerald E. Loeb; Gaurav S. Sukhatme; Stefan Schaal

We introduce and evaluate contact-based techniques to estimate tactile properties and detect manipulation events using a biomimetic tactile sensor. In particular, we estimate finger forces, and detect and classify slip events. In addition, we present a grip force controller that uses the estimation results to gently pick up objects of various weights and textures. The estimation techniques and the grip controller are experimentally evaluated on a robotic system consisting of Barrett arms and hands. Our results indicate that we are able to accurately estimate forces acting in all directions, detect incipient slip, and classify slip with a success rate of over 80%.
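
To make the grip control idea concrete, here is a minimal sketch of a slip-reactive controller of the kind the abstract describes; the function name and all numeric gains are illustrative assumptions, not values from the paper. The commanded force increases when slip is reported and otherwise relaxes toward a minimum holding force.

    def update_grip_force(current_force, slip_detected,
                          min_force=0.5, max_force=15.0,
                          increase_step=0.8, relax_rate=0.02):
        """One step of a slip-reactive grip force controller (forces in newtons).

        When the tactile pipeline reports slip, the commanded normal force is
        increased by a fixed step; otherwise it decays slowly toward the minimum
        holding force so delicate objects are not squeezed harder than necessary.
        """
        if slip_detected:
            commanded = current_force + increase_step
        else:
            commanded = current_force - relax_rate * (current_force - min_force)
        # Clamp the command to the gripper's admissible force range.
        return max(min_force, min(commanded, max_force))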


The International Journal of Robotics Research | 2015

Cooperative multi-robot control for target tracking with onboard sensing

Karol Hausman; Jörg Müller; Abishek Hariharan; Nora Ayanian; Gaurav S. Sukhatme

We consider the cooperative control of a team of robots to estimate the position of a moving target using onboard sensing. In this setting, robots are required to estimate their positions using relative onboard sensing while concurrently tracking the target. Our probabilistic localization and control method takes into account the motion and sensing capabilities of the individual robots to minimize the expected future uncertainty of the target position. Two measures of uncertainty are extensively evaluated and compared: mutual information and the trace of the extended Kalman filter covariance. Our approach reasons about multiple possible sensing topologies and incorporates an efficient topology switching technique to generate locally optimal controls in polynomial time. Simulations illustrate the performance of our approach and demonstrate its flexibility in finding suitable sensing topologies depending on the limited sensing capabilities of the robots and the movements of the target. Furthermore, we demonstrate the applicability of our method in various experiments with single and multiple quadrotor robots tracking a ground vehicle in an indoor environment.
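
The trace-of-covariance criterion can be sketched as a greedy one-step lookahead: predict the extended Kalman filter covariance of the target for every candidate joint control and pick the one with the smallest trace. The sketch below is a brute-force illustration under assumed interfaces (the sensing_model callback and the per-robot control sets are hypothetical); it is not the polynomial-time topology-switching method of the paper.

    import itertools

    import numpy as np

    def predicted_trace(P, F, Q, H, R):
        """Trace of the target covariance after one EKF predict/update cycle."""
        P_pred = F @ P @ F.T + Q                       # prediction step
        S = H @ P_pred @ H.T + R                       # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
        P_upd = (np.eye(P.shape[0]) - K @ H) @ P_pred  # updated covariance
        return float(np.trace(P_upd))

    def select_controls(P, F, Q, per_robot_controls, sensing_model):
        """Pick the joint control with the smallest predicted covariance trace.

        sensing_model(joint_control) must return the measurement Jacobian H and
        noise covariance R produced by the chosen robot motions. This is a
        brute-force one-step lookahead over all joint controls.
        """
        best_control, best_cost = None, np.inf
        for joint_control in itertools.product(*per_robot_controls):
            H, R = sensing_model(joint_control)
            cost = predicted_trace(P, F, Q, H, R)
            if cost < best_cost:
                best_control, best_cost = joint_control, cost
        return best_control, best_cost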


International Conference on Robotics and Automation | 2015

Active articulation model estimation through interactive perception

Karol Hausman; Scott Niekum; Sarah Osentoski; Gaurav S. Sukhatme

We introduce a particle filter-based approach to representing and actively reducing uncertainty over articulated motion models. The presented method provides a probabilistic model that integrates visual observations with feedback from manipulation actions to best characterize a distribution of possible articulation models. We evaluate several action selection methods to efficiently reduce the uncertainty about the articulation model. The full system is experimentally evaluated using a PR2 mobile manipulator. Our experiments demonstrate that the proposed system allows for intelligent reasoning about sparse, noisy data in a number of common manipulation scenarios.
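
A minimal sketch of the underlying representation, assuming a user-supplied likelihood function and leaving out the action-selection machinery: each particle is one articulation hypothesis, weights are updated from observed object poses, and the filter's entropy serves as the uncertainty an active strategy would try to reduce. This is an illustration, not the system evaluated on the PR2.

    import numpy as np

    class ArticulationParticleFilter:
        """Minimal particle filter over candidate articulation models.

        Each particle is an articulation hypothesis, e.g. ('revolute', params)
        or ('prismatic', params). likelihood_fn(observation, particle) returns
        the likelihood of an observed object pose under that hypothesis.
        """

        def __init__(self, particles, likelihood_fn):
            self.particles = list(particles)
            self.likelihood_fn = likelihood_fn
            self.weights = np.full(len(self.particles), 1.0 / len(self.particles))

        def update(self, observation):
            """Reweight particles with the observation and resample if needed."""
            self.weights *= np.array(
                [self.likelihood_fn(observation, p) for p in self.particles])
            self.weights /= self.weights.sum()
            # Resample when the effective sample size drops below half.
            if 1.0 / np.sum(self.weights ** 2) < 0.5 * len(self.particles):
                idx = np.random.choice(
                    len(self.particles), size=len(self.particles), p=self.weights)
                self.particles = [self.particles[i] for i in idx]
                self.weights[:] = 1.0 / len(self.particles)

        def entropy(self):
            """Model uncertainty; an active strategy would pick the manipulation
            action expected to reduce this the most."""
            w = self.weights[self.weights > 0]
            return float(-np.sum(w * np.log(w)))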


International Symposium on Experimental Robotics | 2016

Cooperative Control for Target Tracking with Onboard Sensing

Karol Hausman; Jörg Müller; Abishek Hariharan; Nora Ayanian; Gaurav S. Sukhatme

We consider the cooperative control of a team of robots to estimate the position of a moving target using onboard sensing. In particular, we do not assume that the robot positions are known, but estimate their positions using relative onboard sensing. Our probabilistic localization and control method takes into account the motion and sensing capabilities of the individual robots to minimize the expected future uncertainty of the target position. It reasons about multiple possible sensing topologies and incorporates an efficient topology switching technique to generate locally optimal controls in polynomial time. Simulations show the performance of our approach and demonstrate its flexibility in finding suitable sensing topologies depending on the limited sensing capabilities of the robots and the movements of the target. Furthermore, we demonstrate the applicability of our method in various experiments with single and multiple quadrotor robots tracking a ground vehicle in an indoor environment.


International Conference on Robotics and Automation | 2016

Self-calibrating multi-sensor fusion with probabilistic measurement validation for seamless sensor switching on a UAV

Karol Hausman; Stephan Weiss; Roland Brockers; Larry H. Matthies; Gaurav S. Sukhatme

Fusing data from multiple sensors on board a mobile platform can significantly augment its state estimation abilities and enable autonomous traversals of different domains by adapting to changing signal availabilities. However, due to the need for accurate calibration and initialization of the sensor ensemble, as well as coping with erroneous measurements that are acquired at different rates with various delays, multi-sensor fusion still remains a challenge. In this paper, we introduce a novel multi-sensor fusion approach for agile aerial vehicles that allows for measurement validation and seamless switching between sensors based on statistical signal quality analysis. Moreover, it is capable of self-initialization of its extrinsic sensor states. These initialized states are maintained in the framework such that the system can continuously self-calibrate. We implement this framework on board a small aerial vehicle and demonstrate the effectiveness of the above capabilities on real data. As an example, we fuse GPS data, ultra-wideband (UWB) range measurements, visual pose estimates, and IMU data. Our experiments demonstrate that our system is able to seamlessly filter and switch between different sensor modalities at run time.
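
Measurement validation of this kind is often realized with a chi-square gate on the filter innovation. The sketch below illustrates that idea under assumed interfaces (the tuple layout of candidate sensor readings and the fixed 3-D gate are simplifications), with the filter falling back to the next sensor in a preference order whenever the preferred one fails its gate; it is not the paper's implementation.

    import numpy as np

    def measurement_valid(z, z_pred, H, P, R, gate=7.81):
        """Chi-square gating of a single measurement.

        The squared Mahalanobis distance of the innovation is compared to a gate
        (7.81 is the 95% chi-square quantile for a 3-D measurement; in practice
        the gate is matched to each sensor's measurement dimension). Statistical
        outliers are rejected instead of being fused.
        """
        innovation = np.asarray(z, dtype=float) - np.asarray(z_pred, dtype=float)
        S = H @ P @ H.T + R                       # innovation covariance
        d2 = float(innovation @ np.linalg.solve(S, innovation))
        return d2 <= gate

    def choose_measurement(candidates, P):
        """Return the first gated-in measurement from sensors in preference order.

        candidates is a list of (sensor_name, z, z_pred, H, R) tuples, e.g.
        visual pose first, then UWB range, then GPS; the filter falls back to
        the next sensor whenever the preferred one fails its gate.
        """
        for name, z, z_pred, H, R in candidates:
            if measurement_valid(z, z_pred, H, P, R):
                return name, z
        return None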


International Conference on Robotics and Automation | 2017

Observability-Aware Trajectory Optimization for Self-Calibration With Application to UAVs

Karol Hausman; James A. Preiss; Gaurav S. Sukhatme; Stephan Weiss

We study the nonlinear observability of a system's states in view of how well they are observable and what control inputs would improve the convergence of their estimates. We use these insights to develop an observability-aware trajectory-optimization framework for nonlinear systems that produces trajectories well suited for self-calibration. Our method reasons about the quality of observability while respecting system dynamics and motion constraints to yield the optimal trajectory for rapid convergence of the self-calibration states (or other user-chosen states). Self-calibration trials with a real and a simulated quadrotor provide compelling evidence that the proposed method is both faster and more accurate than other state-of-the-art approaches.
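
One common way to score observability along a candidate trajectory is the empirical observability Gramian. The sketch below is a hedged illustration rather than the paper's exact formulation: it builds the Gramian by finite differences of the simulated measurement sequence with respect to the initial (including calibration) states and returns its smallest eigenvalue, which a trajectory optimizer could then maximize subject to dynamics and motion constraints. The simulate_measurements callback is an assumed interface.

    import numpy as np

    def empirical_observability_score(simulate_measurements, x0, eps=1e-4):
        """Score how well the initial state can be recovered from a trajectory.

        simulate_measurements(x0) must return the stacked measurement vector
        obtained by rolling the system out from initial state x0 along the
        candidate trajectory. The empirical observability Gramian is built from
        finite differences, and its smallest eigenvalue is the score: larger
        means the weakest state direction is better observable.
        """
        x0 = np.asarray(x0, dtype=float)
        y0 = simulate_measurements(x0)
        J = np.zeros((len(y0), len(x0)))
        for i in range(len(x0)):
            dx = np.zeros(len(x0))
            dx[i] = eps
            # Sensitivity of the whole measurement sequence to state i.
            J[:, i] = (simulate_measurements(x0 + dx) - y0) / eps
        gramian = J.T @ J
        return float(np.linalg.eigvalsh(gramian)[0])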


International Symposium on Experimental Robotics | 2016

Generalizing Regrasping with Supervised Policy Learning

Yevgen Chebotar; Karol Hausman; Oliver Kroemer; Gaurav S. Sukhatme; Stefan Schaal

We present a method for learning a general regrasping behavior by using supervised policy learning. First, we use reinforcement learning to learn linear regrasping policies, with a small number of parameters, for single objects. Next, a general high-dimensional regrasping policy is learned in a supervised manner by using the outputs of the individual policies. In our experiments with multiple objects, we show that learning low-dimensional policies makes reinforcement learning feasible with a small amount of data. Our experiments indicate that the general high-dimensional policy learned using our method outperforms the respective linear policies on each of the single objects that they were trained on. Moreover, the general policy is able to generalize to a novel object that was not present during training.
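
The two-stage idea can be sketched as policy distillation: generate state-action pairs from the per-object linear policies and fit one general policy to them in a supervised way. In the sketch below the general policy is a plain ridge regression to keep the example short, whereas the paper learns a high-dimensional policy; all names and the data layout are assumptions.

    import numpy as np

    def distill_policies(linear_gains, sampled_states, ridge=1e-3):
        """Fit one general policy to the outputs of per-object linear policies.

        linear_gains is a list of per-object gain matrices K (action = K @ state)
        learned separately, e.g. by reinforcement learning; sampled_states is a
        matching list of arrays of shape (num_samples, state_dim). Their outputs
        form a supervised dataset, and a single ridge-regression policy is fit
        to it.
        """
        X, Y = [], []
        for K, states in zip(linear_gains, sampled_states):
            for s in states:
                X.append(s)
                Y.append(K @ s)               # "expert" action for this object
        X, Y = np.asarray(X), np.asarray(Y)
        # Ridge-regularized least squares mapping states to actions.
        W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
        return lambda state: np.asarray(state) @ W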


Robotics: Science and Systems | 2017

Trajectory Optimization for Self-Calibration and Navigation

James A. Preiss; Karol Hausman; Gaurav S. Sukhatme; Stephan Weiss

Trajectory generation approaches for mobile robots generally aim to optimize with respect to a cost function such as energy, execution time, or other mission-relevant parameters within the constraints of vehicle dynamics and obstacles in the environment. We propose to add the cost of state observability to the trajectory optimization in order to ensure fast and accurate state estimation throughout the mission while still respecting the constraints of vehicle dynamics and the environment. Our approach finds a dynamically feasible estimation-optimized trajectory in a sequence of connected convex polytopes representing free space in the environment. In addition, we show a statistical procedure that enables observability-aware trajectory optimization for heterogeneous states in the system both in magnitude and units, which was not supported in previous formulations. We validate our approach with extensive simulations of a visual-inertial state estimator on an aerial platform as a specific realization of our general method. We show that the optimized trajectories lead to more accurate navigation while eliminating the need for a separate calibration procedure.
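
The free-space representation mentioned above is easy to illustrate: each polytope is a set of half-space constraints A x <= b, and, by convexity, a straight segment whose endpoints lie in the same polytope stays entirely inside free space. The sketch below checks this corridor property for a sequence of waypoints; it is a simplified feasibility check, not the paper's trajectory optimizer.

    import numpy as np

    def in_polytope(point, A, b, tol=1e-9):
        """True if the point satisfies the half-space representation A x <= b."""
        return bool(np.all(A @ point <= b + tol))

    def corridor_feasible(waypoints, polytopes):
        """Check that a waypoint sequence stays inside a corridor of convex polytopes.

        polytopes is a list of (A, b) half-space representations of free space.
        Because each polytope is convex, it suffices that every pair of
        consecutive waypoints shares at least one polytope: the straight segment
        between them then lies entirely in free space.
        """
        for p, q in zip(waypoints[:-1], waypoints[1:]):
            if not any(in_polytope(p, A, b) and in_polytope(q, A, b)
                       for A, b in polytopes):
                return False
        return True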

Collaboration


Dive into Karol Hausman's collaborations.

Top Co-Authors

Gaurav S. Sukhatme (University of Southern California)
Yevgen Chebotar (University of Southern California)
Artem Molchanov (University of Southern California)
Zhe Su (University of Southern California)
Eric Heiden (University of Southern California)
Gerald E. Loeb (University of Southern California)
Joseph J. Lim (Massachusetts Institute of Technology)