
Publication


Featured research published by Gary R. Bradski.


international conference on computer vision | 2011

ORB: An efficient alternative to SIFT or SURF

Ethan Rublee; Vincent Rabaud; Kurt Konolige; Gary R. Bradski

Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments that ORB is two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch tracking on a smartphone.
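
OpenCV ships an implementation of ORB, so the descriptor can be exercised directly. Below is a minimal sketch that detects ORB keypoints in two images and matches the binary descriptors with a Hamming-distance brute-force matcher; the file names and parameter values are illustrative placeholders, not taken from the paper.

import cv2

# Load two grayscale images (placeholder paths).
img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Oriented FAST keypoints + rotation-aware BRIEF descriptors.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# ORB descriptors are binary strings, so Hamming distance is the natural metric.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches, best distance {matches[0].distance}")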


intelligent robots and systems | 2010

Fast 3D recognition and pose using the Viewpoint Feature Histogram

Radu Bogdan Rusu; Gary R. Bradski; Romain Thibaux; John M. Hsu

We present the Viewpoint Feature Histogram (VFH), a descriptor for 3D point cloud data that encodes geometry and viewpoint. We demonstrate experimentally on a set of 60 objects captured with stereo cameras that VFH can be used as a distinctive signature, allowing simultaneous recognition of the object and its pose. The pose is accurate enough for robot manipulation, and the computational cost is low enough for real time operation. VFH was designed to be robust to large surface noise and missing depth information in order to work reliably on stereo data.
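
The full VFH descriptor is available in the Point Cloud Library; the sketch below only illustrates its viewpoint component, a histogram of angles between surface normals and the direction back to the sensor, computed with Open3D and NumPy. The file name, search radius, and bin count are assumptions made for the example.

import numpy as np
import open3d as o3d

# Placeholder input: a segmented object cluster with the sensor at the origin.
pcd = o3d.io.read_point_cloud("object_cluster.pcd")
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.03, max_nn=30))

points = np.asarray(pcd.points)
normals = np.asarray(pcd.normals)
viewpoint = np.zeros(3)  # sensor origin

# Cosine of the angle between each normal and the direction to the viewpoint.
to_view = viewpoint - points
to_view /= np.linalg.norm(to_view, axis=1, keepdims=True)
cos_angle = np.sum(normals * to_view, axis=1)

# One normalized histogram per cluster acts as a compact viewpoint signature.
hist, _ = np.histogram(cos_angle, bins=64, range=(-1.0, 1.0))
descriptor = hist / hist.sum()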


asian conference on computer vision | 2012

Model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes

Stefan Hinterstoisser; Vincent Lepetit; Slobodan Ilic; Stefan Johannes Josef Holzer; Gary R. Bradski; Kurt Konolige; Nassir Navab

We propose a framework for automatic modeling, detection, and tracking of 3D objects with a Kinect. The detection part is mainly based on the recent template-based LINEMOD approach [1] for object detection. We show how to build the templates automatically from 3D models, and how to estimate the 6 degrees-of-freedom pose accurately and in real time. The pose estimation and the color information allow us to check the detection hypotheses and improve the correct detection rate by 13% with respect to the original LINEMOD. These improvements make our framework suitable for object manipulation in robotics applications. Moreover, we propose a new dataset made of 15 registered, 1100+ frame video sequences of 15 different objects for the evaluation of future competing methods.
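
At the heart of the template-based detection is a similarity score over image gradient orientations. The sketch below shows that idea in a deliberately simplified form, comparing a template against an equally sized image window; the actual LINEMOD method quantizes and spreads orientations and uses precomputed lookup tables for speed, none of which is reproduced here.

import cv2
import numpy as np

def gradient_orientations(img):
    # Per-pixel gradient orientation and magnitude from Sobel derivatives.
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    return np.arctan2(gy, gx), np.hypot(gx, gy)

def similarity(template, window, mag_thresh=30.0):
    # Compare orientations only where the template has strong gradients.
    t_ori, t_mag = gradient_orientations(template)
    w_ori, _ = gradient_orientations(window)
    strong = t_mag > mag_thresh
    # |cos| of the orientation difference makes the score invariant to contrast polarity.
    return float(np.abs(np.cos(t_ori[strong] - w_ori[strong])).mean())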


international conference on computer vision | 2011

CAD-model recognition and 6DOF pose estimation using 3D cues

Aitor Aldoma; Markus Vincze; Nico Blodow; David Gossow; Suat Gedikli; Radu Bogdan Rusu; Gary R. Bradski

This paper focuses on developing a fast and accurate 3D feature for use in object recognition and pose estimation for rigid objects. More specifically, given a set of CAD models of different objects representing our knowledge of the world - obtained using high-precision scanners that deliver accurate and noiseless data - our goal is to identify and estimate their pose in a real scene obtained by a depth sensor like the Microsoft Kinect. Borrowing ideas from the Viewpoint Feature Histogram (VFH) due to its computational efficiency and recognition performance, we describe the Clustered Viewpoint Feature Histogram (CVFH) and the camera's roll histogram, together with our recognition framework, to show that it can be effectively used to recognize objects and their 6DOF pose in real environments, dealing with partial occlusion, noise, and different sensor attributes for training and recognition data. We show that CVFH outperforms VFH and present recognition results using the Microsoft Kinect sensor on a set of 44 objects.
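
Recognition with such global descriptors reduces to matching the descriptor of a segmented scene cluster against descriptors computed on views rendered from the CAD models. A hedged sketch of that lookup with a k-d tree is given below; the file names, array layouts, and helper names are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.spatial import cKDTree

# library_descriptors: (n_views, d) descriptors from rendered CAD model views.
# view_labels[i] holds (object_id, view_pose) for row i of the library.
library_descriptors = np.load("cad_view_descriptors.npy")
view_labels = np.load("cad_view_labels.npy", allow_pickle=True)

tree = cKDTree(library_descriptors)

def recognize(scene_descriptor, k=3):
    # Return the k nearest (object_id, view_pose) candidates and their distances.
    dists, idxs = tree.query(scene_descriptor, k=k)
    return [(view_labels[i], d) for i, d in zip(np.atleast_1d(idxs), np.atleast_1d(dists))]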


international conference on robotics and automation | 2010

Autonomous door opening and plugging in with a personal robot

Wim Meeussen; Melonee Wise; Stuart Glaser; Sachin Chitta; Conor McGann; Patrick Mihelich; Eitan Marder-Eppstein; Marius Muja; Victor Eruhimov; Tully Foote; John M. Hsu; Radu Bogdan Rusu; Bhaskara Marthi; Gary R. Bradski; Kurt Konolige; Brian P. Gerkey; Eric Berger

We describe an autonomous robotic system capable of navigating through an office environment, opening doors along the way, and plugging itself into electrical outlets to recharge as needed. We demonstrate through extensive experimentation that our robot executes these tasks reliably, without requiring any modification to the environment. We present robust detection algorithms for doors, door handles, and electrical plugs and sockets, combining vision and laser sensors. We show how to overcome the unavoidable shortcomings of perception by integrating compliant control into manipulation motions. We present a visual-differencing approach to high-precision plug insertion that avoids the need for high-precision hand-eye calibration.


european conference on computer vision | 2010

Depth-encoded hough voting for joint object detection and shape recovery

Min Sun; Gary R. Bradski; Bing-Xin Xu; Silvio Savarese

Detecting objects, estimating their pose, and recovering 3D shape information are critical problems in many vision and robotics applications. This paper addresses the above needs by proposing a new method called DEHV - Depth-Encoded Hough Voting detection scheme. Inspired by the Hough voting scheme introduced in [13], DEHV incorporates depth information into the process of learning distributions of image features (patches) representing an object category. DEHV takes advantage of the interplay between the scale of each object patch in the image and its distance (depth) from the corresponding physical patch attached to the 3D object. DEHV jointly detects objects, infers their categories, estimates their pose, and infers/decodes object depth maps from either a single image (when no depth maps are available in testing) or a single image augmented with a depth map (when one is available in testing). Extensive quantitative and qualitative experimental analysis on existing datasets [6,9,22] and a newly proposed 3D table-top object category dataset shows that our DEHV scheme obtains competitive detection and pose estimation results as well as convincing 3D shape reconstruction from just a single uncalibrated image. Finally, we demonstrate that our technique can be successfully employed as a key building block in two application scenarios (highly accurate 6 degrees of freedom (6 DOF) pose estimation and 3D object modeling).
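
The relationship DEHV exploits is that a fixed metric offset from a patch to the object centre projects to fewer image pixels the farther the patch is from the camera. The toy sketch below accumulates such depth-scaled votes in an image-sized accumulator; the focal length, accumulator resolution, and patch record layout are assumptions for illustration only.

import numpy as np

def hough_votes(patches, focal_px, image_shape):
    # patches: iterable of (u, v, depth_m, offset_x_m, offset_y_m) per detected patch.
    accumulator = np.zeros(image_shape, dtype=np.float32)
    for u, v, depth, dx, dy in patches:
        # A metric offset shrinks in pixels as depth grows.
        vote_u = int(round(u + focal_px * dx / depth))
        vote_v = int(round(v + focal_px * dy / depth))
        if 0 <= vote_v < image_shape[0] and 0 <= vote_u < image_shape[1]:
            accumulator[vote_v, vote_u] += 1.0
    return accumulator  # peaks indicate likely object centres

acc = hough_votes([(120, 80, 0.9, 0.05, -0.02)], focal_px=525.0, image_shape=(480, 640))
peak_v, peak_u = np.unravel_index(np.argmax(acc), acc.shape)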


international conference on computer vision | 2009

Detecting and segmenting objects for mobile manipulation

Radu Bogdan Rusu; Andreas Holzbach; Michael Beetz; Gary R. Bradski

This paper proposes a novel 3D scene interpretation approach for robots in mobile manipulation scenarios using a set of 3D point features (Fast Point Feature Histograms) and probabilistic graphical methods (Conditional Random Fields). Our system uses real-time stereo with textured light to obtain dense depth maps in the robot manipulator's working space. For the purposes of manipulation, we want to interpret the planar supporting surfaces of the scene and recognize and segment the object classes into their primitive parts in 6 degrees of freedom (6DOF), so that the robot knows what it is attempting to use and where it may be handled. The scene interpretation algorithm uses a two-layer classification scheme: i) we estimate Fast Point Feature Histograms (FPFH) as local 3D point features to segment the objects of interest into geometric primitives; and ii) we learn and categorize object classes using a novel Global Fast Point Feature Histogram (GFPFH) scheme which uses the previously estimated primitives at each point. To show the validity of our approach, we analyze the proposed system for the problem of recognizing the object class of 20 objects in 500 table-setting scenarios. Our algorithm identifies the planar surfaces, decomposes the scene and objects into geometric primitives with 98.27% accuracy, and uses the geometric primitives to identify the object class with an accuracy of 96.69%.
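
For the first layer, per-point FPFH features can be computed with off-the-shelf implementations such as PCL or Open3D. A minimal Open3D sketch follows; the radii and file name are assumptions, and the CRF labelling layer described above is not shown.

import open3d as o3d

pcd = o3d.io.read_point_cloud("table_scene.pcd")  # placeholder input cloud
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))

# One 33-bin FPFH histogram per point, stored column-wise in fpfh.data.
fpfh = o3d.pipelines.registration.compute_fpfh_feature(
    pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=100))
print(fpfh.data.shape)  # (33, n_points)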


international conference on robotics and automation | 2011

REIN - A fast, robust, scalable REcognition INfrastructure

Marius Muja; Radu Bogdan Rusu; Gary R. Bradski; David G. Lowe

A robust robot perception system intended to enable object manipulation needs to be able to accurately identify objects and their pose at high speeds. Since objects vary considerably in surface properties, rigidity, and articulation, no single detector or object estimation method has been shown to provide reliable detection across object types to date. This indicates the need for an architecture that is able to quickly swap detectors, pose estimators, and filters, or to run them in parallel or serial and combine their results, preferably without any code modifications at all. In this paper, we present our implementation of such an infrastructure, ReIn (REcognition INfrastructure), to address these needs. ReIn is able to combine a multitude of 2D/3D object recognition and pose estimation techniques in parallel as dynamically loadable plugins. It also provides an extremely efficient data passing architecture, and offers the possibility to change the parameters and initial settings of these techniques during their execution. In the course of this work we introduce two new classifiers designed for robot perception needs: BiGGPy (Binarized Gradient Grid Pyramids) for scalable 2D classification and VFH (Viewpoint Feature Histograms) for 3D classification and pose. We then show how these two classifiers can be easily combined using ReIn to solve object recognition and pose identification problems.
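
The sketch below illustrates the plugin idea in miniature: detectors share a single interface, are registered by name, and can be chained or swapped without touching client code. It mirrors the architecture described above rather than the actual ReIn (C++/ROS) API, and all names are hypothetical.

from typing import Dict, List, Protocol

class Detector(Protocol):
    def detect(self, image) -> List[dict]:
        """Return a list of {'label': ..., 'pose': ..., 'score': ...} hypotheses."""

class RecognitionPipeline:
    def __init__(self) -> None:
        self._plugins: Dict[str, Detector] = {}

    def register(self, name: str, plugin: Detector) -> None:
        # Detectors can be added or replaced without modifying client code.
        self._plugins[name] = plugin

    def run(self, image, names: List[str]) -> List[dict]:
        # Serial execution shown here; the plugins could equally run in parallel.
        hypotheses: List[dict] = []
        for name in names:
            hypotheses.extend(self._plugins[name].detect(image))
        return sorted(hypotheses, key=lambda h: h["score"], reverse=True)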


ieee-ras international conference on humanoid robots | 2009

Perception for mobile manipulation and grasping using active stereo

Radu Bogdan Rusu; Andreas Holzbach; Rosen Diankov; Gary R. Bradski; Michael Beetz

In this paper we present a comprehensive perception system with applications to mobile manipulation and grasping for personal robotics. Our approach makes use of dense 3D point cloud data acquired using stereo vision cameras by projecting textured light onto the scene. To create models suitable for grasping, we extract the supporting planes and model object clusters with different surface geometric primitives. The resultant decoupled primitive point clusters are then reconstructed as smooth triangular mesh surfaces, and their use is validated in grasping experiments using OpenRAVE [1]. To annotate the point cloud data with primitive geometric labels we make use of our previously proposed Fast Point Feature Histograms [2] and probabilistic graphical methods (Conditional Random Fields), and obtain a classification accuracy of 98.27% for different object geometries. We show the validity of our approach by analyzing the proposed system for the problem of building object models usable in grasping applications with the PR2 robot (see Figure 1).
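
The early stages of such a pipeline, extracting the supporting plane and clustering the remaining points into object candidates, can be sketched with Open3D as below. The thresholds and the input file are assumptions; meshing, labelling, and grasping are not shown.

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("tabletop_scene.pcd")  # placeholder dense cloud

# Fit the dominant plane (the supporting surface) with RANSAC and discard it.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3, num_iterations=1000)
objects = pcd.select_by_index(inliers, invert=True)

# Euclidean clustering of what remains into candidate object clusters.
labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))
print(f"found {labels.max() + 1} object clusters")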


international conference on computer vision | 2012

Technical demonstration on model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes

Stefan Hinterstoisser; Vincent Lepetit; Slobodan Ilic; Stefan M. Holzer; Kurt Konolige; Gary R. Bradski; Nassir Navab

In this technical demonstration, we will show our framework for automatic modeling, detection, and tracking of arbitrary texture-less 3D objects with a Kinect. The detection is mainly based on the recent template-based LINEMOD approach [1], while the automatic template learning from reconstructed 3D models, the fast pose estimation, and the quick and robust false positive removal are novel additions. In this demonstration, we will show each step of our pipeline, starting with the fast reconstruction of arbitrary 3D objects, followed by the automatic learning and the robust detection and pose estimation of the reconstructed objects in real time. As we will show, this makes our framework suitable for object manipulation, e.g. in robotics applications.

Collaboration


Dive into Gary R. Bradski's collaboration.

Top Co-Authors

Min Sun

University of Michigan

Vincent Lepetit

Graz University of Technology