
Publication


Featured research published by Stan Birchfield.


International Conference on Computer Vision | 2009

Adaptive fragments-based tracking of non-rigid objects using level sets

Prakash Chockalingam; S. Nalin Pradeep; Stan Birchfield

We present an approach to visual tracking based on dividing a target into multiple regions, or fragments. The target is represented by a Gaussian mixture model in a joint feature-spatial space, with each ellipsoid corresponding to a different fragment. The fragments are automatically adapted to the image data, being selected by an efficient region-growing procedure and updated according to a weighted average of the past and present image statistics. Modeling of target and background are performed in a Chan-Vese manner, using the framework of level sets to preserve accurate boundaries of the target. The extracted target boundaries are used to learn the dynamic shape of the target over time, enabling tracking to continue under total occlusion. Experimental results on a number of challenging sequences demonstrate the effectiveness of the technique.
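The weighted-average update of fragment statistics described above can be sketched as a simple blend of past and present measurements. This is an illustration under assumed parameters (the blending weight `alpha` and the per-channel mean representation are assumptions), not the authors' code:

```python
def update_fragment_stats(past_mean, present_mean, alpha=0.9):
    """Blend a fragment's stored feature statistics with the current
    frame's measurement; `alpha` (an assumed parameter) weights the past."""
    return [alpha * p + (1.0 - alpha) * c
            for p, c in zip(past_mean, present_mean)]

# Example: a fragment's (R, G, B) mean drifts slowly toward new evidence,
# keeping the model stable against transient appearance changes.
old = [100.0, 120.0, 140.0]
new = [110.0, 120.0, 130.0]
print(update_fragment_stats(old, new))
```

A high `alpha` makes the model robust to brief occlusions at the cost of slower adaptation to genuine appearance change.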


International Conference on Robotics and Automation | 2011

Classification of clothing using interactive perception

Bryan Willimon; Stan Birchfield; Ian D. Walker

We present a system for automatically extracting and classifying items in a pile of laundry. Using only visual sensors, the robot identifies and extracts items sequentially from the pile. When an item has been removed and isolated, a model is captured of the shape and appearance of the object, which is then compared against a database of known items. The classification procedure relies upon silhouettes, edges, and other low-level image measurements of the articles of clothing. The contributions of this paper are a novel method for extracting articles of clothing from a pile of laundry and a novel method of classifying clothing using interactive perception. Experiments demonstrate the ability of the system to efficiently classify and label articles of clothing into one of six categories (pants, shorts, short-sleeve shirt, long-sleeve shirt, socks, or underwear). These results show that, on average, classification rates using robot interaction are 59% higher than those that do not use interaction.
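The compare-against-database step can be illustrated with a toy nearest-neighbor match. The feature vectors and category names below are hypothetical placeholders (the paper uses silhouettes, edges, and other low-level measurements), purely to show the matching idea:

```python
import math

# Hypothetical 2D feature vectors for known items (assumed, for illustration).
DATABASE = {
    "pants": [0.9, 0.3],
    "socks": [0.2, 0.6],
    "shirt": [0.7, 0.8],
}

def classify_item(features):
    """Return the database entry nearest to the measured feature vector."""
    return min(DATABASE, key=lambda k: math.dist(DATABASE[k], features))

print(classify_item([0.85, 0.35]))  # nearest entry is "pants"
```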


International Conference on Biometrics: Theory, Applications and Systems | 2008

Adapting Starburst for Elliptical Iris Segmentation

Wayne J. Ryan; Damon L. Woodard; Andrew T. Duchowski; Stan Birchfield

Fitting an ellipse to the iris boundaries accounts for the projective distortions present in off-axis images of the eye and provides the contour fitting necessary for the dimensionless mapping used in leading iris recognition algorithms. Previous iris segmentation efforts have either focused on fitting circles to pupillary and limbic boundaries or assigning labels to image pixels. This paper approaches the iris segmentation problem by adapting the Starburst algorithm to locate pupillary and limbic feature pixels used to fit a pair of ellipses. The approach is evaluated by comparing the fits to ground truth. Two metrics are used in the evaluation: the first based on the algebraic distance between ellipses, the second based on ellipse chamfer images. Results are compared to segmentations produced by ND_IRIS over randomly selected images from the Iris Challenge Evaluation database. Statistical evidence shows significant improvement of Starburst's elliptical fits over the circular fits on which ND_IRIS relies.
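The first evaluation metric is based on algebraic distance. A minimal sketch of that idea, assuming the ellipse is represented as a general conic Q(x, y) = ax² + bxy + cy² + dx + ey + f (a standard representation, though the helper below is an illustration, not the paper's code):

```python
def algebraic_distance(conic, x, y):
    """Evaluate the conic at (x, y); zero exactly on the ellipse,
    growing in magnitude as the point moves away from it."""
    a, b, c, d, e, f = conic
    return a * x * x + b * x * y + c * y * y + d * x + e * y + f

# Unit circle as a conic: x^2 + y^2 - 1 = 0.
circle = (1.0, 0.0, 1.0, 0.0, 0.0, -1.0)
print(algebraic_distance(circle, 1.0, 0.0))  # 0.0, point lies on the curve
print(algebraic_distance(circle, 2.0, 0.0))  # 3.0, point lies off the curve
```

Algebraic distance is cheap to compute but biased toward high-curvature regions, which is one reason the paper also uses a second, chamfer-image-based metric.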


Intelligent Robots and Systems | 2011

Model for unfolding laundry using interactive perception

Bryan Willimon; Stan Birchfield; Ian D. Walker

We present an algorithm for automatically unfolding a piece of clothing. A piece of laundry is pulled in different directions at various points of the cloth in order to flatten it. Features of the cloth, including the peak region, corner locations, and continuity/discontinuity of the cloth, are extracted and used to determine a valid location and orientation at which to interact with it. In this paper we present a two-stage algorithm, introducing a novel solution to the unfolding/flattening problem using interactive perception. Simulations using 3D simulation software and experiments with robot hardware demonstrate the ability of the algorithm to flatten pieces of laundry from different starting configurations. These results show that the algorithm can flatten a piece of cloth from as little as 11.1% of the canonical configuration to 95.6%.


International Conference on Robotics and Automation | 2013

A new approach to clothing classification using mid-level layers

Bryan Willimon; Ian D. Walker; Stan Birchfield

We present a novel approach for classifying items from a pile of laundry. The classification procedure exploits color, texture, shape, and edge information from 2D and 3D local and global information for each article of clothing using a Kinect sensor. The key contribution of this paper is a novel method of classifying clothing, which we term L-M-H (more specifically L-C-S-H), using characteristics and selection masks. Essentially, the method decomposes the problem into high (H), low (L), and multiple mid-level (characteristics (C), selection masks (S)) layers and produces “local” solutions to solve the global classification problem. Experiments demonstrate the ability of the system to efficiently classify and label items into one of three categories (shirts, socks, or dresses). These results show that, on average, this new approach with mid-level layers achieves a true positive rate of 90%.
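The layered decomposition can be illustrated with a toy cascade. The characteristic and selection-mask functions below are placeholders (the paper's layers are learned from Kinect data, not hand-coded rules); the sketch only shows how local mid-level decisions combine into a high-level label:

```python
def characteristics_layer(feats):
    """Mid-level characteristics (C) from assumed low-level features
    (has_sleeves, is_small, is_long)."""
    has_sleeves, is_small, is_long = feats
    return {"sleeves": has_sleeves, "small": is_small, "long": is_long}

def selection_layer(chars):
    """Selection masks (S): each mask narrows the candidate categories."""
    if chars["small"]:
        return ["socks"]
    return ["shirts", "dresses"]

def high_layer(chars, candidates):
    """High level (H): resolve remaining ambiguity among candidates."""
    if candidates == ["socks"]:
        return "socks"
    return "dresses" if chars["long"] else "shirts"

def classify_layered(feats):
    chars = characteristics_layer(feats)
    return high_layer(chars, selection_layer(chars))

print(classify_layered((True, False, False)))  # shirts
print(classify_layered((False, True, False)))  # socks
```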


International Conference on Robotics and Automation | 2012

Occlusion-aware reconstruction and manipulation of 3D articulated objects

Xiaoxia Huang; Ian D. Walker; Stan Birchfield

We present a method to recover complete 3D models of articulated objects. Structure-from-motion techniques are used to capture 3D point cloud models of the object in two different configurations. A novel combination of Procrustes analysis and RANSAC facilitates a straightforward geometric approach to recovering the joint axes, as well as classifying them automatically as either revolute or prismatic. With the resulting articulated model, a robotic system is able to manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom. Because the models capture all sides of the object, they are occlusion-aware, enabling the robotic system to plan paths to parts of the object that are not visible in the current view. Our algorithm does not require prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of objects with both revolute and prismatic joints.
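The revolute-versus-prismatic decision can be sketched as follows. Once the relative motion of a part between the two configurations is expressed as a rotation R plus a translation, a near-zero rotation angle indicates a prismatic joint. The trace-based angle formula is standard; the threshold and the code itself are illustrative assumptions, not the paper's implementation:

```python
import math

def rotation_angle(R):
    """Rotation angle of a 3x3 rotation matrix, from its trace:
    angle = acos((trace(R) - 1) / 2)."""
    tr = R[0][0] + R[1][1] + R[2][2]
    # Clamp for numerical safety before acos.
    return math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))

def classify_joint(R, angle_threshold=0.05):
    """Pure translation (angle ~ 0) -> prismatic; otherwise revolute."""
    return "prismatic" if rotation_angle(R) < angle_threshold else "revolute"

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]       # pure translation
quarter_turn = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]  # 90 degrees about z
print(classify_joint(identity))      # prismatic
print(classify_joint(quarter_turn))  # revolute
```

In practice the rotation would come from a RANSAC-filtered Procrustes alignment of corresponding points, which tolerates the outliers a raw least-squares fit would not.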


Image and Vision Computing | 2010

Iris segmentation in non-ideal images using graph cuts

Shrinivas J. Pundlik; Damon L. Woodard; Stan Birchfield

A non-ideal iris image segmentation approach based on graph cuts is presented that uses both appearance and eye geometry information. A texture measure based on gradients is computed to discriminate between eyelash and non-eyelash regions; this measure, combined with image intensity differences between the iris, pupil, and the background (the region surrounding the iris), provides the cues for segmentation. The texture and intensity distributions for the various regions are learned by histogramming and explicit sampling of the pixels estimated to belong to the corresponding regions. The image is modeled as a Markov random field, and energy minimization is achieved via graph cuts to assign each image pixel one of four possible labels: iris, pupil, background, and eyelash. Furthermore, the iris region is modeled as an ellipse, and the best-fitting ellipse to the initial pixel-based iris segmentation is computed to further refine the segmented region. As a result, the iris region mask and the parameterized iris shape form the outputs of the proposed approach, allowing subsequent iris recognition steps to be performed on the segmented irises. The algorithm is unsupervised and can deal with non-ideality in iris images due to out-of-plane rotation of the eye, iris occlusion by the eyelids and eyelashes, multi-modal iris grayscale intensity distributions, and various illumination effects. The proposed segmentation approach is tested on several publicly available non-ideal near-infrared (NIR) iris image databases. We compare both the segmentation error and the resulting recognition error with several leading techniques, demonstrating significantly improved results with the proposed technique.
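The energy that graph cuts minimizes in a labeling problem like this one can be sketched in simplified form. The unary costs, the Potts smoothness weight `lam`, and the tiny two-pixel example are illustrative assumptions; in the paper the data terms come from learned intensity and texture distributions:

```python
LABELS = ("iris", "pupil", "background", "eyelash")

def mrf_energy(labels, unary, edges, lam=1.0):
    """labels: assigned label per pixel; unary: per-pixel dict of label costs;
    edges: neighboring pixel pairs, charged `lam` when labels disagree (Potts)."""
    data_term = sum(unary[p][labels[p]] for p in range(len(labels)))
    smooth_term = sum(lam for p, q in edges if labels[p] != labels[q])
    return data_term + smooth_term

# Two neighboring pixels: the first strongly iris-like, the second pupil-like.
unary = [
    {"iris": 0.1, "pupil": 2.0, "background": 2.0, "eyelash": 2.0},
    {"iris": 2.0, "pupil": 0.1, "background": 2.0, "eyelash": 2.0},
]
edges = [(0, 1)]
print(mrf_energy(["iris", "pupil"], unary, edges))  # data 0.2 + smoothness 1.0
```

Graph cuts finds the labeling minimizing this energy exactly for two labels, and via move-making expansions (e.g. alpha-expansion) for the four-label case.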


2013 IEEE Workshop on Robot Vision (WORV) | 2013

Real-time obstacle detection and avoidance in the presence of specular surfaces using an active 3D sensor

Brian Peasley; Stan Birchfield

This paper proposes a novel approach to obstacle detection and avoidance using a 3D sensor. We depart from the approach of previous researchers who use depth images from 3D sensors projected onto UV-disparity to detect obstacles. Instead, our approach relies on projecting 3D points onto the ground plane, which is estimated during a calibration step. A 2D occupancy map is then used to determine the presence of obstacles, from which translation and rotation velocities are computed to avoid the obstacles. Two innovations are introduced to overcome the limitations of the sensor: An infinite pole approach is proposed to hypothesize infinitely tall, thin obstacles when the sensor yields invalid readings, and a control strategy is adopted to turn the robot away from scenes that yield a high percentage of invalid readings. Together, these extensions enable the system to overcome the inherent limitations of the sensor. Experiments in a variety of environments, including dynamic objects, obstacles of varying heights, and dimly-lit conditions, show the ability of the system to perform robust obstacle avoidance in real time under realistic indoor conditions.
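The ground-plane projection and occupancy-map steps can be sketched as follows. Grid resolution, the ground-height cutoff, and the left/right turn rule are illustrative assumptions, not the paper's parameters:

```python
def occupancy_map(points, cell=0.5, size=8):
    """Project 3D points (x, y, z) in the robot frame (z up, y forward)
    onto the ground plane and mark occupied cells in a 2D grid."""
    grid = [[0] * size for _ in range(size)]
    for x, y, z in points:
        if z < 0.05:                      # ignore the ground plane itself
            continue
        col = int(x / cell) + size // 2   # robot centered horizontally
        row = int(y / cell)               # rows extend forward
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] = 1
    return grid

def turn_command(grid):
    """Steer away from the half of the map with more occupied cells."""
    size = len(grid)
    left = sum(grid[r][c] for r in range(size) for c in range(size // 2))
    right = sum(grid[r][c] for r in range(size) for c in range(size // 2, size))
    return "turn_right" if left > right else "turn_left"

# An obstacle ahead and to the left; one ground point is filtered out.
pts = [(-0.6, 1.0, 0.4), (-0.7, 1.2, 0.8), (0.0, 2.0, 0.0)]
print(turn_command(occupancy_map(pts)))  # turn_right
```

The paper's "infinite pole" extension would additionally mark cells as occupied wherever the sensor returns invalid readings, so unseen regions are treated as infinitely tall obstacles rather than free space.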


International Conference on Robotics and Automation | 2013

3D non-rigid deformable surface estimation without feature correspondence

Bryan Willimon; Ian D. Walker; Stan Birchfield

We propose an algorithm that extends our previous work to estimate the current configuration of a non-rigid object using energy minimization and graph cuts. Our approach removes the need for feature correspondence or texture information and extends the boundary energy term. The object segmentation process is improved by using graph cuts along with a skin detector. We introduce an automatic mesh generator that provides a triangular mesh encapsulating the entire non-rigid object without predefined values. Our approach also handles in-plane rotation by reinitializing the mesh after data has been lost in the image sequence. Results demonstrate the proposed algorithm on a dataset consisting of seven shirts, two pairs of shorts, two posters, and a pair of pants.


American Control Conference | 2011

Robot crowd navigation using predictive position fields in the potential function framework

Ninad Pradhan; Timothy C. Burg; Stan Birchfield

A potential-function-based path planner is proposed for a mobile robot to autonomously navigate an area crowded with people. Path planners based on potential functions have been essentially static, with very limited representation of obstacle motion in their navigation model. These static formulations do not exploit predicted workspace configurations to improve the performance of the planner. This paper proposes the use of an elliptical region signifying the predicted position and direction of motion of an obstacle; the repulsive potential caused by the obstacle is defined relative to this elliptical field. An analytic switch is made when the robot enters the predicted elliptical zone of the obstacle. The development of navigation functions makes it possible to design a potential-based planner that is guaranteed to converge to the target.
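The elliptical predicted-position field can be sketched as follows. The ellipse semi-axes, the gain, and the inverse-level repulsion are illustrative assumptions; the point of the sketch is that repulsion is measured relative to an ellipse aligned with the obstacle's predicted direction of motion rather than a circle around its current position:

```python
import math

def elliptical_level(robot, center, heading, a=2.0, b=1.0):
    """Level-set value of the predicted-zone ellipse at the robot's position:
    < 1 inside the zone, 1 on its boundary, > 1 outside. The major axis
    (length a) points along the obstacle's predicted heading."""
    dx, dy = robot[0] - center[0], robot[1] - center[1]
    # Rotate the offset into the ellipse frame.
    u = dx * math.cos(heading) + dy * math.sin(heading)
    v = -dx * math.sin(heading) + dy * math.cos(heading)
    return (u / a) ** 2 + (v / b) ** 2

def repulsive_potential(robot, center, heading, gain=1.0):
    """Repulsion grows as the robot moves deeper into the predicted zone."""
    s = elliptical_level(robot, center, heading)
    return gain / s if s > 1e-9 else float("inf")

# Robot directly ahead of an obstacle moving along +x: inside the zone,
# so it is repelled more strongly than a robot off to the side.
print(elliptical_level((1.0, 0.0), (0.0, 0.0), 0.0))   # < 1, inside
print(repulsive_potential((0.0, 3.0), (0.0, 0.0), 0.0))  # weaker, outside
```

Stretching the field along the heading is what lets the planner react to where the obstacle will be, not just where it is.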
