Publications


Featured research published by Stanley T. Birchfield.


Computer Vision and Pattern Recognition | 2005

Spatiograms versus histograms for region-based tracking

Stanley T. Birchfield; Sriram Rangarajan

We introduce the concept of a spatiogram, which is a generalization of a histogram that includes potentially higher order moments. A histogram is a zeroth-order spatiogram, while second-order spatiograms contain spatial means and covariances for each histogram bin. This spatial information still allows quite general transformations, as in a histogram, but captures a richer description of the target to increase robustness in tracking. We show how to use spatiograms in kernel-based trackers, deriving a mean shift procedure in which individual pixels vote not only for the amount of shift but also for its direction. Experiments show improved tracking results compared with histograms, using both mean shift and exhaustive local search.
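The descriptor itself is straightforward to build. Below is a minimal Python/NumPy sketch (not the paper's implementation) of a second-order spatiogram for a grayscale region: per-bin counts as in a histogram, plus the spatial mean and covariance of the pixels in each bin. The bin count and uniform intensity binning are illustrative assumptions.

```python
import numpy as np

def spatiogram(image, n_bins=16):
    """Second-order spatiogram of a grayscale region (sketch).

    For each intensity bin, store the pixel count, the mean pixel
    position, and the spatial covariance of the pixel positions.
    A plain histogram keeps only the counts (zeroth order).
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    bins = np.minimum((image.ravel().astype(float) / 256.0 * n_bins).astype(int),
                      n_bins - 1)

    counts = np.zeros(n_bins)
    means = np.zeros((n_bins, 2))
    covs = np.tile(np.eye(2), (n_bins, 1, 1))   # identity for empty/singleton bins
    for b in range(n_bins):
        pts = coords[bins == b]
        counts[b] = len(pts)
        if len(pts) > 0:
            means[b] = pts.mean(axis=0)
        if len(pts) > 1:
            covs[b] = np.cov(pts, rowvar=False)
    counts /= counts.sum()                       # normalize like a histogram
    return counts, means, covs
```

In the tracker described above, these per-bin means and covariances weight the mean-shift votes; only the descriptor construction is shown here.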


IEEE Transactions on Intelligent Transportation Systems | 2008

Real-Time Incremental Segmentation and Tracking of Vehicles at Low Camera Angles Using Stable Features

Stanley T. Birchfield

We present a method for segmenting and tracking vehicles on highways using a camera that is relatively low to the ground. At such low angles, 3-D perspective effects cause significant changes in appearance over time, as well as severe occlusions by vehicles in neighboring lanes. Traditional approaches to occlusion reasoning assume that the vehicles initially appear well separated in the image; however, in our sequences, it is not uncommon for vehicles to enter the scene partially occluded and remain so throughout. By utilizing a 3-D perspective mapping from the scene to the image, along with a plumb line projection, we are able to distinguish a subset of features whose 3-D coordinates can be accurately estimated. These features are then grouped to yield the number and locations of the vehicles, and standard feature tracking is used to maintain the locations of the vehicles over time. Additional features are then assigned to these groups and used to classify vehicles as cars or trucks. Our technique uses a single grayscale camera beside the road, incrementally processes image frames, works in real time, and produces vehicle counts with over 90% accuracy on challenging sequences.
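As a rough illustration of the grouping step only, the sketch below (a simplification, not the paper's plumb-line machinery) assumes a known 3x3 image-to-road homography `H` and groups tracked features whose estimated road-plane coordinates lie close together; the function names and the gap threshold are hypothetical.

```python
import numpy as np

def to_road_plane(pts_img, H):
    """Map image points (N, 2) to road-plane coordinates in meters,
    assuming H is a 3x3 image-to-road homography (hypothetical input)."""
    pts_h = np.hstack([pts_img, np.ones((len(pts_img), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def group_features(pts_img, H, gap=2.0):
    """Greedy grouping of tracked features into vehicles by proximity of
    their estimated road-plane coordinates (a stand-in for the paper's
    plumb-line-based grouping).  Returns lists of feature indices."""
    road = to_road_plane(pts_img, H)
    order = np.argsort(road[:, 0])              # sort along the road direction
    groups, current = [], [order[0]]
    for i in order[1:]:
        if np.linalg.norm(road[i] - road[current[-1]]) < gap:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups
```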


Computer Vision and Pattern Recognition | 2005

Vehicle segmentation and tracking from a low-angle off-axis camera

Shrinivas J. Pundlik; Stanley T. Birchfield

We present a novel method for visually monitoring a highway when the camera is relatively low to the ground and on the side of the road. In such a case, occlusion and the perspective effects due to the heights of the vehicles cannot be ignored. Features are detected and tracked throughout the image sequence, and then grouped together using a multilevel homography, which is an extension of the standard homography to the low-angle situation. We derive a concept called the relative height constraint that makes it possible to estimate the 3D height of feature points on the vehicles from a single camera, a key part of the technique. Experimental results on several different highways demonstrate the system's ability to successfully segment and track vehicles at low angles, even in the presence of severe occlusion and significant perspective changes.


International Conference on Robotics and Automation | 2006

Qualitative vision-based mobile robot navigation

Zhichao Chen; Stanley T. Birchfield

We present a novel, simple algorithm for mobile robot navigation. Using a teach-replay approach, the robot is manually led along a desired path in a teaching phase, then the robot autonomously follows that path in a replay phase. The technique requires a single off-the-shelf, forward-looking camera with no calibration (including no calibration for lens distortion). Feature points are automatically detected and tracked throughout the image sequence, and the feature coordinates in the replay phase are compared with those computed previously in the teaching phase to determine the turning commands for the robot. The algorithm is entirely qualitative in nature, requiring no map of the environment, no image Jacobian, no homography, no fundamental matrix, and no assumption about a flat ground plane. Experimental results demonstrate the capability of autonomous navigation in both indoor and outdoor environments, on both flat and slanted surfaces, with dynamic occluding objects, for distances over 100 m.
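A minimal sketch of the qualitative idea (not the paper's exact decision rule): each tracked feature votes according to how it must shift horizontally to reach the coordinate recorded in the destination teaching image, and the aggregate sign is used as the steering direction.

```python
import numpy as np

def turn_direction(curr_x, dest_x):
    """Per-feature qualitative turning votes (sketch).

    curr_x, dest_x : arrays of horizontal feature coordinates (pixels,
    relative to the image center) in the current replay frame and in
    the destination teaching image.  Each feature votes -1, 0, or +1
    according to the sign of the horizontal shift it still needs; the
    sign of the summed votes is returned.  Mapping that sign to a
    left/right command depends on the camera convention.
    """
    votes = np.sign(dest_x - curr_x)
    return int(np.sign(votes.sum()))
```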


IEEE Transactions on Robotics | 2009

Qualitative Vision-Based Path Following

Zhichao Chen; Stanley T. Birchfield

We present a simple approach for vision-based path following for a mobile robot. Based upon a novel concept called the funnel lane, the coordinates of feature points during the replay phase are compared with those obtained during the teaching phase in order to determine the turning direction. Increased robustness is achieved by coupling the feature coordinates with odometry information. The system requires a single off-the-shelf, forward-looking camera with no calibration (either external or internal, including lens distortion). Implicit calibration of the system is needed only in the form of a single controller gain. The algorithm is qualitative in nature, requiring no map of the environment, no image Jacobian, no homography, no fundamental matrix, and no assumption about a flat ground plane. Experimental results demonstrate the capability of real-time autonomous navigation in both indoor and outdoor environments and on flat, slanted, and rough terrain with dynamic occluding objects for distances of hundreds of meters. We also demonstrate that the same approach works with wide-angle and omnidirectional cameras with only slight modification.


IEEE Transactions on Speech and Audio Processing | 2005

Microphone array position calibration by basis-point classical multidimensional scaling

Stanley T. Birchfield; Amarnag Subramanya

Classical multidimensional scaling (MDS) is a global, noniterative technique for finding coordinates of points given their interpoint distances. We describe the algorithm and show how it yields a simple, inexpensive method for calibrating an array of microphones with a tape measure (or similar measuring device). We present an extension to the basic algorithm, called basis-point classical MDS (BCMDS), which handles the case when many of the distances are unavailable, thus yielding a technique that is practical for microphone arrays with a large number of microphones. We also show that BCMDS, when combined with a calibration target consisting of four synchronized sound sources, can be used for automatic calibration via time-delay estimation. We evaluate the accuracy of both classical MDS and BCMDS, investigating the sensitivity of the algorithms to noise and to the design parameters to yield insight as to the choice of those parameters. Our results validate the practical applicability of the algorithms, showing that errors on the order of 10-20 mm can be achieved in real scenarios.
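Classical MDS itself reduces to a double centering followed by an eigendecomposition. A minimal NumPy sketch for the full-distance-matrix case is given below; the BCMDS extension for missing distances is not shown.

```python
import numpy as np

def classical_mds(D, dim=3):
    """Recover point coordinates (up to a rigid transform) from a full
    matrix of pairwise distances via classical MDS.

    D : (n, n) symmetric matrix of inter-microphone distances.
    Returns an (n, dim) array of coordinates.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:dim]      # largest eigenvalues first
    L = np.sqrt(np.maximum(eigvals[idx], 0.0))
    return eigvecs[:, idx] * L
```

The recovered coordinates are determined only up to rotation, translation, and reflection, which is sufficient for describing the array geometry.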


Computer Vision and Pattern Recognition | 2008

Non-ideal iris segmentation using graph cuts

Shrinivas J. Pundlik; Damon L. Woodard; Stanley T. Birchfield

A non-ideal iris segmentation approach using graph cuts is presented. Unlike many existing algorithms for iris localization which extensively utilize eye geometry, the proposed approach is predominantly based on image intensities. In a step-wise procedure, first eyelashes are segmented from the input images using image texture, then the iris is segmented using grayscale information, followed by a post-processing step that utilizes eye geometry to refine the results. A preprocessing step removes specular reflections in the iris, and image gradients in a pixel neighborhood are used to compute texture. The image is modeled as a Markov random field, and a graph cut based energy minimization algorithm [2] is used to separate textured and untextured regions for eyelash segmentation, as well as to segment the pupil, iris, and background using pixel intensity values. The algorithm is automatic, unsupervised, and efficient at producing smooth segmentation regions on many non-ideal iris images. A comparison of the estimated iris region parameters with the ground truth data is provided.
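For context, the sketch below shows a bare two-label graph-cut construction on a tiny grayscale patch using networkx's min-cut. It omits the paper's texture term, eyelash stage, and geometric refinement, and a per-pixel graph with a generic solver is far too slow for real images, where a dedicated max-flow implementation is used; the label means and smoothness weight are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def binary_graph_cut(img, mu_fg, mu_bg, smooth=2.0):
    """Two-label segmentation of a small grayscale image by min-cut.

    Data term: squared difference from the foreground/background mean
    intensities.  Smoothness term: constant Potts penalty between
    4-connected neighbors.  Returns a boolean foreground mask.
    """
    h, w = img.shape
    G = nx.DiGraph()
    src, snk = "s", "t"
    for y in range(h):
        for x in range(w):
            p = (y, x)
            # t-links: s->p is paid if p is labeled background,
            # p->t is paid if p is labeled foreground.
            G.add_edge(src, p, capacity=float((img[y, x] - mu_bg) ** 2))
            G.add_edge(p, snk, capacity=float((img[y, x] - mu_fg) ** 2))
            # n-links: Potts smoothness between 4-connected neighbors.
            for q in [(y + 1, x), (y, x + 1)]:
                if q[0] < h and q[1] < w:
                    G.add_edge(p, q, capacity=smooth)
                    G.add_edge(q, p, capacity=smooth)
    _, (reachable, _) = nx.minimum_cut(G, src, snk)
    mask = np.zeros((h, w), dtype=bool)
    for node in reachable:
        if node != src:
            mask[node] = True        # source side = foreground label
    return mask
```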


International Conference on Acoustics, Speech, and Signal Processing | 2005

Acoustic localization by interaural level difference

Stanley T. Birchfield; Rajitha Gangishetty

Interaural level difference (ILD) is an important cue for acoustic localization. Although its behavior has been studied extensively in natural systems, it remains an untapped resource for computer-based systems. We investigate the possibility of using ILD for acoustic localization, deriving constraints on the location of a sound source given the relative energy level of the signals received by two microphones. We then present an algorithm for computing the sound source location by combining likelihood functions, one for each microphone pair. Experimental results show that accurate acoustic localization can be achieved using ILD alone.
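Under a free-field, inverse-square propagation assumption (an assumption of this sketch, not necessarily the paper's exact model), the pairwise likelihood combination can be illustrated with a simple grid search over candidate source positions:

```python
import numpy as np

def ild_likelihood_map(mic_pos, energies, grid, sigma=0.1):
    """Combine ILD likelihoods over microphone pairs on a grid (sketch).

    mic_pos  : (M, 2) microphone coordinates in meters
    energies : (M,) received signal energies (positive)
    grid     : (G, 2) candidate source positions
    Free-field assumption: the energy ratio of a pair equals the
    inverse squared-distance ratio, so each pair contributes a Gaussian
    log-likelihood around its measured log energy ratio.
    """
    score = np.zeros(len(grid))
    M = len(mic_pos)
    for i in range(M):
        for j in range(i + 1, M):
            measured = np.log(energies[i] / energies[j])
            di = np.linalg.norm(grid - mic_pos[i], axis=1) + 1e-9
            dj = np.linalg.norm(grid - mic_pos[j], axis=1) + 1e-9
            predicted = 2.0 * np.log(dj / di)   # E_i / E_j ~ (d_j / d_i)^2
            score += -((measured - predicted) ** 2) / (2 * sigma ** 2)
    return score

# The grid point with the maximum combined score is taken as the estimate:
#   estimate = grid[np.argmax(score)]
```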


Intelligent Robots and Systems | 2007

Person following with a mobile robot using binocular feature-based tracking

Zhichao Chen; Stanley T. Birchfield

We present the Binocular Sparse Feature Segmentation (BSFS) algorithm for vision-based person following with a mobile robot. BSFS uses Lucas-Kanade feature detection and matching in order to determine the location of the person in the image and thereby control the robot. Matching is performed between two images of a stereo pair, as well as between successive video frames. We use the Random Sample Consensus (RANSAC) scheme for segmenting the sparse disparity map and estimating the motion models of the person and background. By fusing motion and stereo information, BSFS handles difficult situations such as dynamic backgrounds, out-of-plane rotation, and similar disparity and/or motion between the person and background. Unlike color-based approaches, the person is not required to wear clothing with a different color from the environment. Our system is able to reliably follow a person in complex dynamic, cluttered environments in real time.
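One building block, RANSAC fitting of a dominant translational motion to matched features, can be sketched as follows; the translational model and thresholds here are illustrative, and the full BSFS pipeline additionally fuses stereo disparity.

```python
import numpy as np

def ransac_translation(prev_pts, curr_pts, thresh=2.0, iters=200, rng=None):
    """Fit a 2-D translation to matched feature points with RANSAC.

    Returns (translation, inlier_mask).  Inliers follow the dominant
    motion; the remaining features are candidates for a second model
    (e.g. the person vs. the background in BSFS-style segmentation).
    """
    rng = np.random.default_rng(rng)
    disp = curr_pts - prev_pts                     # per-feature displacement
    best_inliers = np.zeros(len(disp), dtype=bool)
    for _ in range(iters):
        t = disp[rng.integers(len(disp))]          # hypothesis from one sample
        inliers = np.linalg.norm(disp - t, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    best_t = disp[best_inliers].mean(axis=0)       # refine on all inliers
    return best_t, best_inliers
```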


IEEE Transactions on Intelligent Transportation Systems | 2010

A Taxonomy and Analysis of Camera Calibration Methods for Traffic Monitoring Applications

Stanley T. Birchfield

Many vision-based automatic traffic-monitoring systems require a calibrated camera to compute the speeds and length-based classifications of tracked vehicles. A number of techniques, both manual and automatic, have been proposed for performing such calibration, but no study has yet focused on evaluating the relative strengths of these alternatives. We present a taxonomy for roadside camera calibration that not only encompasses the existing methods (VVW, VWH, and VWL) but also includes several novel ones (VVH, VVL, VLH, VVD, VWD, and VHD). We also introduce an overconstrained (OC) approach that takes all the available measurements into account, reducing error and overcoming the inherent, oft-neglected ambiguity in single-vanishing-point solutions, which we analyze and propose several ways of resolving. Our analysis includes the relative tradeoffs between two-vanishing-point solutions, single-vanishing-point solutions, and solutions that require the distance to the road to be known. The various methods are compared using simulations and experiments with real images, showing that methods using a known length generally outperform the others in terms of error and that the OC method reduces errors even further.
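One building block shared by the two-vanishing-point methods in the taxonomy is recovering the focal length from the vanishing points of two orthogonal ground-plane directions. The sketch below shows only that step, assuming zero skew, square pixels, and a known principal point; the remaining tilt, pan, height, and scale recovery is omitted.

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, principal_point):
    """Focal length (pixels) from the vanishing points of two orthogonal
    ground-plane directions, assuming zero skew, square pixels, and a
    known principal point c.  Orthogonality of the back-projected rays
    gives (v1 - c) . (v2 - c) + f^2 = 0.
    """
    c = np.asarray(principal_point, float)
    u = np.asarray(v1, float) - c
    v = np.asarray(v2, float) - c
    f_sq = -np.dot(u, v)
    if f_sq <= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return np.sqrt(f_sq)
```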
