Todd Schoepflin
University of Washington
Publications
Featured research published by Todd Schoepflin.
International Conference on Intelligent Transportation Systems | 2003
Todd Schoepflin; Daniel J. Dailey
In this paper, we present a new three-stage algorithm to calibrate roadside traffic management cameras and track vehicles to create a traffic speed sensor. The algorithm first estimates the camera position relative to the roadway using the motion and edges of the vehicles. Given the camera position, the algorithm then calibrates the camera by estimating the lane boundaries and the vanishing point of the lines along the roadway. The algorithm transforms the image coordinates from the vehicle tracker into real-world coordinates using our simplified camera model. We present results that demonstrate the ability of our algorithm to produce good estimates of the mean vehicle speed in a lane of traffic.
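As a rough illustration of the flat-road geometry that a simplified roadside camera model of this kind typically relies on, the sketch below maps an image row to a distance along the road plane. All parameter values and function names here are hypothetical, not taken from the paper.

```python
import math

def image_row_to_road_distance(v, v0, f, h, tilt):
    """Map an image row v (pixels, measured down from the top of the frame)
    to a distance along a flat road plane (same units as h).

    v0   -- image row of the principal point
    f    -- focal length in pixels
    h    -- camera height above the road
    tilt -- camera tilt below horizontal, in radians
    """
    # Angle of the viewing ray below the horizontal for this row.
    ray_angle = tilt + math.atan((v - v0) / f)
    if ray_angle <= 0:
        raise ValueError("row is at or above the horizon")
    # Flat-road geometry: the ray meets the ground at h / tan(angle).
    return h / math.tan(ray_angle)

# A vehicle approaching the camera appears at increasing row numbers;
# the mapped road distance shrinks accordingly.
d_far = image_row_to_road_distance(v=300, v0=240, f=800, h=10.0, tilt=0.1)
d_near = image_row_to_road_distance(v=400, v0=240, f=800, h=10.0, tilt=0.1)
assert d_near < d_far
```

Once image rows map to road distances this way, the per-frame displacement of a tracked vehicle converts directly into a speed.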
IEEE Transactions on Circuits and Systems for Video Technology | 2001
Todd Schoepflin; Vikram Chalana; David R. Haynor; Yongmin Kim
We have developed a new contour-based tracking algorithm that uses a sequence of template deformations to model and track generic video objects. We organize the deformations into a hierarchy: globally affine deformations, piecewise (locally) affine deformations, and arbitrary smooth deformations (snakes). This design enables the algorithm to track objects whose pose and shape change in time compared to the template. If the object is not a rigid body, we model the temporal evolution of its shape by updating the entire template after each video frame; otherwise, we only update the pose of the object. Experimental results demonstrate that our method is able to track a variety of video objects, including those undergoing rapid changes. We quantitatively compare our algorithm with its constituent pieces (e.g., the snake algorithm) and show that the complete algorithm can track objects with moving parts for a longer duration than partial versions of the hierarchy. The algorithm could benefit from a higher-level method that dynamically adjusts the parameters and template deformations to further improve segmentation accuracy. The hierarchical nature of this algorithm provides a framework that offers a modular approach for the design and enhancement of future object-tracking algorithms.
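The first stage of such a hierarchy, a global affine fit between template and target contour points, can be sketched as an ordinary least-squares problem. This is a minimal illustration under assumed point correspondences, not the paper's implementation.

```python
import numpy as np

def fit_global_affine(template_pts, target_pts):
    """Least-squares fit of an affine map (A, t) so that
    A @ p + t ~= q for corresponding contour points p -> q."""
    P = np.asarray(template_pts, dtype=float)
    Q = np.asarray(target_pts, dtype=float)
    # Design matrix [x y 1]; solve one column per output coordinate.
    X = np.hstack([P, np.ones((len(P), 1))])
    coeffs, *_ = np.linalg.lstsq(X, Q, rcond=None)
    A = coeffs[:2].T   # 2x2 linear part (rotation/scale/shear)
    t = coeffs[2]      # translation
    return A, t

# Synthetic check: recover a known scale-by-2 plus shift (3, -1).
tmpl = [(0, 0), (1, 0), (1, 1), (0, 1)]
tgt = [(2 * x + 3, 2 * y - 1) for x, y in tmpl]
A, t = fit_global_affine(tmpl, tgt)
assert np.allclose(A, [[2, 0], [0, 2]]) and np.allclose(t, [3, -1])
```

Residuals left over after the global fit would then be handled by the finer stages (piecewise affine deformations, then snakes).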
IEEE Intelligent Vehicles Symposium | 2004
Todd Schoepflin; Daniel J. Dailey
In this paper we present a simplified model for traffic management cameras and a calibration method based on a known distance along the road. We then describe how to estimate this interval from the images using an autocorrelation method applied to lane marker features. Assuming the camera has been calibrated and the vehicle lanes have been identified, we also present a method to track a group of vehicles in a lane and estimate the space mean speed using a cross-correlation technique. The algorithm is appropriate for building a speed sensor with fine time resolution (i.e., 200 ms); 20-second averages are shown to be equivalent to data from two different inductance loops. The results for several test cases show that the speed estimation method performs well under a variety of challenging weather, lighting, and traffic conditions.
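A cross-correlation speed estimate of this general kind can be sketched as follows: collapse each frame's lane strip to a 1-D profile, find the sample shift that best aligns two profiles taken a short time apart, and convert that shift to a speed. The function names and parameter values are illustrative assumptions, not the paper's code.

```python
def best_lag(profile_a, profile_b, max_lag):
    """Return the shift (in samples) of profile_b relative to profile_a
    that maximizes their cross-correlation over the overlapping region."""
    best, best_score = 0, float("-inf")
    n = len(profile_a)
    for lag in range(-max_lag, max_lag + 1):
        score = sum(profile_a[i] * profile_b[i + lag]
                    for i in range(n)
                    if 0 <= i + lag < n)
        if score > best_score:
            best, best_score = lag, score
    return best

def speed_estimate(profile_t0, profile_t1, metres_per_sample, dt, max_lag=50):
    """Convert the best-matching shift between two lane profiles
    (dt seconds apart) into a speed in metres per second."""
    lag = best_lag(profile_t0, profile_t1, max_lag)
    return lag * metres_per_sample / dt

# A "vehicle" bump that moves 5 samples between frames 0.2 s apart.
base = [0.0] * 100
for i in range(40, 45):
    base[i] = 1.0
shifted = [0.0] * 100
for i in range(45, 50):
    shifted[i] = 1.0
v = speed_estimate(base, shifted, metres_per_sample=0.5, dt=0.2)
# 5 samples * 0.5 m / 0.2 s = 12.5 m/s
```

With 200 ms between profile pairs, averaging such per-pair estimates over a 20-second window would give the aggregate figures compared against loop data.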
International Conference on Intelligent Transportation Systems | 2002
Todd Schoepflin; Daniel J. Dailey
We present an algorithm to calibrate roadside traffic management cameras to create a traffic speed sensor. We present a simplified camera model that can produce good single vehicle speed estimates.
Transportation Research Record | 2003
Todd Schoepflin; Daniel J. Dailey
A new algorithm is presented for estimating speed from roadside cameras in uncongested traffic, congested traffic, favorable weather conditions, and adverse weather conditions. Individual vehicle lanes are identified and horizontal vehicle features are emphasized by using a gradient operator. The features are projected into a one-dimensional subspace and transformed into a linear coordinate system by using a simple camera model. A correlation technique is used to summarize the movement of features through a group of images and estimate mean speed for each lane of vehicles.
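The projection step described above, emphasizing horizontal vehicle edges with a gradient operator and collapsing a lane strip to one dimension, can be sketched like this. The helper name and the toy image are hypothetical.

```python
def lane_profile(image, col_start, col_end):
    """Collapse a lane strip to a 1-D profile: emphasize horizontal
    vehicle edges with a vertical gradient, then sum across the lane."""
    profile = []
    for r in range(len(image)):
        if r == 0:
            profile.append(0.0)
            continue
        # Absolute row-to-row difference, summed over the lane's columns.
        g = sum(abs(image[r][c] - image[r - 1][c])
                for c in range(col_start, col_end))
        profile.append(float(g))
    return profile

# Toy 5x6 image: a bright band (rows 2-3) produces gradient peaks
# at its top and bottom edges.
img = [[0] * 6, [0] * 6, [9] * 6, [9] * 6, [0] * 6]
prof = lane_profile(img, 1, 5)
# -> [0.0, 0.0, 36.0, 0.0, 36.0]
```

These 1-D profiles are what the correlation stage then compares across frames to recover mean lane speed.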
International Conference on Intelligent Transportation Systems | 2007
Todd Schoepflin; Daniel J. Dailey
In this paper we present a simplified model for traffic management cameras and a calibration method based on a known distance along the road. We then describe how to estimate this interval from the images using an autocorrelation method applied to lane marker features. Assuming the camera has been calibrated and the vehicle lanes have been identified, we also present a method to track a group of vehicles in a lane and estimate the space mean speed using a cross-correlation technique. The algorithm is appropriate for building a speed sensor with fine time resolution (i.e., 200 ms); 20-second averages are shown to be equivalent to data from two different inductance loops. The results for several test cases show that the speed estimation method performs well under a variety of challenging weather, lighting, and traffic conditions.
Transportation Research Record | 2004
Todd Schoepflin; Daniel J. Dailey
An algorithm to estimate speed from traffic surveillance cameras in a variety of traffic congestion, weather, and lighting conditions is presented. The features from the images are projected into a one-dimensional sub-space and transformed into a linear coordinate system by using a simplified camera model. A cross-correlation technique is used to summarize the movement of features through a group of images and to estimate mean speed for each lane of vehicles. A Kalman filter technique with a set of maximum-likelihood optimal parameters is used to estimate the traffic speed by lane to create an optimal space-averaged speed.
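A minimal sketch of Kalman filtering applied to noisy per-frame lane-speed measurements is shown below. The random-walk model, noise values, and outlier data are illustrative assumptions; the paper's maximum-likelihood parameter choice is not reproduced here.

```python
def kalman_smooth_speeds(measurements, q=0.5, r=4.0, x0=None, p0=100.0):
    """One-dimensional Kalman filter with a random-walk speed model:
    x_k = x_{k-1} + w (process noise q), z_k = x_k + v (measurement noise r).
    Returns the filtered speed estimate after each measurement."""
    x = measurements[0] if x0 is None else x0
    p = p0
    out = []
    for z in measurements:
        # Predict: speed persists, uncertainty grows.
        p = p + q
        # Update with the new cross-correlation speed measurement.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        out.append(x)
    return out

# One outlier frame (40 m/s) among ~25 m/s measurements is damped
# rather than passed straight through.
noisy = [25.0, 27.0, 24.0, 26.0, 40.0, 25.0]
smooth = kalman_smooth_speeds(noisy)
```

The filtered sequence tracks the ~25 m/s level while pulling the outlier frame well below its raw value, which is the behavior wanted from a space-averaged lane speed.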
Visual Communications and Image Processing | 2000
Todd Schoepflin; Christopher Lau; Rohit Garg; Donglok Kim; Yongmin Kim
We present an integrated research environment (RAVEN) that we have developed for developing and testing object tracking algorithms. As a Windows application, RAVEN provides a user interface for loading and viewing video sequences and interacting with the segmentation and object tracking algorithms, which are included at run time as plug-ins. The plug-ins interact with RAVEN via a programming interface, enabling algorithm developers to concentrate on their ideas rather than on the user interface. Over the past two years, RAVEN has greatly enhanced the productivity of our researchers, enabling them to create a variety of new algorithms and to extend RAVEN's capabilities via plug-ins. Examples include several object tracking algorithms, a live-wire segmentation algorithm, a methodology for the evaluation of segmentation quality, and even a mediaprocessor implementation of an object tracker. Once an algorithm is implemented, RAVEN makes it easy to present the results, since it provides several mask display modes and output options for both image and video. We have found that RAVEN facilitates the entire research process, from prototyping an algorithm to visualization of the results to a mediaprocessor implementation.
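The host/plug-in split described above, where the host owns the UI and dispatches frames while algorithms register themselves through a fixed interface, can be sketched in miniature. The class and method names here are hypothetical and do not reflect RAVEN's actual C++/Windows API.

```python
class TrackerPlugin:
    """Interface a tracking plug-in implements; the host calls
    process() once per frame and owns all UI concerns itself."""
    name = "base"

    def process(self, frame):
        raise NotImplementedError

class Host:
    """Minimal host: plug-ins register at load time and the host
    dispatches frames to whichever plug-in the user selects."""
    def __init__(self):
        self.plugins = {}

    def register(self, plugin):
        self.plugins[plugin.name] = plugin

    def run(self, name, frames):
        return [self.plugins[name].process(f) for f in frames]

class InvertTracker(TrackerPlugin):
    """Toy plug-in: inverts pixel values in an 8-bit frame."""
    name = "invert"

    def process(self, frame):
        return [255 - px for px in frame]

host = Host()
host.register(InvertTracker())
out = host.run("invert", [[0, 128, 255]])
# -> [[255, 127, 0]]
```

Keeping the algorithm behind a narrow interface like this is what lets new trackers be developed and swapped in without touching the host application.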
IEEE Transactions on Circuits and Systems for Video Technology | 2001
Hyun Wook Park; Todd Schoepflin; Yongmin Kim
Archive | 2001
Todd Schoepflin; David R. Haynor; John D. Sahr; Yongmin Kim