Network


Latest external collaborations at the country level. Click on a dot to see the details.

Hotspot


Dive into the research topics where Chris Stauffer is active.

Publication


Featured research published by Chris Stauffer.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Learning patterns of activity using real-time tracking

Chris Stauffer; W.E.L. Grimson

Our goal is to develop a visual monitoring system that passively observes moving objects in a site and learns patterns of activity from those observations. For extended sites, the system will require multiple cameras. Thus, key elements of the system are motion tracking, camera coordination, activity classification, and event detection. In this paper, we focus on motion tracking and show how one can use observed motion to learn patterns of activity in a site. Motion segmentation is based on an adaptive background subtraction method that models each pixel as a mixture of Gaussians and uses an online approximation to update the model. The Gaussian distributions are then evaluated to determine which are most likely to result from a background process. This yields a stable, real-time outdoor tracker that reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. While a tracking system is unaware of the identity of any object it tracks, the identity remains the same for the entire tracking sequence. Our system leverages this information by accumulating joint co-occurrences of the representations within a sequence. These joint co-occurrence statistics are then used to create a hierarchical binary-tree classification of the representations. This method is useful for classifying sequences, as well as individual instances of activities in a site.
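The adaptive background-subtraction step described above — each pixel modeled as a mixture of Gaussians, updated with an online approximation, with the most probable Gaussians taken as background — can be sketched for a single grayscale pixel as follows. This is an illustrative simplification, not the paper's implementation; the parameter values (three components, learning rate 0.05, 2.5-sigma match threshold) are assumptions chosen for the sketch.

```python
import numpy as np

class PixelMoG:
    """Sketch of a per-pixel mixture-of-Gaussians background model (grayscale)."""

    def __init__(self, k=3, alpha=0.05, match_sigma=2.5, bg_thresh=0.7):
        self.alpha = alpha                    # online-approximation learning rate
        self.match_sigma = match_sigma        # match if within this many std devs
        self.bg_thresh = bg_thresh            # weight fraction attributed to background
        self.w = np.ones(k) / k               # mixture weights
        self.mu = np.linspace(0.0, 255.0, k)  # component means
        self.var = np.full(k, 900.0)          # component variances

    def update(self, x):
        """Fold pixel value x into the model; return True if x looks like background."""
        d = np.abs(x - self.mu)
        matched = d < self.match_sigma * np.sqrt(self.var)
        if matched.any():
            # update the closest matching Gaussian (online approximation to EM)
            m = int(np.argmin(np.where(matched, d / np.sqrt(self.var), np.inf)))
            self.w *= 1.0 - self.alpha
            self.w[m] += self.alpha
            self.mu[m] += self.alpha * (x - self.mu[m])
            self.var[m] += self.alpha * ((x - self.mu[m]) ** 2 - self.var[m])
            self.var[m] = max(self.var[m], 1e-4)
        else:
            # no match: replace the least probable Gaussian with a wide one at x
            m = int(np.argmin(self.w))
            self.mu[m], self.var[m], self.w[m] = x, 900.0, 0.05
        self.w /= self.w.sum()
        # rank components by w/sigma; the top ones covering bg_thresh weight are background
        order = np.argsort(-(self.w / np.sqrt(self.var)))
        cum = np.cumsum(self.w[order])
        n_bg = int(np.searchsorted(cum, self.bg_thresh)) + 1
        return m in order[:n_bg]
```

After a few hundred frames of a stable value, the model flags that value as background while a sudden new value is flagged as foreground, which is the behavior the tracker relies on.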


Computer Vision and Pattern Recognition | 1998

Using adaptive tracking to classify and monitor activities in a site

W.E.L. Grimson; Chris Stauffer; Raquel A. Romano; Lily Lee

We describe a vision system that monitors activity in a site over extended periods of time. The system uses a distributed set of sensors to cover the site, and an adaptive tracker detects multiple moving objects in the sensors. Our hypothesis is that motion tracking is sufficient to support a range of computations about site activities. We demonstrate using the tracked motion data to calibrate the distributed sensors, to construct rough site models, to classify detected objects, to learn common patterns of activity for different object classes, and to detect unusual activities.


Versus | 1999

Video surveillance of interactions

Yuri A. Ivanov; Chris Stauffer; Aaron F. Bobick; W.E.L. Grimson

This paper describes an automatic surveillance system that labels events and interactions in an outdoor environment. The system is designed to monitor activities in an open parking lot. It consists of three components: an adaptive tracker; an event generator, which maps object tracks onto a set of pre-determined discrete events; and a stochastic parser. The system performs segmentation and labeling of surveillance video of a parking lot and identifies person-vehicle interactions, such as pick-up and drop-off. The system presented in this paper was developed jointly by the MIT Media Lab and the MIT Artificial Intelligence Lab.


Computer Vision and Pattern Recognition | 2003

Automated multi-camera planar tracking correspondence modeling

Chris Stauffer; Kinh Tieu

This paper introduces a method for robustly estimating a planar tracking correspondence model (TCM) for a large camera network directly from tracking data and for employing said model to reliably track objects through multiple cameras. By exploiting the unique characteristics of tracking data, our method can reliably estimate a planar TCM in large environments covered by many cameras. It is robust to scenes with multiple simultaneously moving objects and limited visual overlap between the cameras. Our method introduces the capability of automatic calibration of large camera networks in which the topology of camera overlap is unknown and in which all cameras do not necessarily overlap. Quantitative results are shown for a five camera network in which the topology is not specified.
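For two overlapping views of a common ground plane, a planar tracking correspondence model amounts to a homography relating image coordinates across cameras. As a minimal illustration of that underlying model — not the paper's robust estimator, which works directly from tracking data — a direct linear transform (DLT) fit from point correspondences can be sketched like this:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT.
    No normalization or outlier handling -- a sketch of the model only."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of the stacked constraint matrix
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_homography(H, p):
    """Map a 2D point through H (homogeneous divide)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Four correspondences determine the homography; using more (as tracking data naturally supplies) turns the null-space computation into a least-squares fit.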


Computer Vision and Pattern Recognition | 2003

Estimating Tracking Sources and Sinks

Chris Stauffer

When tracking in a particular environment, objects tend to appear and disappear at certain locations. These locations may correspond to doors, garages, tunnel entrances, or even the edge of a camera view. A tracking system with knowledge of these locations is capable of improved initialization of tracking sequences, reconstitution of broken tracking sequences, and determination of tracking sequence termination. Further, knowledge of these locations is useful for activity-level descriptions of tracking sequences and for understanding relationships between non-overlapping camera views. This paper introduces a method for simultaneously solving these coupled problems: inferring the parameters of a source and sink model for a scene; and fixing broken tracking sequences and other tracking failures. A model selection criterion is also explained which allows determination of the number of sources and sinks in an environment. Results in multiple environments illustrate the effectiveness of this method.
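A source/sink model of this kind can be pictured as a mixture of Gaussians over the image locations where tracking sequences begin (or end), with the number of components chosen by a model selection criterion. The following is a hedged sketch — plain EM with spherical Gaussians and BIC as the selection criterion, not the paper's exact formulation:

```python
import numpy as np

def fit_gmm(pts, k, iters=50):
    """EM for a k-component spherical-Gaussian mixture on 2D points.
    Returns (means, variances, weights, log-likelihood)."""
    n = len(pts)
    # deterministic farthest-point initialization of the means
    mu = pts[:1].copy()
    for _ in range(1, k):
        d2 = ((pts[:, None, :] - mu[None]) ** 2).sum(-1).min(1)
        mu = np.vstack([mu, pts[np.argmax(d2)]])
    var = np.full(k, pts.var() + 1e-3)
    w = np.ones(k) / k
    for _ in range(iters):
        d2 = ((pts[:, None, :] - mu[None]) ** 2).sum(-1)          # (n, k)
        logp = np.log(w) - np.log(2 * np.pi * var) - d2 / (2 * var)
        mx = logp.max(1, keepdims=True)
        r = np.exp(logp - mx)
        r /= r.sum(1, keepdims=True)                              # responsibilities
        nk = r.sum(0) + 1e-9
        w = nk / n
        mu = (r[..., None] * pts[:, None, :]).sum(0) / nk[:, None]
        d2 = ((pts[:, None, :] - mu[None]) ** 2).sum(-1)
        var = np.maximum((r * d2).sum(0) / (2 * nk), 1e-3)
    logp = np.log(w) - np.log(2 * np.pi * var) - d2 / (2 * var)
    mx = logp.max(1, keepdims=True)
    ll = float((mx[:, 0] + np.log(np.exp(logp - mx).sum(1))).sum())
    return mu, var, w, ll

def num_sources(pts, kmax=4):
    """Pick the number of sources/sinks by BIC over candidate mixture sizes."""
    n = len(pts)
    best_k, best_bic = 1, np.inf
    for k in range(1, kmax + 1):
        *_, ll = fit_gmm(pts, k)
        n_params = 4 * k - 1  # 2 means + 1 variance + 1 weight per component, minus 1
        bic = -2.0 * ll + n_params * np.log(n)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k
```

Given track start points clustered around two doorways, the BIC-selected model recovers two sources; the fitted means then serve as the source locations for track initialization and repair.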


Computer Vision and Pattern Recognition | 2001

Similarity templates for detection and recognition

Chris Stauffer; W. Eric L. Grimson

This paper investigates applications of a new representation for images, the similarity template. A similarity template is a probabilistic representation of the similarity of pixels in an image patch. It has application to detection of a class of objects, because it is reasonably invariant to the color of a particular object. Further, it enables the decomposition of a class of objects into component parts over which robust statistics of color can be approximated. These regions can be used to create a factored color model that is useful for recognition. Detection results are shown on a system that learns to detect a class of objects (pedestrians) in static scenes based on examples of the object provided automatically by a tracking system. Applications of the factored color model to image indexing and anomaly detection are pursued on a database of images of pedestrians.


Workshop on Applications of Computer Vision | 2005

Learning to Track Objects Through Unobserved Regions

Chris Stauffer

As tracking systems become more effective at reliably tracking multiple objects over extended periods of time within single camera views and across overlapping camera views, increasing attention is being focused on tracking objects through periods where they are not observed. This paper investigates an unsupervised hypothesis testing method for learning the characteristics of objects passing unobserved from one observed location to another. This method not only reliably determines whether objects predictably pass from one location to another without performing explicit correspondence, but it approximates the likelihood of those transitions. It is robust to non-stationary traffic processes that result from traffic lights, vehicle grouping, and other non-linear vehicle-vehicle interactions. Synthetic data allows us to test and verify our results for complex traffic situations over multiple city blocks and contrast it with previous approaches.
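One way to picture the hypothesis test is to histogram the lags between departure events at one location and arrival events at another: a real transition shows up as a sharp peak at the typical transit time, while unrelated streams stay near a uniform-lag baseline. The statistic below is an illustrative toy, not the paper's method:

```python
import numpy as np

def transit_peak(departures, arrivals, max_lag=20.0, bin_w=1.0):
    """Histogram arrival-minus-departure lags in (0, max_lag]; return the peak
    lag and a crude z-score against a uniform-lag null hypothesis."""
    lags = [a - d for d in departures for a in arrivals if 0.0 < a - d <= max_lag]
    edges = np.arange(0.0, max_lag + bin_w, bin_w)
    hist, _ = np.histogram(lags, edges)
    expected = len(lags) / len(hist)  # uniform null: same count in every bin
    peak = int(hist.argmax())
    score = (hist[peak] - expected) / np.sqrt(expected + 1e-9)
    return edges[peak] + bin_w / 2.0, score
```

Because the test never matches individual objects, it sidesteps explicit correspondence, which is the property the paper exploits for unobserved regions.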


Proceedings of SPIE, the International Society for Optical Engineering | 2006

SeeCoast port surveillance

Michael Seibert; Bradley J. Rhodes; Neil A. Bomberger; Patricia O. Beane; Jason J. Sroka; Wendy Kogel; William Kreamer; Chris Stauffer; Linda Kirschner; Edmond Chalom; Michael Bosse; Robert Tillson

SeeCoast extends the US Coast Guard Port Security and Monitoring system by adding capabilities to detect, classify, and track vessels using electro-optic and infrared cameras, and also uses learned normalcy models of vessel activities in order to generate alert cues for the watch-standers when anomalous behaviors occur. SeeCoast fuses the video data with radar detections and Automatic Identification System (AIS) transponder data in order to generate composite fused tracks for vessels approaching the port, as well as for vessels already in the port. Then, SeeCoast applies rule-based and learning-based pattern recognition algorithms to alert the watch-standers to unsafe, illegal, threatening, and other anomalous vessel activities. The prototype SeeCoast system has been deployed to Coast Guard sites in Virginia. This paper provides an overview of the system and outlines the lessons learned to date in applying data fusion and automated pattern recognition technology to the port security domain.


Computer Vision and Pattern Recognition | 1999

Automatic hierarchical classification using time-based co-occurrences

Chris Stauffer

While a tracking system is unaware of the identity of any object it tracks, the identity remains the same for the entire tracking sequence. Our system leverages this information by using accumulated joint co-occurrences of the representations within the sequence to create a hierarchical binary-tree classifier of the representations. This classifier is useful for classifying sequences as well as individual instances. We illustrate the use of this method on two separate representations: the tracked object's position, movement, and size; and the tracked object's binary motion silhouettes.
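The co-occurrence accumulation and one level of the binary split can be sketched as follows. Here the split uses a standard spectral (Fiedler-vector) bipartition of the co-occurrence graph, an assumption made for illustration rather than the paper's exact criterion:

```python
import numpy as np

def cooccurrence(sequences, n_protos):
    """Accumulate joint co-occurrence counts of prototype labels within sequences."""
    C = np.zeros((n_protos, n_protos))
    for seq in sequences:
        for i in seq:
            for j in seq:
                if i != j:
                    C[i, j] += 1.0
    return C

def bipartition(C, nodes):
    """Split `nodes` in two by the sign of the Fiedler vector (eigenvector of the
    second-smallest Laplacian eigenvalue) of the co-occurrence graph."""
    W = C[np.ix_(nodes, nodes)]
    W = W + W.T                    # symmetric affinity
    L = np.diag(W.sum(1)) - W      # graph Laplacian
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    left = [n for n, v in zip(nodes, fiedler) if v < 0]
    right = [n for n, v in zip(nodes, fiedler) if v >= 0]
    return left, right

def build_tree(C, nodes=None, min_size=2):
    """Recursively bipartition prototypes into a binary classification tree."""
    if nodes is None:
        nodes = list(range(len(C)))
    if len(nodes) <= min_size:
        return nodes
    left, right = bipartition(C, nodes)
    if not left or not right:      # degenerate split: stop here
        return nodes
    return [build_tree(C, left, min_size), build_tree(C, right, min_size)]
```

Prototypes that tend to appear in the same tracking sequences end up in the same subtree, so a leaf corresponds to a group of representations produced by the same kind of object or activity.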


Workshop on Applications of Computer Vision | 2013

Real-time tracking of low-resolution vehicles for wide-area persistent surveillance

Mark A. Keck; Luis Galup; Chris Stauffer

Live wide-area persistent surveillance (WAPS) systems must provide effective multi-target tracking on downlinked video streams in real time. This paper presents the first published aerial tracking system documented to process over 100 megapixels per second. The implementation addresses the challenges of the mosaicked, low-resolution, grayscale NITF imagery provided by most currently fielded WAPS platforms and the flexible computation architecture required for real-time performance. This paper also provides ground truth for repeatable evaluation of wide-area persistent surveillance on a publicly available 2009 dataset collected by AFRL [1], as well as a quantitative analysis of this real-time implementation. To our knowledge, this is the only publication that (1) provides details of a real-time implementation for detection and tracking in (2) mosaicked, composed imagery from a fielded WAPS sensor, and (3) provides annotation data and quantitative analysis for repeatable WAPS tracking experimentation in the computer vision community.

Collaboration


Dive into Chris Stauffer's collaborations.

Top Co-Authors

Kinh Tieu, Massachusetts Institute of Technology
W. Eric L. Grimson, Massachusetts Institute of Technology
Lily Lee, Massachusetts Institute of Technology
Boris Katz, Massachusetts Institute of Technology
Erik G. Miller, Massachusetts Institute of Technology