Publication


Featured research published by David Harwood.


Pattern Recognition | 1996

A comparative study of texture measures with classification based on featured distributions

Timo Ojala; Matti Pietikäinen; David Harwood

This paper evaluates the performance of some texture measures which have been successfully used in various applications, as well as some promising approaches proposed recently. For classification, a method based on Kullback discrimination of sample and prototype distributions is used. The classification results for single features with one-dimensional feature-value distributions and for pairs of complementary features with two-dimensional distributions are presented.
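As a rough illustration of the classification rule the abstract describes, the sketch below compares a sample feature-value histogram against class prototype histograms using a Kullback-style (log-likelihood) discrimination and picks the best-matching class. The function names, normalization, and epsilon smoothing are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kullback_discrimination(sample_hist, prototype_hist, eps=1e-10):
    """Log-likelihood (Kullback) discrimination between a sample
    feature-value distribution and a class prototype distribution.
    Lower values indicate a better match."""
    s = np.asarray(sample_hist, dtype=float)
    p = np.asarray(prototype_hist, dtype=float)
    s = s / (s.sum() + eps)   # normalize to probability distributions
    p = p / (p.sum() + eps)
    return float(np.sum(s * np.log((s + eps) / (p + eps))))

def classify(sample_hist, prototype_hists):
    """Assign the sample to the class whose prototype distribution
    it discriminates against the least."""
    scores = [kullback_discrimination(sample_hist, p) for p in prototype_hists]
    return int(np.argmin(scores))
```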


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

W4: real-time surveillance of people and their activities

Ismail Haritaoglu; David Harwood; Larry S. Davis

W4 is a real-time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. W4 employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. W4 can also determine whether people are carrying objects, segment those objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. W4 can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320×240 resolution images on a 400 MHz dual-Pentium II PC.


European Conference on Computer Vision | 2000

Non-parametric Model for Background Subtraction

Ahmed M. Elgammal; David Harwood; Larry S. Davis

Background subtraction is a method typically used to segment moving regions in image sequences taken from a static camera by comparing each new frame to a model of the scene background. We present a novel non-parametric background model and a background subtraction approach. The model can handle situations where the background of the scene is cluttered and not completely static but contains small motions such as tree branches and bushes. The model estimates the probability of observing pixel intensity values based on a sample of intensity values for each pixel. The model adapts quickly to changes in the scene, which enables very sensitive detection of moving targets. We also show how the model can use color information to suppress detection of shadows. The implementation of the model runs in real time for both gray-level and color imagery. Evaluation shows that this approach achieves very sensitive detection with very low false alarm rates.
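A minimal per-pixel sketch of the kind of kernel density estimate the abstract describes, assuming a Gaussian kernel with a fixed bandwidth and a fixed detection threshold (both illustrative choices rather than the paper's actual settings):

```python
import numpy as np

def background_probability(pixel_value, samples, sigma=15.0):
    """Estimate p(pixel_value) from the recent intensity samples at this pixel
    using a Gaussian kernel density estimate (illustrative bandwidth sigma)."""
    samples = np.asarray(samples, dtype=float)
    diffs = pixel_value - samples
    kernel = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return float(kernel.mean())

def is_foreground(pixel_value, samples, threshold=1e-4):
    """Flag the pixel as foreground when its value is unlikely under
    the per-pixel background density."""
    return background_probability(pixel_value, samples) < threshold
```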


Proceedings of the IEEE | 2002

Background and foreground modeling using nonparametric kernel density estimation for visual surveillance

Ahmed M. Elgammal; Ramani Duraiswami; David Harwood; Larry S. Davis

Automatic understanding of events happening at a site is the ultimate goal for many visual surveillance systems. Higher level understanding of events requires that certain lower level computer vision tasks be performed. These may include detection of unusual motion, tracking targets, labeling body parts, and understanding the interactions between people. To achieve many of these tasks, it is necessary to build representations of the appearance of objects in the scene. This paper focuses on two issues related to this problem. First, we construct a statistical representation of the scene background that supports sensitive detection of moving objects in the scene, but is robust to clutter arising out of natural scene variations. Second, we build statistical representations of the foreground regions (moving objects) that support their tracking and support occlusion reasoning. The probability density functions (pdfs) associated with the background and foreground are likely to vary from image to image and will not in general have a known parametric form. We accordingly utilize general nonparametric kernel density estimation techniques for building these statistical representations of the background and the foreground. These techniques estimate the pdf directly from the data without any assumptions about the underlying distributions. Example results from applications are presented.
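For context, a kernel density estimate of the kind referred to above has the following general form; the Gaussian product kernel and per-dimension bandwidths shown here are an illustrative choice rather than the paper's specific settings:

\hat{p}(\mathbf{x}) = \frac{1}{N}\sum_{i=1}^{N}\prod_{j=1}^{d}\frac{1}{\sqrt{2\pi\sigma_j^{2}}}\exp\!\left(-\frac{(x_j - x_{i,j})^{2}}{2\sigma_j^{2}}\right)

where x_1, ..., x_N are the recent samples observed at a pixel and d is the number of channels; no parametric form is assumed for the underlying distribution.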


Real-time Imaging | 2005

Real-time foreground-background segmentation using codebook model

Kyungnam Kim; Thanarat H. Chalidabhongse; David Harwood; Larry S. Davis

We present a real-time algorithm for foreground-background segmentation. Sample background values at each pixel are quantized into codebooks, which represent a compressed form of the background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. The codebook representation is efficient in memory and speed compared with other background modeling techniques. Our method can handle scenes containing moving backgrounds or illumination variations, and it achieves robust detection for different types of videos. We compared our method with other multimode modeling techniques. In addition to the basic algorithm, two features that improve the algorithm are presented: layered modeling/detection and adaptive codebook updating. For performance evaluation, we have applied perturbation detection rate analysis to four background subtraction algorithms and two videos of different types of scenes.
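The codebook idea can be sketched roughly as follows; the codeword fields, matching tolerance, and update rule are simplified placeholders rather than the published algorithm:

```python
class Codeword:
    """One codeword: a quantized range of background values seen at a pixel.
    The fields and matching rule here are illustrative simplifications."""
    def __init__(self, value):
        self.low = self.high = float(value)
        self.frequency = 1

    def matches(self, value, tol=10.0):
        return self.low - tol <= value <= self.high + tol

    def update(self, value):
        self.low = min(self.low, float(value))
        self.high = max(self.high, float(value))
        self.frequency += 1

def classify_pixel(value, codebook, tol=10.0):
    """Detection phase: return 'background' if the value matches an existing
    codeword (updating it), otherwise 'foreground'. During background training,
    a non-matching value would instead add a new codeword to the codebook."""
    for cw in codebook:
        if cw.matches(value, tol):
            cw.update(value)
            return "background"
    return "foreground"
```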


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

W4: Who? When? Where? What? A real time system for detecting and tracking people

Ismail Haritaoglu; David Harwood; Larry S. Davis

W4 is a real-time visual surveillance system for detecting and tracking people and monitoring their activities in an outdoor environment. It operates on monocular grayscale video imagery, or on video imagery from an infrared camera. Unlike many of the systems for tracking people, W4 makes no use of color cues; instead, W4 employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. W4 is capable of simultaneously tracking multiple people even with occlusion. It runs at 25 Hz for 320×240 resolution images on a dual-Pentium PC.


International Conference on Pattern Recognition | 1994

Performance evaluation of texture measures with classification based on Kullback discrimination of distributions

Timo Ojala; Matti Pietikäinen; David Harwood

This paper evaluates the performance of some texture measures which have been successfully used in various applications, as well as some new promising approaches. For classification, a method based on Kullback discrimination of sample and prototype distributions is used. The classification results for single features with one-dimensional feature-value distributions and for pairs of complementary features with two-dimensional distributions are presented.


International Conference on Computer Vision | 2009

Human detection using partial least squares analysis

William Robson Schwartz; Aniruddha Kembhavi; David Harwood; Larry S. Davis

Significant research has been devoted to detecting people in images and videos. In this paper we describe a human detection method that augments widely used edge-based features with texture and color information, providing us with a much richer descriptor set. This augmentation results in an extremely high-dimensional feature space (more than 170,000 dimensions). In such high-dimensional spaces, classical machine learning algorithms such as SVMs are nearly intractable with respect to training. Furthermore, the number of training samples is much smaller than the dimensionality of the feature space, by at least an order of magnitude. Finally, the extraction of features from a densely sampled grid structure leads to a high degree of multicollinearity. To circumvent these data characteristics, we employ Partial Least Squares (PLS) analysis, an efficient dimensionality reduction technique, one which preserves significant discriminative information, to project the data onto a much lower dimensional subspace (20 dimensions, reduced from the original 170,000). Our human detection system, employing PLS analysis over the enriched descriptor set, is shown to outperform state-of-the-art techniques on three varied datasets including the popular INRIA pedestrian dataset, the low-resolution gray-scale DaimlerChrysler pedestrian dataset, and the ETHZ pedestrian dataset consisting of full-length videos of crowded scenes.
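As a hedged sketch of the dimensionality-reduction step described above, the snippet below projects high-dimensional descriptors onto a 20-dimensional PLS subspace using scikit-learn. The random data, descriptor size, and labels are placeholders, and the paper's actual features and downstream classifier are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Illustrative stand-ins for the dense human-detection descriptors:
# X holds one high-dimensional descriptor per training window,
# y is +1 for human windows and -1 for background windows.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2000))   # placeholder for the ~170,000-dim descriptors
y = rng.choice([-1.0, 1.0], size=500)

# Fit PLS with a small number of latent components (20 in the paper),
# using the class labels to guide the projection.
pls = PLSRegression(n_components=20)
pls.fit(X, y)

# Project descriptors onto the low-dimensional PLS subspace;
# a conventional classifier can then be trained on these scores.
scores = pls.transform(X)
print(scores.shape)  # (500, 20)
```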


Computer Vision and Pattern Recognition | 1998

W4: A Real Time System for Detecting and Tracking People

Ismail Haritaoglu; David Harwood; Larry S. Davis

W4 is a real-time visual surveillance system for detecting and tracking people and monitoring their activities in an outdoor environment. It operates on monocular grayscale video imagery, or on video imagery from an infrared camera. Unlike many of the systems for tracking people, W4 makes no use of color cues. Instead, W4 employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. W4 is capable of simultaneously tracking multiple people even with occlusion. It runs at 25 Hz for 320×240 resolution images on a dual-Pentium PC.


International Conference on Image Processing | 2004

Background modeling and subtraction by codebook construction

Kyungnam Kim; Thanarat H. Chalidabhongse; David Harwood; Larry S. Davis

We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks, which represent a compressed form of the background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.

Collaboration


Dive into David Harwood's collaborations.

Top Co-Authors

Sotirios G. Ziavras

New Jersey Institute of Technology

Thor Bestul

United States Naval Research Laboratory
