Publication


Featured research published by Brian Valentine.


Computer Vision and Pattern Recognition | 2007

Multimodal Mean Adaptive Backgrounding for Embedded Real-Time Video Surveillance

Senyo Apewokin; Brian Valentine; Linda M. Wills; D. Scott Wills; Antonio Gentile

Automated video surveillance applications require accurate separation of foreground and background image content. Cost-sensitive embedded platforms place real-time performance and efficiency demands on techniques to accomplish this task. In this paper, we evaluate pixel-level foreground extraction techniques for a low-cost integrated surveillance system. We introduce a new adaptive technique, multimodal mean (MM), which balances accuracy, performance, and efficiency to meet embedded system requirements. Our evaluation compares several pixel-level foreground extraction techniques in terms of their computation and storage requirements, and their functional accuracy on three representative video sequences. The proposed MM algorithm delivers accuracy comparable to that of the best alternative (Mixture of Gaussians) with a 6× improvement in execution time and an 18% reduction in required storage.
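
The abstract describes the model only at a high level; the following minimal sketch shows one plausible reading of a per-pixel multimodal mean cell, written in Python rather than the paper's fixed-point implementation. K, the match threshold, and the background count are illustrative values, not the paper's tuned parameters.

    K = 4            # modes kept per pixel (illustrative)
    MATCH_T = 15.0   # intensity distance for a mode match (illustrative)
    BG_COUNT = 30    # observations before a mode is treated as background

    class PixelModel:
        """One pixel's multimodal cell: up to K (running mean, count) pairs."""
        def __init__(self):
            self.means = []
            self.counts = []

        def observe(self, v):
            """Return True if v matches an established background mode;
            adapt the cell either way."""
            for i, m in enumerate(self.means):
                if abs(v - m) < MATCH_T:
                    self.counts[i] += 1
                    self.means[i] += (v - m) / self.counts[i]  # running mean
                    return self.counts[i] >= BG_COUNT
            if len(self.means) < K:
                self.means.append(float(v)); self.counts.append(1)
            else:
                i = self.counts.index(min(self.counts))  # evict weakest mode
                self.means[i], self.counts[i] = float(v), 1
            return False  # new or rare mode -> foreground

Running observe on every pixel of every frame yields the foreground mask (a pixel is foreground when observe returns False); the paper's version additionally uses fixed-point arithmetic and periodic count decay, omitted here.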


Book chapter | 2009

Embedded Real-Time Surveillance Using Multimodal Mean Background Modeling

Senyo Apewokin; Brian Valentine; Dana Forsthoefel; Linda M. Wills; Scott Wills; Antonio Gentile

Automated video surveillance applications require accurate separation of foreground and background image content. Cost-sensitive embedded platforms place real-time performance and efficiency demands on techniques to accomplish this task. In this chapter, we evaluate pixel-level foreground extraction techniques for a low-cost integrated surveillance system. We introduce a new adaptive background modeling technique, multimodal mean (MM), which balances accuracy, performance, and efficiency to meet embedded system requirements. Our evaluation compares several pixel-level foreground extraction techniques in terms of their computation and storage requirements, and their functional accuracy on three representative video sequences. The proposed MM algorithm delivers accuracy comparable to that of the best alternative (mixture of Gaussians) with a 6× improvement in execution time and an 18% reduction in required storage on an eBox-2300 embedded platform.


Computer Vision and Pattern Recognition | 2008

Tracking multiple pedestrians in real-time using kinematics

Senyo Apewokin; Brian Valentine; R. Bales; Linda M. Wills; Scott Wills

We present an algorithm for real-time tracking of multiple pedestrians in a dynamic scene. The algorithm is targeted for embedded systems and reduces computational and storage costs by using an inexpensive kinematic tracking model with only fixed-point arithmetic representations. Our algorithm leverages the observation that pedestrians in a dynamic scene tend to move with uniform speed over a small number of consecutive frames. We use a multimodal background modeling technique to accurately segment the foreground (moving people) from the background. We then use connectivity analysis to identify blobs in the foreground and calculate the center of mass of each blob. Finally, we establish correspondence between the center of mass of each blob in the current frame and center-of-mass information gathered from the two immediately preceding frames. We evaluate our algorithm on a real outdoor video sequence taken with an inexpensive webcam. Our implementation successfully tracks each pedestrian from frame to frame in real time. Our algorithm performs well in challenging situations resulting from occlusion and crowded conditions, running on an eBox-2300 Thin Client VESA PC.
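
The uniform-speed assumption is what makes the correspondence step cheap enough for fixed-point embedded hardware. A minimal sketch of that step, with hypothetical names and an illustrative gating distance (the paper does not publish these constants):

    import numpy as np

    def predict(p1, p2):
        # Constant-velocity guess from the two preceding centroids.
        return 2 * p1 - p2

    def match_blobs(tracks, centroids, max_dist=25.0):
        """Greedy nearest-neighbor correspondence between each track's
        predicted position and the current frame's blob centroids.
        tracks: {track_id: (last_centroid, second_to_last_centroid)}
        centroids: list of np.array([x, y]) from connectivity analysis."""
        assigned, free = {}, set(range(len(centroids)))
        for tid, (p1, p2) in tracks.items():
            if not free:
                break
            pred = predict(p1, p2)
            j = min(free, key=lambda k: np.linalg.norm(centroids[k] - pred))
            if np.linalg.norm(centroids[j] - pred) < max_dist:
                assigned[tid] = j
                free.remove(j)
        return assigned

A real implementation would replace the floating-point norms with fixed-point distance computations, as the paper requires.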


EURASIP Journal on Image and Video Processing | 2011

BigBackground-based illumination compensation for surveillance video

M. Ryan Bales; Dana Forsthoefel; Brian Valentine; D. Scott Wills; Linda M. Wills

Illumination changes cause challenging problems for video surveillance algorithms, as objects of interest become masked by changes in background appearance. Such algorithms should maintain a consistent perception of a scene regardless of illumination variation. This work introduces a concept we call BigBackground, a model for representing large, persistent scene features based on chromatic self-similarity. This model is found to comprise 50% to 90% of surveillance scenes. The large, stable regions represented by the model are used as reference points for performing illumination compensation. The presented compensation technique is demonstrated to decrease false-positive classification of background pixels by an average of 83% compared to the uncompensated case, and by 25% to 43% compared to compensation techniques from the literature.
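
The paper's central move is to treat the large, stable regions as photometric anchors. A minimal sketch of a global-gain variant, assuming a boolean mask of BigBackground pixels and a baseline brightness recorded when the model was built (both names are hypothetical):

    import numpy as np

    def compensate(frame, ref_mask, ref_baseline, eps=1e-6):
        """Scale the frame so the stable reference region returns to its
        baseline brightness before background subtraction runs.
        frame: grayscale uint8 image; ref_mask: boolean array marking the
        large, chromatically self-similar region."""
        cur = float(frame[ref_mask].mean())
        gain = ref_baseline / max(cur, eps)
        return np.clip(frame.astype(float) * gain, 0, 255).astype(np.uint8)

The paper likely applies finer-grained correction than a single global gain; this sketch only illustrates the reference-region idea.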


Advanced Video and Signal Based Surveillance | 2007

Midground object detection in real world video scenes

Brian Valentine; Senyo Apewokin; Linda M. Wills; D. Scott Wills; Antonio Gentile

Traditional video scene analysis depends on accurate background modeling to identify salient foreground objects. However, in many important surveillance applications, saliency is defined by the appearance of a new, non-ephemeral object that lies between the foreground and background. This midground realm is defined by a temporal window following the object's appearance; it also depends on adaptive background modeling to allow detection under scene variations (e.g., occlusion, small illumination changes). The human visual system is ill-suited for midground detection. For example, when surveying a busy airline terminal, it is difficult (but important) to detect an unattended bag that appears in the scene. This paper introduces a midground detection technique which emphasizes computational and storage efficiency. The approach uses a new adaptive, pixel-level modeling technique derived from existing backgrounding methods. Experimental results demonstrate that this technique can accurately and efficiently identify midground objects in real-world scenes, including the PETS2006 and AVSS2007 challenge datasets.
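
The temporal window can be pictured as a three-way split on how long a pixel's matched model mode has persisted. A minimal sketch with illustrative thresholds (the paper's window lengths are not reproduced here):

    def classify(mode_age_frames, t_fg=30, t_bg=900):
        """Label a pixel by the age of its matched background-model mode.
        Younger than t_fg frames: ordinary transient foreground. Older than
        t_bg: absorbed into the background. In between: midground, e.g. a
        newly abandoned bag that has stopped moving but is not yet background."""
        if mode_age_frames < t_fg:
            return "foreground"
        if mode_age_frames < t_bg:
            return "midground"
        return "background"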


Computer Vision and Image Understanding | 2010

An efficient, chromatic clustering-based background model for embedded vision platforms

Brian Valentine; Senyo Apewokin; Linda M. Wills; D. Scott Wills

People naturally identify rapidly moving foreground and ignore persistent background. Identifying background pixels belonging to stable, chromatically clustered objects is important for efficient scene processing. This paper presents a technique that exploits this facet of human perception to improve performance and efficiency of background modeling on embedded vision platforms. Previous work on the Multimodal Mean (MMean) approach achieves high quality foreground extraction (comparable to Mixture of Gaussians (MoG)) using fast integer computation and a compact memory representation. This paper introduces a more efficient hybrid technique that combines MMean with palette-based background matching based on the chromatic distribution in the scene. This hybrid technique suppresses computationally expensive model update and adaptation, providing a 45% execution time speedup over MMean. It reduces model storage requirements by 58% over a MMean-only implementation. This background analysis enables higher frame rate, lower cost embedded vision systems.
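
The speedup comes from the ordering of the two tests: a palette hit skips the MMean update entirely. A minimal sketch of that dispatch, assuming a small (N, 3) array of dominant background colors and any per-pixel MMean test as the fallback (names and the match threshold are illustrative):

    import numpy as np

    def is_background(pixel, palette, mmean_is_bg, match_t=20.0):
        """Hybrid test: cheap chromatic palette lookup first; fall back to
        the full multimodal-mean model only on a palette miss."""
        d = np.linalg.norm(palette - np.asarray(pixel, dtype=float), axis=1)
        if d.min() < match_t:
            return True            # stable palette color: no model update needed
        return mmean_is_bg(pixel)  # full MMean path with adaptation

Suppressing the update on palette hits is presumably what buys the reported 45% execution-time saving, since most pixels in typical scenes fall within the stable chromatic clusters.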


Journal of Multimedia | 2010

Cat-tail DMA: efficient image data transport for multicore embedded mobile systems

Senyo Apewokin; Brian Valentine; Linda M. Wills; D. Scott Wills



Signal Processing Systems | 2011

Real-Time Adaptive Background Modeling for Multicore Embedded Systems

Senyo Apewokin; Brian Valentine; Jee Choi; Linda M. Wills; D. Scott Wills

Current trends in microprocessor design integrate several autonomous processing cores onto the same die. These multicore architectures are particularly well-suited for computer vision applications, where it is typical to perform the same set of operations repeatedly over large datasets. These memory- and computation-intensive applications can reap tremendous performance and accuracy benefits from concurrent execution on multicore processors. However, cost-sensitive embedded platforms place real-time performance and efficiency demands on techniques to accomplish this task. Furthermore, parallelization and partitioning techniques that allow the application to fully leverage the processing capabilities of each computing core are required for multicore embedded vision systems. In this paper, we evaluate background modeling techniques on a multicore embedded platform, since this process dominates the execution and storage costs of common video analysis workloads. We introduce a new adaptive backgrounding technique, multimodal mean, which balances accuracy, performance, and efficiency to meet embedded system requirements. Our evaluation compares several pixel-level background modeling techniques in terms of their computation and storage requirements, and their functional accuracy on three representative video sequences, across a range of processing and parallelization configurations. We show that the multimodal mean algorithm delivers accuracy comparable to that of the best alternative (Mixture of Gaussians) with a 3.4× improvement in execution time and a 50% reduction in required storage for optimal block processing on each core. In our analysis of several processing and parallelization configurations, we show how this algorithm can be optimized for embedded multicore performance, resulting in a 25% performance improvement over the baseline processing method.
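
The partitioning question is how to split per-pixel model updates across cores. A minimal sketch of a row-band split, using Python threads purely to show the structure (Python's GIL prevents real speedup here; the paper's platform runs native code, and its tuned per-core block sizes are not reproduced):

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def process_frame(frame, band_models, update_band, n_workers=4):
        """Split the frame into horizontal bands and update each band's
        background model on its own worker. update_band(band_pixels, model)
        runs the per-band MMean update and returns that band's foreground mask."""
        bands = np.array_split(np.arange(frame.shape[0]), n_workers)
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            futures = [pool.submit(update_band, frame[r[0]:r[-1] + 1], band_models[i])
                       for i, r in enumerate(bands) if r.size]
            return np.concatenate([f.result() for f in futures])

Because each pixel's model is independent, the bands need no synchronization; the block-size tuning the paper reports trades cache locality against per-core load balance.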


Machine Vision Applications | 2008

Edge noise removal in multimodal background modeling techniques

Jee Choi; Senyo Apewokin; Brian Valentine; D. S. Wills; Linda M. Wills

Traditional video scene analysis depends on accurate background modeling techniques to segment objects of interest. Multimodal background models such as Mixture of Gaussians (MoG) and Multimodal Mean (MM) are capable of handling dynamic scene elements and incorporating new objects into the background. Due to the adaptive nature of these techniques, new pixels must be observed consistently over time before they can be incorporated into the background. However, pixels on the boundary between two colors tend to fluctuate more, creating false-positive pixels that result in less accurate foreground segmentation. To correct this, a simple and computationally efficient edge-detection-based algorithm is proposed. On average, approximately 70% of these false positives can be eliminated with little computational overhead.
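
One simple realization of the idea is to veto foreground pixels that sit on strong color edges of the modeled background, since those are exactly the boundary pixels that flicker. A minimal sketch under that assumption (the threshold and gradient operator are illustrative; the paper's exact filter is not reproduced):

    import numpy as np

    def suppress_edge_noise(fg_mask, background, grad_t=60.0):
        """Drop foreground detections that coincide with strong gradients
        in the grayscale background image, where color-boundary flicker
        produces false positives."""
        gy, gx = np.gradient(background.astype(float))
        edges = (np.abs(gy) + np.abs(gx)) > grad_t
        return fg_mask & ~edges

Genuine foreground objects overlap background edges only along thin contours, so the veto mostly removes boundary flicker rather than true detections.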


International Conference on Distributed Smart Cameras | 2008

Bypassing BigBackground: An efficient hybrid background modeling algorithm for embedded video surveillance

Brian Valentine; Jee Choi; Senyo Apewokin; Linda M. Wills; D. Scott Wills

As computer vision algorithms move to embedded platforms within distributed smart camera systems, greater attention must be placed on the efficient use of storage and computational resources. Significant savings can be made in background modeling by identifying large areas that are homogeneous in color and sparse in activity. This paper presents a pixel-based background model that identifies such areas, called BigBackground, from a single image frame for fast processing and efficient memory usage. We use a small 15-color palette to identify and represent BigBackground colors. Results on a variety of outdoor and standard test sequences show that our algorithm performs in real-time on an embedded processing platform (the eBox-2300) with reliable background/foreground segmentation accuracy.
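
The 15-color palette can be derived from a single frame by coarse chromatic binning. A minimal sketch of one way to do it (the paper's actual clustering procedure may differ; the bin size is illustrative):

    import numpy as np

    def build_palette(frame, n_colors=15, bin_size=32):
        """Quantize an RGB frame into coarse histogram bins and keep the
        centers of the most populous bins as the BigBackground palette.
        frame: (H, W, 3) uint8 image; returns (n_colors, 3) uint8 colors."""
        q = (frame // bin_size).reshape(-1, 3)
        bins, counts = np.unique(q, axis=0, return_counts=True)
        top = bins[np.argsort(counts)[::-1][:n_colors]]
        return (top.astype(int) * bin_size + bin_size // 2).astype(np.uint8)

Pixels within a small distance of any palette color are then treated as BigBackground, as in the hybrid dispatch sketched earlier.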

Collaboration


Dive into Brian Valentine's collaboration.

Top Co-Authors

Linda M. Wills, Georgia Institute of Technology
Senyo Apewokin, Georgia Institute of Technology
D. Scott Wills, Georgia Institute of Technology
Jee Choi, Georgia Institute of Technology
Dana Forsthoefel, Georgia Institute of Technology
M. Ryan Bales, Georgia Institute of Technology