
Publications


Featured research published by Glen William Brooksby.


Computer Vision and Pattern Recognition | 2006

Multi-Object Tracking Through Simultaneous Long Occlusions and Split-Merge Conditions

A. G. Amitha Perera; Chukka Srinivas; Anthony Hoogs; Glen William Brooksby; Wensheng Hu

A fundamental requirement for effective automated analysis of object behavior and interactions in video is that each object must be consistently identified over time. This is difficult when the objects are often occluded for long periods: nearly all tracking algorithms will terminate a track with loss of identity on a long gap. The problem is further confounded by objects in close proximity, tracking failures due to shadows, etc. Recently, some work has been done to address these issues using higher level reasoning, by linking tracks from multiple objects over long gaps. However, these efforts have assumed a one-to-one correspondence between tracks on either side of the gap. This is often not true in real scenarios of interest, where the objects are closely spaced and dynamically occlude each other, causing trackers to merge objects into single tracks. In this paper, we show how to efficiently handle splitting and merging during track linking. Moreover, we show that we can maintain the identities of objects that merge together and subsequently split. This enables the identity of objects to be maintained throughout long sequences with difficult conditions. We demonstrate our approach on a highly challenging, oblique-view video sequence of dense traffic of a highway interchange. We successfully track the large majority of the hundreds of moving vehicles in the scene, many in close proximity, through long occlusions and shadows.


Computer Vision and Pattern Recognition | 2005

A unified framework for tracking through occlusions and across sensor gaps

Robert August Kaucic; A. G. Amitha Perera; Glen William Brooksby; John P. Kaufhold; Anthony Hoogs

A common difficulty encountered in tracking applications is how to track an object that becomes totally occluded, possibly for a significant period of time. Another problem is how to associate objects, or tracklets, across non-overlapping cameras, or between observations of a moving sensor that switches fields of regard. A third problem is how to update appearance models for tracked objects over time. As opposed to using a comprehensive multi-object tracker that must simultaneously deal with these tracking challenges, we present a novel, modular framework that handles each of these problems in a unified manner by the initialization, tracking, and linking of high-confidence tracklets. In this track/suspend/match paradigm, we first analyze the scene to identify areas where tracked objects are likely to become occluded. Tracking is then suspended on occluded objects and re-initiated when they emerge from behind the occlusion. We then associate, or match, suspended tracklets with the new tracklets using full kinematic models for object motion and Gibbsian distributions for object appearance in order to complete the track through the occlusion. Sensor gaps are handled in a similar manner, where tracking is suspended when the sensor looks away and then re-initiated when the sensor returns. Changes in object appearance and orientation during tracking are also seamlessly handled in this framework. Tracklets with low lock scores are terminated. Tracking then resumes on untracked movers with corresponding updated appearance models. These new tracklets are then linked back to the terminated ones as appropriate. Fully automatic tracking results from a moving sensor are presented.
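The track/suspend/match step described above hinges on kinematic prediction across the occlusion or sensor gap. Below is a minimal sketch of that gating idea in Python, with illustrative names and a constant-velocity model; the paper's full method also uses Gibbsian appearance models, which are omitted here:

```python
import numpy as np

def predict_across_gap(last_pos, velocity, gap_frames):
    """Constant-velocity prediction of an occluded object's position."""
    return last_pos + velocity * gap_frames

def match_tracklets(suspended, new_tracklets, gap_frames, gate=5.0):
    """Greedy kinematic gating: link each suspended tracklet to the
    nearest new tracklet whose start falls inside the gate radius."""
    links, used = {}, set()
    for sid, (pos, vel) in suspended.items():
        pred = predict_across_gap(pos, vel, gap_frames)
        best, best_d = None, gate
        for nid, start in new_tracklets.items():
            if nid in used:
                continue
            d = np.linalg.norm(pred - start)
            if d < best_d:
                best, best_d = nid, d
        if best is not None:
            links[sid] = best
            used.add(best)
    return links

# Object A was last seen at the origin moving right; after a 10-frame gap,
# tracklet B starts near the predicted position, tracklet C does not.
suspended = {"A": (np.array([0.0, 0.0]), np.array([1.0, 0.0]))}
new_tracklets = {"B": np.array([10.5, 0.3]), "C": np.array([50.0, 50.0])}
print(match_tracklets(suspended, new_tracklets, gap_frames=10))  # {'A': 'B'}
```

A real system would replace the greedy nearest-neighbor step with a global assignment and fold in appearance similarity; this sketch shows only the kinematic gate.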


Applied Imagery Pattern Recognition Workshop | 2006

Evaluation of Algorithms for Tracking Multiple Objects in Video

A. G. Amitha Perera; Anthony Hoogs; Chukka Srinivas; Glen William Brooksby; Wensheng Hu

As video tracking research matures, the issue of tracker performance evaluation has emerged as a research topic in its own right, as evidenced by a series of workshops devoted solely to this purpose (the workshops on performance evaluation of tracking and surveillance-PETS). However, evaluations such as PETS have been limited to small scenarios with a handful of moving objects. In this paper, we present an evaluation methodology and set of experiments focused on large-scale video tracking problems with hundreds of objects in close proximity. The scale and complexity of this data exposes a number of issues. First, the association of computed tracks to image-truth tracks may have multiple plausible solutions, resulting in a combinatorial grouping problem that must be solved with an approximate solution. Second, computed tracks may be only partially correct, complicating the association problem further and indicating that multiple measures are required to characterize performance. We have created a system that associates computed tracks to manually-generated image-truth tracks, and calculates various measures such as the per-frame probability of detection, false alarm rate, and fragmentation, which is the number of computed tracks associated to a single track. We also normalize fragmentation by track length to reward fewer computed tracks for longer true tracks. The measures were used to compare three tracking methods on an aerial video sequence containing hundreds of objects, long occlusions, and deep shadows.
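The per-frame detection, false-alarm, and length-normalized fragmentation measures can be sketched as follows (a simplified illustration with invented names, not the paper's implementation; the frame sets and track associations are assumed to come from a precomputed track-to-truth assignment):

```python
def track_metrics(truth_frames, computed_frames, associated_tracks, track_len):
    """Per-frame detection and false-alarm measures for one true track,
    plus fragmentation normalized by true-track length."""
    matched = truth_frames & computed_frames
    pd = len(matched) / len(truth_frames)                    # per-frame probability of detection
    far = len(computed_frames - truth_frames) / max(len(computed_frames), 1)
    fragmentation = len(associated_tracks)                   # computed tracks for this true track
    norm_frag = fragmentation / track_len                    # rewards fewer tracks on long truths
    return pd, far, norm_frag

# A 100-frame true track covered by two computed tracks spanning frames 5-99.
pd, far, nf = track_metrics(set(range(100)), set(range(5, 100)),
                            {"t1", "t2"}, track_len=100)
# pd = 0.95, far = 0.0, normalized fragmentation = 0.02
```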


IEEE Workshop on Motion and Video Computing | 2008

Detecting Semantic Group Activities Using Relational Clustering

Anthony Hoogs; Steve Bush; Glen William Brooksby; A. G. Amitha Perera; Mark Edward Dausch; Nils Krahnstoever

Existing approaches to detect modeled activities in video often require the precise specification of the number of actors or roles, or spatial constraints, or other limitations that create difficulties for generic detection of group activities. We develop an approach to detect group behaviors in video, where an arbitrary number of participants are involved. We address scene conditions with non-participating objects, an arbitrary number of instances of the behaviors of interest, and arbitrary locations for those instances. Our approach uses semantic spatio-temporal predicates to define activities, and relational clustering to identify groups of objects for which the relational predicates are mutually true over time. The algorithm handles conditions where object segmentation and tracking are highly unreliable, such as busy scenes with occluders. Results are shown for the group activities of crowd formation and dispersal on low-resolution, far-field video surveillance data.


Computer Vision and Pattern Recognition | 2006

Moving Object Segmentation using Scene Understanding

A. G. Amitha Perera; Glen William Brooksby; Anthony Hoogs; Gianfranco Doretto

We present a novel approach to moving object detection in video taken from a translating, rotating and zooming sensor, with a focus on detecting very small objects in as few frames as possible. The primary innovation is to incorporate automatically computed scene understanding of the video directly into the motion segmentation process. Scene understanding provides spatial and semantic context that is used to improve frame-to-frame homography computation, as well as direct reduction of false alarms. The method can be applied to virtually any motion segmentation algorithm, and we explore its utility for three: frame differencing, tensor voting, and generalized PCA. The approach is especially effective on sequences with large scene depth and much parallax, as often occurs when the sensor is close to the scene. In one difficult sequence, our results show an 8-fold reduction of false positives on average, with essentially no impact on the true positive rate. We also show how scene understanding can be used to increase the accuracy of frame-to-frame homography estimates.
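As a rough illustration of how scene context can cut false alarms in the frame-differencing variant, here is a hedged sketch with invented names. It assumes the frames are already homography-aligned (the registration step itself is not shown) and that a semantic labeling supplies a mask of reliable scene regions:

```python
import numpy as np

def masked_frame_difference(prev, curr, static_mask, thresh=20):
    """Frame differencing with a semantic scene mask: pixels in regions
    prone to false alarms (e.g., vegetation, water) are suppressed.
    prev/curr are homography-aligned grayscale frames."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    motion = diff > thresh
    return motion & static_mask  # keep detections only where the scene is reliable

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 255       # a real mover
curr[2, 3] = 255       # change inside an unreliable region
mask = np.ones((4, 4), dtype=bool)
mask[2, 3] = False     # semantic labeling marks this pixel as unreliable
print(masked_frame_difference(prev, curr, mask).sum())  # 1
```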


Optical Measurement Systems for Industrial Inspection IV | 2005

Improving 3D surface measurement accuracy on metallic surfaces

Raghu Kokku; Glen William Brooksby

3D surface measurement of machine parts is challenging given increasing demands for micron-level measurement accuracy and speed. Optical-metrology techniques based on stereovision face unique challenges in feature extraction due to the complexity of machine parts and their surface finish. For complicated parts, structured laser light is projected onto the surface to generate unique reference features for stereo reconstruction. The projected laser light is scattered by various surface phenomena (diffuse scattering, multiple reflections). These scattered and diffused laser lines induce spurious features on the surface, which mislead the surface reconstruction. When targeting micron-level accuracy, sub-pixel feature extraction is also affected by speckle noise and by biases due to sampling, surface shape, and other factors. In this paper, we propose a new method for improving the accuracy of 3D surface reconstruction on shiny metallic surfaces. The proposed template-based guidance approach with tangent-based feature extraction improves detection accuracy in the affected regions by 30%.


IEEE/AIAA Digital Avionics Systems Conference | 2011

Air-ground trajectory synchronization — Metrics and simulation results

David So Keung Chan; Glen William Brooksby; Joachim Karl Ulf Hochwarth; Joel Kenneth Klooster; Sergio Torres

It has been established that Trajectory Based Operations are a key component of future Air Traffic Management systems, as currently underway in the United States with NextGen and in Europe with SESAR. One of the major goals of Trajectory Based Operations is to provide participants with accurate 4-Dimensional Trajectories predicting the future location of the aircraft with a high level of certainty. This is not realizable without improving the coordination and interoperability of air and ground systems. By leveraging GE's Flight Management System and aircraft expertise with Lockheed Martin's Air Traffic Control domain expertise, including the En Route Automation Modernization system, a research initiative has been formed to explore and evaluate means of better integrating air and ground systems to bring airspace operations closer to the business-optimal goal in a safe and efficient manner. The two main components of this effort are trajectory synchronization and trajectory negotiation. Trajectory synchronization will essentially result in a more complete flight plan in the air and a more accurate trajectory representation on the ground, which is a prerequisite for trajectory negotiation. This paper briefly discusses the high-level trajectory synchronization algorithm and its implementation in a fast-time simulation environment that incorporates actual Flight Management and Air Traffic Control software. It then focuses on the analysis of metrics and simulation results from several case studies. The conclusion of these studies shows that implementation of the trajectory synchronization algorithm using Controller-Pilot Data Link Communications messages as well as the Automatic Dependent Surveillance-Contract service (including the Extended Projected Profile application) achieves consistent trajectory predictions between the air and ground systems.


Applied Imagery Pattern Recognition Workshop | 2009

Objective performance evaluation of a moving object super-resolution system

J. Brandon Laflen; Christopher R. Greco; Glen William Brooksby; Eamon B. Barrett

We present an evaluation of the performance of moving object super-resolution (MOSR) through objective image quality metrics. MOSR systems require detection, tracking, and local sub-pixel registration of objects of interest prior to super-resolution. Nevertheless, MOSR can provide additional information otherwise undetected in raw video. We measure the extent of this benefit through the following objective image quality metrics: (1) Modulation Transfer Function (MTF), (2) Subjective Quality Factor (SQF), (3) Image Quality from the Natural Scene (MITRE IQM), and (4) minimum resolvable Rayleigh distance (RD). We also study the impact of non-ideal factors, such as image noise, frame-to-frame jitter, and object rotation, upon this performance. To study these factors, we generated controlled sequences of synthetic images of targets moving against a random field. The targets exemplified aspects of the objective metrics, containing either horizontal, vertical, or circular sinusoidal gratings, or a field of impulses separated by varying distances. High-resolution sequences were rendered and then appropriately filtered, assuming a circular aperture and a square, filled collector, prior to decimation. A fully implemented MOSR system was used to generate super-resolved images of the moving targets. The MTF, SQF, IQM, and RD measures were acquired from each of the high-, low-, and super-resolved image sequences, and indicate the objective benefit of super-resolution. To contrast with MOSR, the low-resolution sequences were also up-sampled in the Fourier domain, and the objective measures were collected for these Fourier up-sampled sequences as well. Our study consisted of over 800 different sequences, representing various combinations of non-ideal factors.
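Measuring the MTF from a sinusoidal grating reduces to comparing Michelson contrast before and after the imaging chain. A small illustrative sketch of that computation (not the evaluation system itself):

```python
import numpy as np

def modulation(signal):
    """Michelson contrast of a sinusoidal grating profile; the ratio of
    output to input modulation at a given frequency is one MTF sample."""
    i_max, i_min = float(np.max(signal)), float(np.min(signal))
    return (i_max - i_min) / (i_max + i_min)

x = np.linspace(0, 4 * np.pi, 200)
grating = 100 + 50 * np.sin(x)   # ideal input grating (contrast 0.5)
blurred = 100 + 20 * np.sin(x)   # same grating after the imaging chain (contrast 0.2)
mtf_sample = modulation(blurred) / modulation(grating)
print(round(mtf_sample, 2))  # 0.4
```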


Archive | 2011

Video Analytics for Force Protection

Peter Henry Tu; Glen William Brooksby; Gianfranco Doretto; Donald Wagner Hamilton; Nils Krahnstoever; J. Brandon Laflen; Xiaoming Liu; Kedar Anil Patwardhan; Thomas B. Sebastian; Yan Tong; Jilin Tu; Frederick Wilson Wheeler; Christopher Michael Wynnyk; Yi Yao; Ting Yu

For troop and military installation protection, modern computer vision methods must be harnessed to enable a comprehensive approach to contextual awareness. In this chapter we present a collection of intelligent video technologies currently under development at the General Electric Global Research Center, which can be applied to this challenging problem. These technologies include: aerial analysis for object detection and tracking, site-wide tracking from networks of fixed video cameras, person detection from moving platforms, biometrics at a distance and facial analysis for the purposes of inferring intent. We hypothesize that a robust approach to troop protection will require the synthesis of all of these technologies into a comprehensive system of systems.


Proceedings of SPIE | 2010

A comparative study of four change detection methods for aerial photography applications

Gil Abramovich; Glen William Brooksby; Stephen F. Bush; Swaminathan Manickam; Özge Can Özcanli; Benjamin D. Garrett

We present four new change detection methods that create an automated change map from a probability map. In this case, the probability map was derived from a 3D model. The primary application of interest is aerial photography, where the appearance, disappearance, or change in position of small objects of a selectable class (e.g., cars) must be detected at a high success rate in spite of variations in magnification, lighting, and background across the image. The methods rely on an earlier derivation of a probability map. We describe the theory of the four methods, namely Bernoulli variables, Markov Random Fields, connected change, and relaxation-based segmentation, and evaluate and compare their performance experimentally on a set of probability maps derived from aerial photographs.
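A minimal sketch of the simplest of the four ideas, treating each pixel as an independent Bernoulli variable thresholded on its change probability and then keeping connected change regions (illustrative only; the MRF and relaxation-based variants are more involved):

```python
import numpy as np
from collections import deque

def change_map(prob, thresh=0.5, min_size=2):
    """Threshold a per-pixel change-probability map (each pixel changes
    independently with probability p), then keep only 4-connected
    regions of at least min_size pixels."""
    binary = prob > thresh
    out = np.zeros_like(binary)
    seen = np.zeros_like(binary)
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                # BFS to collect one 4-connected component
                comp, q = [], deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_size:   # size filter rejects isolated noise pixels
                    for y, x in comp:
                        out[y, x] = True
    return out

# A two-pixel change region survives; an isolated high-probability pixel is dropped.
prob = np.array([[0.9, 0.9, 0.1],
                 [0.1, 0.1, 0.1],
                 [0.1, 0.1, 0.8]])
print(change_map(prob).sum())  # 2
```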
