Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Gregory D. Hager is active.

Publication


Featured research published by Gregory D. Hager.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

Efficient region tracking with parametric models of geometry and illumination

Gregory D. Hager; Peter N. Belhumeur

As an object moves through the field of view of a camera, the images of the object may change dramatically. This is not simply due to the translation of the object across the image plane; complications arise due to the fact that the object undergoes changes in pose relative to the viewing camera, in illumination relative to light sources, and may even become partially or fully occluded. We develop an efficient general framework for object tracking, which addresses each of these complications. We first develop a computationally efficient method for handling the geometric distortions produced by changes in pose. We then combine geometry and illumination into an algorithm that tracks large image regions using no more computation than would be required to track with no accommodation for illumination changes. Finally, we augment these methods with techniques from robust statistics and treat occluded regions on the object as statistical outliers. Experimental results are given to demonstrate the effectiveness of our methods.
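
The following is a minimal numpy sketch (not the authors' code) of the idea this abstract describes: explain the residual between the warped image patch and the template jointly with a motion Jacobian and a linear illumination basis, and down-weight occluded pixels as statistical outliers. The Huber threshold, the number of IRLS iterations, and the assumption that the Jacobian and basis are precomputed are all illustrative.

```python
import numpy as np

def track_step(template, warped_patch, motion_jac, illum_basis, huber_k=0.05):
    """One robust least-squares update of the warp parameters.

    template     : (N,)   flattened reference patch
    warped_patch : (N,)   current image resampled through the current warp
    motion_jac   : (N, k) intensity change per unit change of each warp parameter
    illum_basis  : (N, m) basis images spanning the expected illumination changes
    Returns (warp parameter update, illumination coefficients).
    """
    A = np.hstack([motion_jac, illum_basis])      # combined geometry + illumination model
    r = warped_patch - template                   # residual to be explained
    w = np.ones_like(r)                           # start with uniform pixel weights
    for _ in range(3):                            # a few reweighted least-squares passes
        x, *_ = np.linalg.lstsq(A * w[:, None], r * w, rcond=None)
        e = r - A @ x                             # unexplained part of the residual
        w = np.where(np.abs(e) < huber_k, 1.0, huber_k / np.abs(e))  # Huber weights
    k = motion_jac.shape[1]
    return x[:k], x[k:]
```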


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Advances in computational stereo

Myron Z. Brown; Darius Burschka; Gregory D. Hager

Extraction of three-dimensional structure of a scene from stereo images is a problem that has been studied by the computer vision community for decades. Early work focused on the fundamentals of image correspondence and stereo geometry. Stereo research has matured significantly throughout the years and many advances in computational stereo continue to be made, allowing stereo to be applied to new and more demanding problems. We review recent advances in computational stereo, focusing primarily on three important topics: correspondence methods, methods for occlusion, and real-time implementations. Throughout, we present tables that summarize and draw distinctions among key ideas and approaches. Where available, we provide comparative analyses and we make suggestions for analyses yet to be done.
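
As a toy illustration of the correspondence problem the survey reviews, here is a plain SSD block-matching sketch for a rectified grayscale pair; real systems add cost aggregation, occlusion reasoning, and sub-pixel refinement, and the window size and disparity range below are arbitrary choices.

```python
import numpy as np

def block_match(left, right, max_disp=32, win=5):
    """Return an integer disparity map for a rectified grayscale stereo pair."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.float64)
            costs = [np.sum((patch - right[y-half:y+half+1,
                                           x-d-half:x-d+half+1]) ** 2)
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # winner-take-all along the scanline
    return disp
```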


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Fast and globally convergent pose estimation from video images

Chien Ping Lu; Gregory D. Hager; Eric Mjolsness

Determining the rigid transformation relating 2D images to known 3D geometry is a classical problem in photogrammetry and computer vision. Heretofore, the best methods for solving the problem have relied on iterative optimization methods which cannot be proven to converge and/or which do not effectively account for the orthonormal structure of rotation matrices. We show that the pose estimation problem can be formulated as that of minimizing an error metric based on collinearity in object (as opposed to image) space. Using object space collinearity error, we derive an iterative algorithm which directly computes orthogonal rotation matrices and which is globally convergent. Experimentally, we show that the method is computationally efficient, that it is no less accurate than the best currently employed optimization methods, and that it outperforms all tested methods in robustness to outliers.
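
Below is a compact numpy sketch (not the authors' implementation) of the object-space orthogonal-iteration scheme the abstract outlines: alternate between the closed-form optimal translation for the current rotation and an SVD-based absolute-orientation update of the rotation, monitoring the object-space collinearity error. The identity initialization, iteration cap, and tolerance are illustrative, and image points are assumed to be in normalized camera coordinates.

```python
import numpy as np

def pose_orthogonal_iteration(model_pts, image_pts, iters=50, tol=1e-9):
    """model_pts: (n, 3) object points; image_pts: (n, 2) normalized image coords."""
    n = len(model_pts)
    v = np.hstack([image_pts, np.ones((n, 1))])              # line-of-sight directions
    V = np.einsum('ni,nj->nij', v, v) / np.einsum('ni,ni->n', v, v)[:, None, None]
    T_fac = np.linalg.inv(np.eye(3) - V.mean(axis=0)) / n    # factor in optimal t(R)
    P = model_pts - model_pts.mean(axis=0)                   # centered model points

    def optimal_t(R):
        rp = (R @ model_pts.T).T
        return T_fac @ ((V - np.eye(3)) @ rp[..., None]).sum(axis=0).ravel()

    R = np.eye(3)                                            # naive initialization
    prev_err = np.inf
    for _ in range(iters):
        rp = (R @ model_pts.T).T
        t = optimal_t(R)
        err = np.sum(((np.eye(3) - V) @ (rp + t)[..., None]) ** 2)  # object-space error
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        q = (V @ (rp + t)[..., None]).squeeze(-1)            # project onto sight lines
        Q = q - q.mean(axis=0)
        U, _, Wt = np.linalg.svd(Q.T @ P)                    # absolute orientation (SVD)
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Wt)]) @ Wt
    return R, optimal_t(R)
```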


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

Probabilistic data association methods for tracking complex visual objects

Christopher Rasmussen; Gregory D. Hager

We describe a framework that explicitly reasons about data association to improve tracking performance in many difficult visual environments. A hierarchy of tracking strategies results from ascribing ambiguous or missing data to: 1) noise-like visual occurrences, 2) persistent, known scene elements (i.e., other tracked objects), or 3) persistent, unknown scene elements. First, we introduce a randomized tracking algorithm adapted from an existing probabilistic data association filter (PDAF) that is resistant to clutter and follows agile motion. The algorithm is applied to three different tracking modalities (homogeneous regions, textured regions, and snakes) and is extensibly defined for the straightforward inclusion of other methods. Second, we add the capacity to track multiple objects by adapting a joint PDAF to vision, which oversees correspondence choices between same-modality trackers and image features. We then derive a related technique that allows mixed tracker modalities and handles object overlaps robustly. Finally, we represent complex objects as conjunctions of cues that are diverse both geometrically (e.g., parts) and qualitatively (e.g., attributes). Rigid and hinge constraints between part trackers and multiple descriptive attributes for individual parts render the whole object more distinctive, reducing susceptibility to mistracking. Results are given for diverse objects such as people, microscopic cells, and chess pieces.
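
For orientation, here is a simplified numpy sketch of the single-target PDAF update that this line of work builds on (gating is omitted, and the detection probability and clutter density are illustrative constants, not values from the paper): each candidate measurement contributes to the Kalman correction in proportion to its association probability, and a "none correct" hypothesis keeps clutter from capturing the track.

```python
import numpy as np

def pdaf_update(x_pred, P_pred, H, R, measurements, p_detect=0.9, clutter_density=1e-3):
    z_hat = H @ x_pred                                   # predicted measurement
    S = H @ P_pred @ H.T + R                             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)                  # Kalman gain
    if len(measurements) == 0:
        return x_pred, P_pred

    nus = np.array([z - z_hat for z in measurements])    # innovations, (n, m)
    m = S.shape[0]
    norm = 1.0 / np.sqrt((2 * np.pi) ** m * np.linalg.det(S))
    lik = np.array([norm * np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) for nu in nus])

    weights = np.append(p_detect * lik / clutter_density, 1.0 - p_detect)
    beta = weights / weights.sum()                       # association probabilities
    beta_meas, beta_none = beta[:-1], beta[-1]

    nu_comb = (beta_meas[:, None] * nus).sum(axis=0)     # probability-weighted innovation
    x_new = x_pred + K @ nu_comb

    # covariance: blend of predicted and updated covariance, plus innovation spread
    P_upd = P_pred - K @ S @ K.T
    spread = K @ ((beta_meas[:, None, None] * np.einsum('ni,nj->nij', nus, nus)).sum(axis=0)
                  - np.outer(nu_comb, nu_comb)) @ K.T
    P_new = beta_none * P_pred + (1.0 - beta_none) * P_upd + spread
    return x_new, P_new
```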


Computer Vision and Pattern Recognition | 2009

Histograms of oriented optical flow and Binet-Cauchy kernels on nonlinear dynamical systems for the recognition of human actions

Rizwan Chaudhry; Avinash Ravichandran; Gregory D. Hager; René Vidal

System theoretic approaches to action recognition model the dynamics of a scene with linear dynamical systems (LDSs) and perform classification using metrics on the space of LDSs, e.g. Binet-Cauchy kernels. However, such approaches are only applicable to time series data living in a Euclidean space, e.g. joint trajectories extracted from motion capture data or feature point trajectories extracted from video. Much of the success of recent object recognition techniques relies on the use of more complex feature descriptors, such as SIFT descriptors or HOG descriptors, which are essentially histograms. Since histograms live in a non-Euclidean space, we can no longer model their temporal evolution with LDSs, nor can we classify them using a metric for LDSs. In this paper, we propose to represent each frame of a video using a histogram of oriented optical flow (HOOF) and to recognize human actions by classifying HOOF time series. For this purpose, we propose a generalization of the Binet-Cauchy kernels to nonlinear dynamical systems (NLDS) whose output lives in a non-Euclidean space, e.g. the space of histograms. This can be achieved by using kernels defined on the original non-Euclidean space, leading to a well-defined metric for NLDSs. We use these kernels for the classification of actions in video sequences using HOOF as the output of the NLDS. We evaluate our approach to the recognition of human actions in several scenarios and achieve encouraging results.
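
A small numpy sketch of the per-frame HOOF descriptor described above: each flow vector is binned by its angle with the horizontal axis, left/right mirrored directions are folded into the same bin, contributions are weighted by magnitude, and the histogram is normalized to sum to one. The dense optical flow is assumed to come from any off-the-shelf method, and the bin count is an illustrative choice.

```python
import numpy as np

def hoof(flow, n_bins=30):
    """flow: (H, W, 2) array of (dx, dy) per pixel -> (n_bins,) histogram."""
    dx = flow[..., 0].ravel()
    dy = flow[..., 1].ravel()
    mag = np.hypot(dx, dy)
    # fold the sign of dx so mirrored motions land in the same bin
    angle = np.arctan2(dy, np.abs(dx))                      # in [-pi/2, pi/2]
    bins = np.clip(((angle + np.pi / 2) / np.pi * n_bins).astype(int), 0, n_bins - 1)
    hist = np.bincount(bins, weights=mag, minlength=n_bins) # magnitude-weighted counts
    total = hist.sum()
    return hist / total if total > 0 else hist
```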


Computer Vision and Image Understanding | 1998

X Vision

Gregory D. Hager; Kentaro Toyama

In the past several years, the speed of standard processors has reached the point where interesting problems requiring visual tracking can be carried out on standard workstations. However, relatively little attention has been devoted to developing visual tracking technology in its own right. In this article, we describe X Vision, a modular, portable framework for visual tracking. X Vision is designed to be a programming environment for real-time vision which provides high performance on standard workstations outfitted with a simple digitizer. X Vision consists of a small set of image-level tracking primitives, and a framework for combining tracking primitives to form complex tracking systems. Efficiency and robustness are achieved by propagating geometric and temporal constraints to the feature detection level, where image warping and specialized image processing are combined to perform feature detection quickly and robustly. Over the past several years, we have used X Vision to construct several vision-based systems. We present some of these applications as an illustration of how useful, robust tracking systems can be constructed by simple combinations of a few basic primitives combined with the appropriate task-specific constraints.


Computer Vision and Pattern Recognition | 2004

Multiple kernel tracking with SSD

Gregory D. Hager; Maneesh Dewan; Charles V. Stewart

Kernel-based objective functions optimized using the mean shift algorithm have been demonstrated as an effective means of tracking in video sequences. The resulting algorithms combine the robustness and invariance properties afforded by traditional density-based measures of image similarity, while connecting these techniques to continuous optimization algorithms. This paper demonstrates a connection between kernel-based algorithms and more traditional template tracking methods. There is a well-known equivalence between the kernel-based objective function and an SSD-like measure on kernel-modulated histograms. It is shown that under suitable conditions, the SSD-like measure can be optimized using Newton-style iterations. This method of optimization is more efficient (requires fewer steps to converge) than mean shift and makes fewer assumptions on the form of the underlying kernel structure. In addition, the methods naturally extend to objective functions optimizing more elaborate parametric motion models based on multiple spatially distributed kernels. We demonstrate multi-kernel methods on a variety of examples ranging from tracking of unstructured objects in image sequences to stereo tracking of structured objects to compute full 3D spatial location.
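
To make the SSD-on-kernel-modulated-histograms idea concrete, here is a hedged sketch that matches square-rooted histograms and takes Gauss-Newton steps on the window center. It deliberately simplifies the paper's formulation: a single kernel instead of multiple spatially distributed kernels, and a finite-difference Jacobian instead of the analytic one; the kernel radius, bin count, and step counts are illustrative, and 8-bit grayscale input is assumed.

```python
import numpy as np

def kernel_histogram(image, center, radius=20, n_bins=16):
    """Epanechnikov-weighted intensity histogram around `center` = (x, y)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = ((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / radius ** 2
    k = np.maximum(1.0 - d2, 0.0)                           # Epanechnikov kernel weights
    bins = (image.astype(float) / 256 * n_bins).astype(int).clip(0, n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=k.ravel(), minlength=n_bins)
    return hist / max(hist.sum(), 1e-12)

def track_ssd(image, target_hist, center, steps=10, eps=0.5):
    c = np.asarray(center, dtype=float)
    q = np.sqrt(target_hist)
    for _ in range(steps):
        r = q - np.sqrt(kernel_histogram(image, c))         # residual on sqrt histograms
        J = np.empty((len(q), 2))
        for i in range(2):                                  # finite-difference Jacobian
            dc = np.zeros(2); dc[i] = eps
            J[:, i] = (np.sqrt(kernel_histogram(image, c + dc))
                       - np.sqrt(kernel_histogram(image, c - dc))) / (2 * eps)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)        # Gauss-Newton update
        c += step
        if np.linalg.norm(step) < 0.05:
            break
    return c
```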


IEEE Transactions on Robotics | 2004

Vision-assisted control for manipulation using virtual fixtures

Alessandro Bettini; Panadda Marayong; Samuel Lang; Allison M. Okamura; Gregory D. Hager

We present the design and implementation of a vision-based system for cooperative manipulation at millimeter to micrometer scales. The system is based on an admittance control algorithm that implements a broad class of guidance modes called virtual fixtures. A virtual fixture, like a real fixture, limits the motion of a tool to a prescribed class or range of motions. We describe how both hard (unyielding) and soft (yielding) virtual fixtures can be implemented in this control framework. We then detail the construction of virtual fixtures for point positioning and curve following as well as extensions of these to tubes, cones, and sequences thereof. We also describe an implemented system using the JHU Steady Hand Robot. The system uses computer vision as a sensor for providing a reference trajectory, and the virtual fixture control algorithm then provides haptic feedback to implement direct, shared manipulation. We provide extensive experimental results detailing both system performance and the effects of virtual fixtures on human speed and accuracy.
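
A minimal numpy sketch of the admittance-control idea behind such guidance virtual fixtures: the commanded tool velocity follows the user's applied force, but the component orthogonal to the preferred direction is attenuated by a compliance factor (0 gives a hard fixture, values in (0, 1) give a soft one). The gain and compliance values here are illustrative, not the paper's.

```python
import numpy as np

def fixture_velocity(force, preferred_dir, gain=0.005, compliance=0.2):
    """force: (3,) user-applied force; preferred_dir: (3,) tangent of the reference path."""
    d = preferred_dir / np.linalg.norm(preferred_dir)
    along = np.outer(d, d)              # projector onto the preferred direction
    across = np.eye(3) - along          # projector onto the orthogonal complement
    D = along + compliance * across     # anisotropic admittance
    return gain * D @ force             # commanded tool velocity

# e.g. a hard fixture along x ignores off-axis force components entirely:
# fixture_velocity(np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]), compliance=0.0)
```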


Computer Vision and Pattern Recognition | 1996

Real-time tracking of image regions with changes in geometry and illumination

Gregory D. Hager; Peter N. Belhumeur

Historically, SSD or correlation-based visual tracking algorithms have been sensitive to changes in illumination and shading across the target region. This paper describes methods for implementing SSD tracking that is both insensitive to illumination variations and computationally efficient. We first describe a vector-space formulation of the tracking problem, showing how to recover geometric deformations. We then show that the same vector-space formulation can be used to account for changes in illumination. We combine geometry and illumination into an algorithm that tracks large image regions on live video sequences using no more computation than would be required to track with no accommodation for illumination changes. We present experimental results which compare the performance of SSD tracking with and without illumination compensation.


European Conference on Computer Vision | 2010

Adaptive and generic corner detection based on the accelerated segment test

Elmar Mair; Gregory D. Hager; Darius Burschka; Michael Suppa; Gerhard Hirzinger

The efficient detection of interesting features is a crucial step for various tasks in computer vision. Corners are favored cues due to their two-dimensional constraint and the fast algorithms available to detect them. Recently, a novel corner detection approach, FAST, has been presented which outperforms previous algorithms in both computational performance and repeatability. We will show how the accelerated segment test, which underlies FAST, can be significantly improved by making it more generic while increasing its performance. We do so by finding the optimal decision tree in an extended configuration space, and by demonstrating how specialized trees can be combined to yield an adaptive and generic accelerated segment test. The resulting method provides high performance for arbitrary environments and, unlike FAST, does not have to be adapted to a specific scene structure. We will also discuss how different test patterns affect the corner response of the accelerated segment test.
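
For reference, here is a plain-Python sketch of the basic (non-accelerated) segment test that FAST and this paper's adaptive variant speed up with learned decision trees: a pixel is a corner candidate if some arc of contiguous pixels on a radius-3 Bresenham circle is uniformly brighter or darker than the center by a threshold. The threshold and arc length below are illustrative defaults, and the pixel is assumed to lie at least 3 pixels from the image border.

```python
import numpy as np

# 16-pixel Bresenham circle of radius 3 around the candidate pixel
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def segment_test(image, y, x, threshold=20, arc_len=9):
    """True if (y, x) passes the segment test on a grayscale image."""
    center = int(image[y, x])
    ring = [int(image[y + dy, x + dx]) for dx, dy in CIRCLE]
    brighter = [v > center + threshold for v in ring]
    darker = [v < center - threshold for v in ring]
    for states in (brighter, darker):
        run, best = 0, 0
        for s in states + states[:arc_len - 1]:   # wrap around the circle
            run = run + 1 if s else 0
            best = max(best, run)
        if best >= arc_len:
            return True
    return False
```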

Collaboration


Dive into Gregory D. Hager's collaboration.

Top Co-Authors

Emad M. Boctor (Johns Hopkins University)

Masaru Ishii (Johns Hopkins University)

Rajesh Kumar (Johns Hopkins University)

Austin Reiter (Johns Hopkins University)