
Publication


Featured research published by Allan D. Jepson.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Robust online appearance models for visual tracking

Allan D. Jepson; David J. Fleet; Thomas F. El-Maraghi

We propose a framework for learning robust, adaptive, appearance models to be used for motion-based tracking of natural objects. The model adapts to slowly changing appearance, and it maintains a natural measure of the stability of the observed image structure during tracking. By identifying stable properties of appearance, we can weight them more heavily for motion estimation, while less stable properties can be proportionately downweighted. The appearance model involves a mixture of stable image structure, learned over long time courses, along with two-frame motion information and an outlier process. An online EM-algorithm is used to adapt the appearance model parameters over time. An implementation of this approach is developed for an appearance model based on the filter responses from a steerable pyramid. This model is used in a motion-based tracking algorithm to provide robustness in the face of image outliers, such as those caused by occlusions, while adapting to natural changes in appearance such as those due to facial expressions or variations in 3D pose.
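The online EM update at the heart of this model is compact enough to sketch. The code below is a minimal illustration of the idea, not the authors' implementation: each data stream (e.g., a steerable-pyramid filter response at one image location) is modeled as a mixture of a slowly adapted "stable" Gaussian and a uniform outlier process, with sufficient statistics updated recursively under an exponential forgetting factor. The two-component simplification (the paper's model also includes a two-frame "wandering" component), the class name, and the constants are assumptions for illustration.

```python
# A minimal sketch of an online EM appearance model for one data stream.
import numpy as np

class OnlineAppearanceStream:
    def __init__(self, init_value, alpha=0.05, outlier_density=0.05):
        self.mu = float(init_value)   # mean of the stable component
        self.var = 1.0                # variance of the stable component
        self.pi_stable = 0.9          # mixing weight of the stable component
        self.alpha = alpha            # forgetting factor (~1/alpha frame window)
        self.outlier_density = outlier_density  # uniform outlier likelihood

    def update(self, d):
        # E-step: ownership probability of the stable component for datum d.
        g = np.exp(-0.5 * (d - self.mu) ** 2 / self.var) / np.sqrt(2 * np.pi * self.var)
        num = self.pi_stable * g
        own = num / (num + (1.0 - self.pi_stable) * self.outlier_density)
        # M-step with forgetting: recursive update of the sufficient statistics
        # instead of storing past frames.
        a = self.alpha * own
        self.mu = (1.0 - a) * self.mu + a * d
        self.var = (1.0 - a) * self.var + a * (d - self.mu) ** 2
        self.pi_stable = (1.0 - self.alpha) * self.pi_stable + self.alpha * own
        # High ownership marks stable structure; the tracker weights it more
        # heavily in motion estimation.
        return own
```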


International Journal of Computer Vision | 1990

Computation of component image velocity from local phase information

David J. Fleet; Allan D. Jepson

We present a technique for the computation of 2D component velocity from image sequences. Initially, the image sequence is represented by a family of spatiotemporal velocity-tuned linear filters. Component velocity, computed from spatiotemporal responses of identically tuned filters, is expressed in terms of the local first-order behavior of surfaces of constant phase. Justification for this definition is discussed from the perspectives of both 2D image translation and deviations from translation that are typical in perspective projections of 3D scenes. The resulting technique is predominantly linear, efficient, and suitable for parallel processing. Moreover, it is local in space-time, robust with respect to noise, and permits multiple estimates within a single neighborhood. Promising quantitative results are reported from experiments with realistic image sequences, including cases with sizeable perspective deformation.
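The definition at the core of the technique can be sketched directly: component velocity is the velocity of level contours of constant phase, v = -phi_t * grad(phi) / |grad(phi)|^2, where phi is the phase of a velocity-tuned bandpass response. The code below is a minimal illustration under that definition; the identity phi_x = Im(R_x R*) / |R|^2, which avoids explicit phase unwrapping for complex responses, is standard, but the finite-difference discretization and function names are assumptions.

```python
# A minimal sketch of component velocity from local phase.
import numpy as np

def phase_derivative(R, axis):
    # Phase derivative computed from the complex response R, avoiding
    # explicit unwrapping: d(phi)/dx = Im( dR/dx * conj(R) ) / |R|^2.
    dR = np.gradient(R, axis=axis)
    return np.imag(dR * np.conj(R)) / (np.abs(R) ** 2 + 1e-12)

def component_velocity(R):
    """R: complex 3-D array of filter responses indexed [t, y, x]."""
    phi_t = phase_derivative(R, axis=0)
    phi_y = phase_derivative(R, axis=1)
    phi_x = phase_derivative(R, axis=2)
    grad_sq = phi_x ** 2 + phi_y ** 2 + 1e-12
    # Velocity of the level contours of constant phase, directed along
    # the spatial phase gradient (the component velocity).
    return -phi_t * phi_x / grad_sq, -phi_t * phi_y / grad_sq
```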


European Conference on Computer Vision | 1996

EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation

Michael J. Black; Allan D. Jepson

This paper describes an approach for tracking rigid and articulated objects using a view-based representation. The approach builds on and extends work on eigenspace representations, robust estimation techniques, and parameterized optical flow estimation. First, we note that the least-squares image reconstruction of standard eigenspace techniques has a number of problems, and we reformulate the reconstruction problem as one of robust estimation. Second, we define a “subspace constancy assumption” that allows us to exploit techniques for parameterized optical flow estimation to simultaneously solve for the view of an object and the affine transformation between the eigenspace and the image. To account for large affine transformations between the eigenspace and the image we define a multi-scale eigenspace representation and a coarse-to-fine matching strategy. Finally, we use these techniques to track objects over long image sequences in which the objects simultaneously undergo both affine image motions and changes of view. In particular we use this “EigenTracking” technique to track and recognize the gestures of a moving hand.
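The robust reconstruction step can be sketched briefly. In standard eigenspace matching the coefficients are the least-squares projection c = B^T d; the reformulation replaces the quadratic error with a robust norm so that occluded or cluttered pixels are downweighted. The sketch below uses the Geman-McClure norm solved by iteratively reweighted least squares; the scale schedule and function names are illustrative assumptions, and the coupled affine warp estimation is omitted.

```python
# A minimal sketch of robust subspace coefficient estimation (IRLS).
import numpy as np

def robust_coefficients(B, d, sigma=0.1, n_iters=20):
    """B: (n_pixels, n_basis) orthonormal eigenbasis; d: (n_pixels,) image."""
    c = B.T @ d                          # least-squares initialization
    for _ in range(n_iters):
        r = d - B @ c                    # per-pixel reconstruction residual
        # Geman-McClure influence gives weights w(r) = 2*sigma^2 / (sigma^2 + r^2)^2,
        # so large residuals (outliers) contribute almost nothing.
        w = 2 * sigma ** 2 / (sigma ** 2 + r ** 2) ** 2
        # Weighted least squares: solve (B^T W B) c = B^T W d.
        BW = B * w[:, None]
        c = np.linalg.solve(B.T @ BW, BW.T @ d)
        sigma = max(0.95 * sigma, 0.01)  # gradually tighten the scale
    return c
```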


International Journal of Computer Vision | 1992

Subspace methods for recovering rigid motion I: algorithm and implementation

David J. Heeger; Allan D. Jepson

As an observer moves and explores the environment, the visual stimulation in his/her eye is constantly changing. Somehow he/she is able to perceive the spatial layout of the scene, and to discern his/her movement through space. Computational vision researchers have been trying to solve this problem for a number of years with only limited success. It is a difficult problem to solve because the optical flow field is nonlinearly related to the 3D motion and depth parameters. Here, we show that the nonlinear equation describing the optical flow field can be split by an exact algebraic manipulation to form three sets of equations. The first set relates the flow field to only the translational component of 3D motion. Thus, depth and rotation need not be known or estimated prior to solving for translation. Once the translation has been recovered, the second set of equations can be used to solve for rotation. Finally, depth can be estimated with the third set of equations, given the recovered translation and rotation. The algorithm applies to the general case of arbitrary motion with respect to an arbitrary scene. It is simple to compute, and it is plausible biologically. The results reported in this article demonstrate the potential of our new approach, and show that it performs favorably when compared with two other well-known algorithms.
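The bilinear structure that makes this split possible is worth stating explicitly. Under perspective projection with focal length f, the motion field is linear in the inverse depth and in the rotation once the translation is fixed. The equations below are the standard rigid-motion field equations; the notation is an assumption following common usage, not a quotation of the paper.

```latex
% Motion field of a rigid scene: bilinear in inverse depth p = 1/Z and
% translation T, linear in rotation \Omega.
\vec{\theta}(x,y) \;=\; p(x,y)\,A(x,y)\,\vec{T} \;+\; B(x,y)\,\vec{\Omega},
\qquad
A(x,y) = \begin{pmatrix} -f & 0 & x \\ 0 & -f & y \end{pmatrix},
\quad
B(x,y) = \begin{pmatrix}
  \dfrac{xy}{f} & -\left(f + \dfrac{x^{2}}{f}\right) & y \\[6pt]
  f + \dfrac{y^{2}}{f} & -\dfrac{xy}{f} & -x
\end{pmatrix}.
```

For any candidate translation T the right-hand side is linear in the unknowns p and Omega, so each candidate can be scored by a linear least-squares residual; minimizing that residual over T first, then solving the two remaining linear problems, recovers rotation and then depth in turn, which is the three-way split the abstract describes.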


CVGIP: Image Understanding | 1991

Phase-based disparity measurement

David J. Fleet; Allan D. Jepson; Michael Jenkin

The measurement of image disparity is a fundamental precursor to binocular depth estimation. Recently, Jenkin and Jepson (in Computational Processes in Human Vision (V. Pylyshyn, Ed.), Ablex, New Jersey, 1988) and Sanger (Biol. Cybernet. 59, 1988, 405–418) described promising methods based on the output phase behavior of bandpass Gabor filters. Here we discuss further justification for such techniques based on the stability of bandpass phase behavior as a function of typical distortions that exist between left and right views. In addition, despite this general stability, we show that phase signals are occasionally very sensitive to spatial position and to variations in scale, in which cases incorrect measurements occur. We find that the primary cause for this instability is the existence of singularities in phase signals. With the aid of the local frequency of the filter output (provided by the phase derivative) and the local amplitude information, the regions of phase instability near the singularities are detected so that potentially incorrect measurements can be identified. In addition, we show how the local frequency can be used away from the singularity neighbourhoods to improve the accuracy of the disparity estimates. Some experimental results are reported.
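The measurement itself reduces to a phase difference divided by a local frequency, with the singularity detection gating which estimates are kept. Below is a minimal sketch of that pipeline for a single scanline, assuming complex Gabor responses; the validity thresholds are simplified stand-ins for the paper's detection rules.

```python
# A minimal sketch of phase-based disparity with singularity masking.
import numpy as np

def phase_disparity(R_l, R_r, k0, band=0.5, eps=1e-12):
    """R_l, R_r: 1-D complex bandpass responses along corresponding
    scanlines; k0: the filter's peak tuning frequency (radians/pixel)."""
    dphi = np.angle(R_l * np.conj(R_r))   # wrapped left-right phase difference
    # Local frequency from the phase derivative, without unwrapping:
    # k = Im( dR/dx * conj(R) ) / |R|^2.
    dR = np.gradient(R_l)
    k = np.imag(dR * np.conj(R_l)) / (np.abs(R_l) ** 2 + eps)
    disparity = dphi / np.where(np.abs(k) > eps, k, np.nan)
    # Reject estimates near phase singularities: vanishing amplitude or
    # local frequency far outside the filter's passband.
    valid = (np.abs(R_l) > 1e-6) & (np.abs(k - k0) < band * k0)
    return np.where(valid, disparity, np.nan)
```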


Computer Vision and Pattern Recognition | 1993

Mixture models for optical flow computation

Allan D. Jepson; Michael J. Black

The computation of optical flow relies on merging information available over an image patch to form an estimate of 2-D image velocity at a point. This merging process raises many issues. These include the treatment of outliers in component velocity measurements and the modeling of multiple motions within a patch which arise from occlusion boundaries or transparency. A new approach for dealing with these issues is presented. It is based on the use of a probabilistic mixture model to explicitly represent multiple motions within a patch. A simple extension of the EM-algorithm is used to compute a maximum likelihood estimate for the various motion parameters. Preliminary experiments indicate that this approach is computationally efficient, and that it can provide robust estimates of the optical flow values in the presence of outliers and multiple motions.
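A minimal sketch of the EM loop follows, assuming translational motion hypotheses and gradient-based brightness-constancy constraints: each pixel's constraint Ix*u + Iy*v + It = 0 is owned by one of K motions or by a constant-likelihood outlier process, and the M-step refits each motion by ownership-weighted least squares. K, the Gaussian noise model, and the outlier constant are assumptions for illustration; the paper's components need not be translational.

```python
# A minimal sketch of EM for a mixture of motions within a patch.
import numpy as np

def em_motion_mixture(Ix, Iy, It, K=2, n_iters=30, sigma=1.0, p_out=0.01):
    """Ix, Iy, It: 1-D arrays of image derivatives over the patch pixels."""
    rng = np.random.default_rng(0)
    U = rng.normal(size=(K, 2))          # K motion hypotheses (u, v)
    pi = np.full(K, 1.0 / K)             # mixing proportions
    for _ in range(n_iters):
        # E-step: ownership of each constraint by each motion hypothesis.
        r = U[:, 0:1] * Ix[None, :] + U[:, 1:2] * Iy[None, :] + It[None, :]
        lik = pi[:, None] * np.exp(-0.5 * (r / sigma) ** 2)
        own = lik / (lik.sum(axis=0, keepdims=True) + p_out)
        # M-step: ownership-weighted least squares for each motion,
        # minimizing sum_i w_i * (Ix_i*u + Iy_i*v + It_i)^2.
        for k in range(K):
            w = own[k]
            G = np.array([[np.sum(w * Ix * Ix), np.sum(w * Ix * Iy)],
                          [np.sum(w * Ix * Iy), np.sum(w * Iy * Iy)]])
            b = -np.array([np.sum(w * Ix * It), np.sum(w * Iy * It)])
            U[k] = np.linalg.solve(G + 1e-9 * np.eye(2), b)
        pi = own.mean(axis=1)            # update mixing proportions
    return U, own
```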


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1993

Stability of phase information

David J. Fleet; Allan D. Jepson

This paper concerns the robustness of local phase information for measuring image velocity and binocular disparity. It addresses the dependence of phase behavior on the initial filters as well as the image variations that exist between different views of a 3D scene. We are particularly interested in the stability of phase with respect to geometric deformations, and its linearity as a function of spatial position. These properties are important to the use of phase information, and are shown to depend on the form of the filters as well as their frequency bandwidths. Phase instabilities are also discussed using the model of phase singularities described by Jepson and Fleet. In addition to phase-based methods, these results are directly relevant to differential optical flow methods and zero-crossing tracking.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1996

Estimating optical flow in segmented images using variable-order parametric models with local deformations

Michael J. Black; Allan D. Jepson

This paper presents a new model for estimating optical flow based on the motion of planar regions plus local deformations. The approach exploits brightness information to organize and constrain the interpretation of the motion by using segmented regions of piecewise smooth brightness to hypothesize planar regions in the scene. Parametric flow models are estimated in these regions in a two-step process which first computes a coarse fit and then estimates the appropriate parametrization of the motion of the region. The initial fit is refined using a generalization of the standard area-based regression approaches. Since the assumption of planarity is likely to be violated, we allow local deformations from the planar assumption in the same spirit as physically-based approaches which model shape using coarse parametric models plus local deformations. This parametric plus deformation model exploits the strong constraints of parametric approaches while retaining the adaptive nature of regularization approaches. Experimental results on a variety of images show that the model produces accurate flow estimates, while the incorporation of brightness segmentation provides precise localization of motion boundaries.
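The "variable-order" family the region fitter selects among can be written compactly. Below is a minimal sketch of the three standard models (translational, affine, and the eight-parameter planar model, whose quadratic terms are exact for a rigid planar patch under perspective); the coefficient ordering is an assumption for illustration, and the local-deformation layer is omitted.

```python
# A minimal sketch of variable-order parametric flow models.
import numpy as np

def parametric_flow(a, x, y, order="planar"):
    """Evaluate the region's flow model at pixel coordinates (x, y)."""
    if order == "translational":      # a = (u, v): 2 parameters
        return a[0] * np.ones_like(x), a[1] * np.ones_like(x)
    if order == "affine":             # a = (a0..a5): 6 parameters
        u = a[0] + a[1] * x + a[2] * y
        v = a[3] + a[4] * x + a[5] * y
        return u, v
    # Planar model: 8 parameters, quadratic terms capture perspective
    # effects of a rigid planar patch.
    u = a[0] + a[1] * x + a[2] * y + a[6] * x ** 2 + a[7] * x * y
    v = a[3] + a[4] * x + a[5] * y + a[6] * x * y + a[7] * y ** 2
    return u, v
```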


International Conference on Machine Learning | 2004

Generative modeling for continuous non-linearly embedded visual inference

Cristian Sminchisescu; Allan D. Jepson

Many difficult visual perception problems, like 3D human motion estimation, can be formulated in terms of inference using complex generative models, defined over high-dimensional state spaces. Despite progress, optimizing such models is difficult because prior knowledge cannot be flexibly integrated in order to reshape an initially designed representation space. Nonlinearities, the inherent sparsity of high-dimensional training sets, and the lack of global continuity make dimensionality reduction challenging and low-dimensional search inefficient. To address these problems, we present a learning and inference algorithm that restricts visual tracking to automatically extracted, non-linearly embedded, low-dimensional spaces. This formulation produces a layered generative model with a reduced state representation that can be estimated using efficient continuous optimization methods. Our prior flattening method allows a simple analytic treatment of low-dimensional intrinsic curvature constraints and consistent interpolation operations. We analyze reduced manifolds for human interaction activities, and demonstrate that the algorithm learns continuous generative models that are useful for tracking and for the reconstruction of 3D human motion in monocular video.
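The general pattern is easy to sketch: learn a low-dimensional embedding of the training poses, then run continuous optimization in the embedded space and decode back to the full state for likelihood evaluation. The sketch below uses a linear (PCA) embedding purely as a stand-in for the paper's non-linear one, and a generic optimizer in place of the paper's inference machinery; every name and choice here is an assumption for illustration.

```python
# A minimal sketch of inference in a learned low-dimensional state space.
import numpy as np
from scipy.optimize import minimize

def fit_embedding(poses, d=5):
    """poses: (n_samples, D) training poses.  Returns encode/decode maps.
    PCA is a linear stand-in for the paper's non-linear embedding."""
    mu = poses.mean(axis=0)
    _, _, Vt = np.linalg.svd(poses - mu, full_matrices=False)
    B = Vt[:d].T                          # (D, d) basis of the reduced space
    return (lambda x: (x - mu) @ B,       # encode: D -> d
            lambda z: mu + z @ B.T)       # decode: d -> D

def track_frame(z0, decode, neg_log_lik):
    """Optimize the image likelihood over the reduced state, decoding each
    candidate back to a full pose for evaluation."""
    res = minimize(lambda z: neg_log_lik(decode(z)), z0, method="BFGS")
    return res.x
```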


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1986

Ambient illumination and the determination of material changes

Ron Gershon; Allan D. Jepson; John K. Tsotsos

The task of distinguishing material changes from shadow boundaries in chromatic images is discussed. Although there have been previous attempts at providing solutions to this problem, the assumptions that were adopted were too restrictive. Using a simple reflection model, we show that the ambient illumination cannot be assumed to have the same spectral characteristics as the incident illumination, since it may lead to the classification of shadow boundaries as material changes. In such cases, we show that it is necessary to take into account the spectral properties of the ambient illumination in order to develop a technique that is more robust and stable than previous techniques. This technique uses a biologically motivated model of color vision and, in particular, a set of chromatic-opponent and double-opponent center-surround operators. We apply this technique to simulated test patterns as well as to a chromatic image. It is shown that, given some knowledge about the strength of the ambient illumination, this method provides a better classification of shadow boundaries and material changes.
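A double-opponent center-surround operator of the kind the paper builds on can be sketched in a few lines, assuming a simple R-G opponency and difference-of-Gaussians spatial structure (both are illustrative simplifications of the paper's biologically motivated operators): the response is large where the chromatic balance changes, as at a material boundary, and small across a shadow boundary that scales all channels by roughly the same factor.

```python
# A minimal sketch of a double-opponent R-G center-surround operator.
import numpy as np
from scipy.ndimage import gaussian_filter

def double_opponent_rg(rgb, sigma_center=1.0, sigma_surround=3.0):
    """rgb: float array (H, W, 3).  Returns the double-opponent response map."""
    r, g = rgb[..., 0], rgb[..., 1]
    opponent = r - g                            # single-opponent R-G signal
    # Center minus surround of the opponent signal models an R+G- center
    # with an R-G+ surround.
    center = gaussian_filter(opponent, sigma_center)
    surround = gaussian_filter(opponent, sigma_surround)
    return center - surround
```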

Collaboration


Dive into Allan D. Jepson's collaboration.

Top Co-Authors

David J. Heeger

Center for Neural Science
