Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zulfiqar Hassan Khan is active.

Publication


Featured research published by Zulfiqar Hassan Khan.


IEEE Transactions on Circuits and Systems for Video Technology | 2011

Robust Visual Object Tracking Using Multi-Mode Anisotropic Mean Shift and Particle Filters

Zulfiqar Hassan Khan; Irene Yu-Hua Gu; Andrew G. Backhouse

This paper addresses issues in object tracking where videos contain complex scenarios. We propose a novel tracking scheme that jointly employs particle filters and multi-mode anisotropic mean shift. The tracker estimates the dynamic shape and appearance of objects and also performs online learning of the reference object. Several partition prototypes and fully tunable parameters are applied to the rectangular object bounding box to improve the estimates of shape and of the multiple appearance modes in the object. The main contributions of the proposed scheme are: 1) a novel approach for online learning of reference object distributions; 2) the use of a five-parameter set (2-D central location, width, height, and orientation) of the rectangular bounding box as tunable variables in the joint tracking scheme; 3) the derivation of the multi-mode anisotropic mean shift for a partitioned rectangular bounding box and several partition prototypes; and 4) the computation of the bounding box parameters from the multi-mode mean shift estimates by combining eigendecomposition, the geometry of sub-areas, and weighted averaging. This leads to more accurate and efficient tracking, where only a small number of particles (<20) is required. Experiments have been conducted on a range of videos captured by a dynamic or stationary camera, where the target object may experience long-term partial occlusions, intersections with other objects of similar color distribution, object deformation accompanied by shape, pose, or abrupt motion-speed changes, and cluttered backgrounds. Comparisons with existing methods and performance evaluations are also presented. Test results show marked improvement of the proposed method in terms of robustness to occlusions, tracking drift, and the tightness and accuracy of the tracked bounding box. Limitations of the method are also discussed.
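
As a rough illustration of contribution 4, the sketch below (plain numpy, not the authors' exact derivation) recovers a five-parameter box from mean-shift pixel weights: the weighted average of pixel positions gives the center, and an eigendecomposition of the weighted covariance gives the orientation and axis lengths. The 2-sigma axis scaling and the uniform toy weights are illustrative assumptions.

```python
import numpy as np

def box_from_weighted_pixels(xy, w):
    """Estimate a 5-parameter box (cx, cy, width, height, angle) from
    pixel coordinates `xy` (N, 2) and mean-shift weights `w` (N,).
    Sketch: weighted mean -> center; eigendecomposition of the
    weighted covariance -> orientation and axis lengths."""
    w = w / w.sum()
    center = (w[:, None] * xy).sum(axis=0)            # weighted average
    d = xy - center
    cov = (w[:, None, None] * np.einsum('ni,nj->nij', d, d)).sum(axis=0)
    evals, evecs = np.linalg.eigh(cov)                # ascending eigenvalues
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])      # major-axis direction
    # 2-sigma extents along the principal axes (an assumed scale factor)
    height, width = 2.0 * np.sqrt(evals[0]), 2.0 * np.sqrt(evals[1])
    return center[0], center[1], width, height, angle

# toy usage: pixels drawn from an elongated, rotated blob
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 2)) * [8.0, 3.0]
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts = pts @ R.T + [100.0, 50.0]
print(box_from_weighted_pixels(pts, np.ones(len(pts))))
```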


IEEE Transactions on Information Forensics and Security | 2010

Joint Feature Correspondences and Appearance Similarity for Robust Visual Object Tracking

Zulfiqar Hassan Khan; Irene Yu-Hua Gu

A novel visual object tracking scheme is proposed that uses joint point feature correspondences and object appearance similarity. For point feature-based tracking, we propose a candidate tracker that simultaneously exploits two separate sets of point feature correspondences, one in the foreground and one in the surrounding background, where background features are exploited to indicate occlusions. Feature points in these two sets are then dynamically maintained. For object appearance-based tracking, we propose a candidate tracker based on an enhanced anisotropic mean shift with a fully tunable (five degrees of freedom) bounding box that is partially guided by the above feature point tracker. Both candidate trackers contain a reinitialization process to reset the tracker in order to prevent accumulated tracking errors from propagating across frames. In addition, a novel online learning method is introduced into the enhanced mean shift-based candidate tracker. The reference object distribution is updated in each time interval if there is an indication of stable and reliable tracking without background interference. By dynamically updating the reference object model, tracking is further improved through a more accurate object appearance similarity measure. An optimal selection criterion is then applied to choose the final tracker from the results of these candidate trackers. Experiments have been conducted on several videos containing a range of complex scenarios. To evaluate the performance, the proposed scheme is further evaluated using three objective criteria and compared with two existing trackers. All our results show that the proposed scheme is very robust and yields a marked improvement in terms of tracking drift, tightness, and accuracy of tracked bounding boxes, especially for complex video scenarios containing long-term partial occlusions or intersections, deformation, or background clutter with color distributions similar to the foreground object.
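
The abstract does not spell out the selection criterion; as a minimal sketch of appearance-similarity-based selection between candidate trackers, the following compares candidate boxes against a reference color histogram by their Bhattacharyya coefficient. The histogram quantization and helper names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return np.sum(np.sqrt(p * q))

def color_hist(patch, bins=8):
    """Normalized joint histogram over quantized RGB; patch: (H, W, 3) uint8."""
    q = (patch.astype(int) * bins) // 256             # quantize each channel
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    h = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return h / h.sum()

def select_tracker(frame, ref_hist, boxes):
    """Pick the candidate box whose appearance best matches the reference.
    boxes: list of (x, y, w, h); returns (best_box, similarity)."""
    scores = []
    for (x, y, w, h) in boxes:
        patch = frame[y:y + h, x:x + w]
        scores.append(bhattacharyya(color_hist(patch), ref_hist))
    best = int(np.argmax(scores))
    return boxes[best], scores[best]
```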


IEEE Transactions on Systems, Man, and Cybernetics | 2013

Nonlinear Dynamic Model for Visual Object Tracking on Grassmann Manifolds With Partial Occlusion Handling

Zulfiqar Hassan Khan; Irene Yu-Hua Gu

This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits tracking performance. The proposed method tackles these problems with the following main novelties: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separately for the tracking process and the online learning process, realized by employing two particle filters: one on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly in scenarios where target objects undergo significant nonplanar pose changes and long-term partial occlusions. Comparisons and evaluations against eight existing state-of-the-art and closely related manifold/nonmanifold trackers provide further support for the proposed scheme.
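
For readers unfamiliar with the manifold machinery: the dynamic model propagates an appearance basis matrix along a tangent "velocity". Below is a minimal numpy sketch of the standard SVD-based Grassmann exponential map that performs such a move; the toy dimensions are illustrative, and the paper's actual state and noise models are not reproduced here.

```python
import numpy as np

def grassmann_exp(Y, H):
    """Exponential map on the Grassmann manifold Gr(p, n).
    Y: (n, p) orthonormal basis (a point); H: (n, p) tangent vector
    with Y.T @ H = 0. Returns the basis reached by following the
    geodesic from Y in direction H for unit time (standard SVD formula)."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Y_new = Y @ Vt.T @ np.diag(np.cos(s)) @ Vt + U @ np.diag(np.sin(s)) @ Vt
    Q, _ = np.linalg.qr(Y_new)    # re-orthonormalize against numerical drift
    return Q

# toy usage: a 2-D appearance subspace in R^10 nudged along a tangent direction
rng = np.random.default_rng(1)
Y, _ = np.linalg.qr(rng.normal(size=(10, 2)))
H = rng.normal(size=(10, 2)) * 0.1
H -= Y @ (Y.T @ H)                # project into the tangent space at Y
Y_next = grassmann_exp(Y, H)
print(np.allclose(Y_next.T @ Y_next, np.eye(2), atol=1e-8))
```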


International Conference on Multimedia and Expo | 2009

Joint anisotropic mean shift and consensus point feature correspondences for object tracking in video

Zulfiqar Hassan Khan; Irene Yu-Hua Gu; Tiesheng Wang; Andrew G. Backhouse

We propose a novel tracking scheme that jointly employs point feature correspondences and object appearance similarity. For selecting point correspondences, we use a subset of scale-invariant point features from SIFT that agree with a pre-defined affine transformation. The selected consensus points are then used for pre-selecting candidate regions. For appearance-similarity-based tracking, we employ an existing anisotropic mean shift, from which formulas for estimating the bounding box parameters (width, height, orientation, and center) are derived. A switching criterion handles the situation where only a small number of point correspondences is found. Experiments and evaluations are performed on tracking moving objects in videos where objects may undergo partial occlusion, intersection, deformation, and pose changes, among other transformations. Our comparisons with two existing methods show that the proposed scheme yields marked improvement, especially in terms of reduced tracking drift, robustness to occlusions, and the tightness and accuracy of the tracked bounding box.
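
A minimal sketch of the consensus step described above: selecting the subset of SIFT correspondences that agree with a single affine transformation via a RANSAC-style loop. The iteration count and inlier threshold are illustrative assumptions, and the paper's exact selection procedure may differ.

```python
import numpy as np

def affine_consensus(src, dst, n_iter=200, tol=3.0, rng=None):
    """Select correspondences consistent with one 2-D affine map.
    src, dst: (N, 2) matched point coordinates. Returns a boolean inlier
    mask and a 2x3 affine matrix (RANSAC-style sketch)."""
    rng = rng or np.random.default_rng()
    N = len(src)
    S = np.hstack([src, np.ones((N, 1))])            # (N, 3) homogeneous
    best_mask, best_A = np.zeros(N, bool), None
    for _ in range(n_iter):
        idx = rng.choice(N, size=3, replace=False)   # minimal sample
        A, *_ = np.linalg.lstsq(S[idx], dst[idx], rcond=None)  # (3, 2)
        err = np.linalg.norm(S @ A - dst, axis=1)    # reprojection error
        mask = err < tol
        if mask.sum() > best_mask.sum():
            best_mask, best_A = mask, A
    if best_A is not None and best_mask.sum() >= 3:
        # refit on all inliers for the final estimate
        best_A, *_ = np.linalg.lstsq(S[best_mask], dst[best_mask], rcond=None)
    return best_mask, None if best_A is None else best_A.T
```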


International Conference on Image Processing | 2009

Joint particle filters and multi-mode anisotropic mean shift for robust tracking of video objects with partitioned areas

Zulfiqar Hassan Khan; Irene Yu-Hua Gu; Andrew G. Backhouse

We propose a novel scheme that jointly employs anisotropic mean shift and particle filters for tracking moving objects in video. The proposed anisotropic mean shift, applied to partitioned areas in a candidate object bounding box whose parameters (center, width, height, and orientation) are adjusted during the mean shift iterations, seeks multiple local modes in spatial-kernel-weighted color histograms. By using a Gaussian-distributed Bhattacharyya distance as the likelihood and the mean shift-updated parameters as the state vector, the particle filters become more efficient, tracking with only a small number of particles (<20). The combined scheme maintains the merits of both methods. Experiments conducted on videos containing deformable objects with long-term partial occlusions and intersections have shown robust tracking performance. Comparisons with two existing methods show marked improvement in terms of robustness to occlusions, the tightness and accuracy of the tracked box, and tracking drift.
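
A minimal sketch of the particle weighting described above: a Gaussian likelihood over the Bhattacharyya distance between each candidate histogram and the reference, followed by systematic resampling. The bandwidth sigma is an illustrative assumption.

```python
import numpy as np

def weight_particles(cand_hists, ref_hist, sigma=0.1):
    """Particle weights from a Gaussian over the Bhattacharyya distance.
    cand_hists: (P, B) normalized histograms, one per particle;
    ref_hist: (B,) reference histogram."""
    rho = np.sqrt(cand_hists * ref_hist).sum(axis=1)   # Bhattacharyya coeff.
    d2 = 1.0 - rho                                     # squared Bhat. distance
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()

def systematic_resample(weights, rng=None):
    """Return particle indices after systematic resampling."""
    rng = rng or np.random.default_rng()
    P = len(weights)
    positions = (rng.random() + np.arange(P)) / P
    return np.searchsorted(np.cumsum(weights), positions)
```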


International Conference on Computer Vision | 2011

Bayesian online learning on Riemannian manifolds using a dual model with applications to video object tracking

Zulfiqar Hassan Khan; Irene Yu-Hua Gu

This paper proposes a new Bayesian online learning method on a Riemannian manifold for video objects. The basic idea is to consider the dynamic appearance of an object as a point moving on a manifold, where a dual model is applied to estimate the posterior trajectory of this moving point at each time instant under the Bayesian framework. The dual model uses two state variables for modeling the online learning process on Riemannian manifolds: one for object appearances on the Riemannian manifold, the other for velocity vectors in the tangent planes of the manifold. The key difference of our method from most existing Riemannian manifold tracking methods is that the Riemannian mean is computed from a set of particle manifold points at each time instant, rather than from a sliding window of manifold points at different times. In addition, we propose to use Gabor filter outputs on partitioned sub-areas of the object bounding box as features, from which the covariance matrix of the object appearance is formed. As an application example, the proposed online learning is employed in a Riemannian manifold object tracking scheme where tracking and online learning are performed alternately. Experiments are performed on both visual-band and infrared videos and compared with the two most relevant existing manifold trackers. Results show significant improvement in terms of tracking drift and the tightness and accuracy of tracked boxes, especially for objects with large pose changes.
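
A minimal numpy sketch of the central averaging idea: the Karcher (Riemannian) mean of SPD covariance descriptors under the affine-invariant metric, computed by iterating log/exp maps. The toy data and iteration settings are illustrative, and the paper's particle generation is not reproduced here.

```python
import numpy as np

def _sym_fun(M, fun):
    """Apply a scalar function to a symmetric matrix via its eigenvalues."""
    w, V = np.linalg.eigh(M)
    return (V * fun(w)) @ V.T

def karcher_mean(spd_mats, n_iter=20, tol=1e-8):
    """Riemannian (Karcher) mean of SPD matrices under the affine-invariant
    metric: average the log-maps at the current estimate, then exp-map back.
    spd_mats: (K, d, d)."""
    M = spd_mats.mean(axis=0)                       # init at arithmetic mean
    for _ in range(n_iter):
        Ms = _sym_fun(M, np.sqrt)                   # M^{1/2}
        Mis = _sym_fun(M, lambda w: 1.0 / np.sqrt(w))  # M^{-1/2}
        # mean of log_M(X_k), expressed at the identity
        T = np.mean([_sym_fun(Mis @ X @ Mis, np.log) for X in spd_mats], axis=0)
        M = Ms @ _sym_fun(T, np.exp) @ Ms           # exp-map back to the manifold
        if np.linalg.norm(T) < tol:
            break
    return M

# toy usage: mean of random SPD "appearance covariances"
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 4, 4))
spd = np.array([a @ a.T + 0.1 * np.eye(4) for a in A])
print(np.round(karcher_mean(spd), 3))
```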


Computer Vision and Image Understanding | 2014

Online domain-shift learning and object tracking based on nonlinear dynamic models and particle filters on Riemannian manifolds

Zulfiqar Hassan Khan; Irene Yu-Hua Gu

This paper proposes a novel online domain-shift appearance learning and object tracking scheme on a Riemannian manifold for visual and infrared videos, especially for video scenarios containing large deformable objects with fast out-of-plane pose changes that may be accompanied by partial occlusions. Although Riemannian manifolds and covariance descriptors are promising for visual object tracking, the use of the Riemannian mean from a window of observations, spatially insensitive covariance descriptors, fast significant out-of-plane (non-planar) pose changes, and long-term partial occlusions of large deformable objects in video limit the performance of such trackers. The proposed method tackles these issues with the following main contributions: (a) a Bayesian formulation on Riemannian manifolds that uses particle filters on the manifold and computes the Riemannian mean from the appearance particles at each time instant, rather than from a window of observations; (b) a nonlinear dynamic model for online domain-shift learning on the manifold, where the model includes both the manifold object appearance and its velocity; (c) a criterion-based partial occlusion handling approach in online learning; (d) tracking of the object bounding box by affine parametric shape modeling with the manifold appearance embedded; (e) incorporation of spatial, frequency, and orientation information in the covariance descriptor by extracting Gabor features in a partitioned bounding box; and (f) effective application to both visual-band and thermal-infrared videos. To realize the proposed tracker, two particle filters are employed: one on the Riemannian manifold for generating candidate appearance particles and another on the vector space for generating candidate box particles. Further, tracking and online learning are performed in alternation to mitigate tracking drift. Experiments on both visual and infrared videos have shown robust tracking performance of the proposed scheme. Comparisons and evaluations with ten existing state-of-the-art trackers provide further support for the proposed scheme.
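
As an illustration of contribution (e), the sketch below builds a covariance descriptor from Gabor responses inside a partitioned bounding box. The kernel size, frequencies, orientations, and 2x2 grid are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=2.0, size=11):
    """Real Gabor kernel: a cosine carrier at `freq` along `theta`,
    modulated by an isotropic Gaussian envelope."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return env * np.cos(2.0 * np.pi * freq * xr)

def gabor_cov_descriptor(patch, freqs=(0.1, 0.25), n_orient=4):
    """Covariance descriptor of Gabor responses for one sub-region.
    patch: (H, W) grayscale array. Each pixel gets a vector of Gabor
    responses; the region is summarized by their covariance matrix."""
    feats = [convolve2d(patch, gabor_kernel(f, t), mode='same')
             for f in freqs
             for t in np.pi * np.arange(n_orient) / n_orient]
    F = np.stack([f.ravel() for f in feats])        # (n_feat, n_pixels)
    return np.cov(F)                                # (n_feat, n_feat)

def partitioned_descriptors(box, grid=(2, 2)):
    """Split a bounding-box patch into a grid of sub-regions (this keeps
    spatial information) and compute one covariance descriptor per cell."""
    rows = np.array_split(box, grid[0], axis=0)
    return [gabor_cov_descriptor(cell)
            for row in rows for cell in np.array_split(row, grid[1], axis=1)]
```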


International Conference on Computer Vision | 2011

Tracking visual and infrared objects using joint Riemannian manifold appearance and affine shape modeling

Zulfiqar Hassan Khan; Irene Yu-Hua Gu

This paper addresses the problem of tracking objects in visual and infrared videos captured by either a dynamic or a stationary camera, where objects undergo large pose changes. We propose a novel object tracking scheme that exploits the geometrical structure of the Riemannian manifold and piecewise geodesics under a Bayesian framework. Two particle filters are employed alternately for tracking dynamic objects: one performs online learning of object appearances on the Riemannian manifold using tracked candidates; the other tracks the object bounding box parameters with the manifold appearance embedded. The rationale for this enhanced manifold tracker, compared with existing ones, is the introduction of an additional state variable, such that not only the manifold point representing the object is updated, but also the velocity of the dynamic manifold point is estimated. The main contributions of the paper are: (a) an online appearance learning strategy using a particle filter on the manifold; (b) an object tracker that incorporates the manifold appearance for prediction under a particle filter framework; (c) the use of partitioned sub-regions of the object bounding box to incorporate spatial information in the appearance; and (d) the use of Gabor features at different frequencies and orientations in the partitioned sub-regions for IR (infrared) video objects. Hence, the proposed tracking scheme is applicable to both visual and IR videos. Experiments on videos where objects contain significant pose changes show very robust tracking results. The proposed scheme is also compared with the two most relevant manifold tracking methods; results show much improved tracking performance in terms of tracking drift and the tightness and accuracy of tracked boxes.
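
A minimal sketch of the velocity-augmented prediction idea, assuming SPD covariance appearances under the affine-invariant metric: each appearance particle is generated by moving the current manifold point along the tangent "velocity" plus symmetric noise via the exponential map. The noise scale and particle count are illustrative; this is not the authors' exact model.

```python
import numpy as np

def _sym_fun(M, fun):
    """Apply a scalar function to a symmetric matrix via its eigenvalues."""
    w, V = np.linalg.eigh(M)
    return (V * fun(w)) @ V.T

def exp_spd(X, V):
    """Exp map on the SPD manifold (affine-invariant metric): move from
    appearance point X along tangent vector V."""
    Xs = _sym_fun(X, np.sqrt)
    Xis = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))
    return Xs @ _sym_fun(Xis @ V @ Xis, np.exp) @ Xs

def predict_appearance_particles(X, vel, n_particles=50, noise=0.05, rng=None):
    """Velocity-augmented prediction step: each particle moves along the
    current tangent 'velocity' plus symmetric Gaussian noise."""
    rng = rng or np.random.default_rng()
    d = X.shape[0]
    out = []
    for _ in range(n_particles):
        E = rng.normal(scale=noise, size=(d, d))
        out.append(exp_spd(X, vel + 0.5 * (E + E.T)))  # keep tangent symmetric
    return out
```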


International Conference on Image Processing | 2011

Visual tracking and dynamic learning on the Grassmann manifold with inference from a Bayesian framework and state space models

Zulfiqar Hassan Khan; Irene Yu-Hua Gu

We propose a novel visual tracking scheme that exploits both the geometrical structure of the Grassmann manifold and piecewise geodesics under a Bayesian framework. Two particle filters are employed alternately on the manifold: one for online updating of the appearance subspace on the manifold using sliding-window observations, and the other for tracking moving objects on the manifold based on the dynamic shape and appearance models. The main contributions of the paper are: (a) an online manifold learning strategy using a particle filter, where a mixture of dynamic models covers both the changes of manifold bases in the tangent plane and the piecewise geodesics on the manifold; and (b) a manifold object tracker that jointly incorporates the object shape in the tangent plane and the manifold prediction error of the object appearance in a particle filter framework. Experiments performed on videos containing significant object pose changes show very robust tracking results. The proposed scheme also performs better than three existing trackers in terms of tracking drift and the tightness and accuracy of tracked boxes.
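
A minimal sketch of two ingredients named above: a Grassmann point (appearance subspace) obtained from a sliding window of vectorized observations via SVD, and the subspace reconstruction residual as a simple stand-in for the manifold prediction error used to weight particles. The dimensions and subspace rank are illustrative.

```python
import numpy as np

def subspace_from_window(window, p=3):
    """Orthonormal appearance basis (a Grassmann point) from a sliding
    window of vectorized observations. window: (n_dim, n_frames)."""
    U, _, _ = np.linalg.svd(window, full_matrices=False)
    return U[:, :p]

def prediction_error(Y, x):
    """Reconstruction residual of observation x against subspace basis Y."""
    r = x - Y @ (Y.T @ x)
    return float(np.linalg.norm(r))

# toy usage: 8 frames that span a 3-D subspace of R^100
rng = np.random.default_rng(3)
base = rng.normal(size=(100, 3))
obs = base @ rng.normal(size=(3, 8))
Y = subspace_from_window(obs, p=3)
print(prediction_error(Y, obs[:, 0]))             # ~0: frame lies in subspace
print(prediction_error(Y, rng.normal(size=100)))  # larger: off-subspace sample
```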


Pacific Rim Conference on Multimedia | 2009

Robust Object Tracking Using Particle Filters and Multi-region Mean Shift

Andrew G. Backhouse; Zulfiqar Hassan Khan; Irene Yu-Hua Gu

In this paper, we introduce a novel algorithm that builds upon the combined anisotropic mean shift and particle filter framework. The anisotropic mean shift [4], with five degrees of freedom, is extended to operate on a partition of the object into concentric rings. This adds spatial information to the description of the object, which makes the algorithm more resilient to occlusion and less likely to confuse the object with other objects having similar color densities. Experiments conducted on videos containing deformable objects with long-term partial occlusion (or short-term full occlusion) and intersection have shown robust tracking performance, especially in tracking objects with long-term partial occlusion, short-term full occlusion, close-color background clutter, severe object deformation, and fast-changing motion. Comparisons with two existing methods show marked improvement in terms of robustness to occlusions, the tightness and accuracy of the tracked box, and tracking drift.
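
A minimal sketch of the concentric-ring partition: pixels are binned into rings by normalized distance from the patch center, and one color histogram is computed per ring, so the descriptor retains coarse spatial layout. The ring count and histogram quantization are illustrative assumptions.

```python
import numpy as np

def ring_histograms(patch, n_rings=3, bins=8):
    """Per-ring color histograms for an object patch (H, W, 3) uint8.
    Pixels are assigned to concentric rings by normalized distance
    from the patch center."""
    H, W, _ = patch.shape
    y, x = np.mgrid[0:H, 0:W]
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    # normalized elliptical radius, ~1 at the patch border
    r = np.sqrt(((y - cy) / (H / 2.0)) ** 2 + ((x - cx) / (W / 2.0)) ** 2)
    ring = np.minimum((r * n_rings).astype(int), n_rings - 1)
    q = (patch.astype(int) * bins) // 256             # quantize each channel
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hists = []
    for k in range(n_rings):
        h = np.bincount(idx[ring == k], minlength=bins ** 3).astype(float)
        hists.append(h / max(h.sum(), 1.0))
    return np.stack(hists)                            # (n_rings, bins**3)
```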

Collaboration


Dive into Zulfiqar Hassan Khan's collaboration.

Top Co-Authors


Irene Yu-Hua Gu

Chalmers University of Technology


Andrew G. Backhouse

Chalmers University of Technology


Tiesheng Wang

Shanghai Jiao Tong University
