Publications


Featured research published by Khurram Shafique.


IEEE Workshop on Motion and Video Computing | 2002

A hierarchical approach to robust background subtraction using color and gradient information

Omar Javed; Khurram Shafique; Mubarak Shah

We present a background subtraction method that uses multiple cues to detect objects robustly in adverse conditions. The algorithm consists of three distinct levels, i.e., pixel level, region level and frame level. At the pixel level, statistical models of gradients and color are separately used to classify each pixel as belonging to background or foreground. In the region level, foreground pixels obtained from the color based subtraction are grouped into regions and gradient based subtraction is then used to make inferences about the validity of these regions. Pixel based models are updated based on decisions made at the region level. Finally, frame level analysis is performed to detect global illumination changes. Our method provides the solution to some of the common problems that are not addressed by most background subtraction algorithms, such as fast illumination changes, repositioning of static background objects, and initialization of background model with moving objects present in the scene.
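As a rough illustration of the pixel level described in the abstract, here is a minimal sketch assuming a single Gaussian color model per pixel; the paper additionally maintains a gradient model and layers region- and frame-level reasoning on top. All function names and parameters here are illustrative, not from the paper:

```python
import numpy as np

def classify_pixels(frame, mean, var, k=2.5):
    """Pixel-level stage (sketch): flag a pixel as foreground when its
    color deviates from the per-pixel background mean by more than k
    standard deviations in any channel."""
    dist = np.abs(frame - mean)                   # per-channel deviation
    return np.any(dist > k * np.sqrt(var), axis=-1)

def update_model(frame, mean, var, fg, alpha=0.05):
    """Update background statistics only at pixels judged background,
    so foreground objects are not absorbed into the model."""
    bg = ~fg[..., None]
    mean = np.where(bg, (1 - alpha) * mean + alpha * frame, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * (frame - mean) ** 2, var)
    return mean, var
```

The `np.where` masking is what keeps a detected object from slowly "bleeding" into the background model, which mirrors the paper's idea of feeding region-level decisions back into the pixel-level update.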


Computer Vision and Pattern Recognition | 2005

Appearance modeling for tracking in multiple non-overlapping cameras

Omar Javed; Khurram Shafique; Mubarak Shah

When viewed from a system of multiple cameras with non-overlapping fields of view, the appearance of an object in one camera view is usually very different from its appearance in another camera view due to the differences in illumination, pose and camera parameters. In order to handle the change in observed colors of an object as it moves from one camera to another, we show that all brightness transfer functions from a given camera to another camera lie in a low dimensional subspace and demonstrate that this subspace can be used to compute appearance similarity. In the proposed approach, the system learns the subspace of inter-camera brightness transfer functions in a training phase during which object correspondences are assumed to be known. Once the training is complete, correspondences are assigned using the maximum a posteriori (MAP) estimation framework using both location and appearance cues. We evaluate the proposed method under several real world scenarios obtaining encouraging results.
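The subspace idea above can be sketched as follows, assuming training brightness transfer functions (BTFs) are available as per-bin lookup tables. The paper learns the subspace with probabilistic PCA; this hypothetical sketch uses a plain SVD and scores a candidate BTF by its reconstruction error:

```python
import numpy as np

def learn_btf_subspace(btfs, d=4):
    """Learn a low-dimensional subspace from training brightness transfer
    functions (one row per BTF). Sketch only; the paper uses
    probabilistic PCA rather than a plain SVD."""
    mean = btfs.mean(axis=0)
    _, _, vt = np.linalg.svd(btfs - mean, full_matrices=False)
    return mean, vt[:d]                 # mean BTF and top-d basis vectors

def btf_distance(btf, mean, basis):
    """Reconstruction error of a candidate BTF w.r.t. the subspace:
    a small error means the observed color change is consistent with
    the transfers seen in training."""
    centered = btf - mean
    proj = basis.T @ (basis @ centered)
    return np.linalg.norm(centered - proj)
```

A BTF drawn from the same family as the training data projects almost entirely into the subspace, while an unrelated mapping leaves a large residual, which is what makes the subspace usable as an appearance-similarity cue.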


Computer Vision and Image Understanding | 2008

Modeling inter-camera space-time and appearance relationships for tracking across non-overlapping views

Omar Javed; Khurram Shafique; Zeeshan Rasheed; Mubarak Shah

Tracking across cameras with non-overlapping views is a challenging problem. Firstly, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Secondly, the appearance of an object in one camera view might be very different from its appearance in another camera view due to the differences in illumination, pose and camera properties. To deal with the first problem, we observe that people or vehicles tend to follow the same paths in most cases, e.g., roads, walkways, corridors, etc. The proposed algorithm uses this conformity in the traversed paths to establish correspondence. The algorithm learns this conformity and hence the inter-camera relationships in the form of a multivariate probability density of space-time variables (entry and exit locations, velocities, and transition times) using kernel density estimation. To handle the appearance change of an object as it moves from one camera to another, we show that all brightness transfer functions from a given camera to another camera lie in a low dimensional subspace. This subspace is learned by using probabilistic principal component analysis and used for appearance matching. The proposed approach does not require explicit inter-camera calibration; rather, the system learns the camera topology and subspace of inter-camera brightness transfer functions during a training phase. Once the training is complete, correspondences are assigned using the maximum likelihood (ML) estimation framework using both location and appearance cues. Experiments with real-world videos are reported which validate the proposed approach.
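The kernel density estimate of space-time transitions mentioned above can be illustrated with a minimal Gaussian-kernel sketch. Assume each training sample is a vector of transition variables (e.g., exit location, entry location, transition time); the feature layout and bandwidth here are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def kde_likelihood(x, samples, bandwidth=1.0):
    """Gaussian kernel density estimate at point x, learned from observed
    space-time transition vectors. Transitions similar to those seen in
    training receive high likelihood and thus support a correspondence."""
    diffs = (samples - x) / bandwidth
    kernels = np.exp(-0.5 * np.sum(diffs ** 2, axis=1))
    dim = samples.shape[1]
    norm = (2 * np.pi) ** (dim / 2) * bandwidth ** dim
    return kernels.sum() / (len(samples) * norm)
```

In use, a candidate match whose transition vector falls near the mass of observed transitions scores far higher than one implying an unseen path, which is the location cue combined with appearance in the ML framework.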


Image and Vision Computing | 2003

Target tracking in airborne forward looking infrared imagery

Alper Yilmaz; Khurram Shafique; Mubarak Shah

In this paper, we propose a robust approach for tracking targets in forward looking infrared (FLIR) imagery taken from an airborne moving platform. First, the targets are detected using fuzzy clustering, edge fusion and local texture energy. The position and the size of the detected targets are then used to initialize the tracking algorithm. For each detected target, intensity and local standard deviation distributions are computed, and tracking is performed by computing the mean-shift vector that minimizes the distance between the kernel distribution for the target in the current frame and the model. In cases when the ego-motion of the sensor causes the target to move more than the operational limits of the tracking module, we perform a multi-resolution global motion compensation using the Gabor responses of the consecutive frames. The decision whether to compensate the sensor ego-motion is based on the distance measure computed from the likelihood of target and candidate distributions. To overcome the problems related to the changes in the target feature distributions, we automatically update the target model. Selection of the new target model is based on the same distance measure that is used for motion compensation. The experiments performed on the AMCOM FLIR data set show the robustness of the proposed method, which combines automatic model update and global motion compensation into one framework.
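The distance measure driving both the mean-shift tracker and the model-update decision is based on comparing target and candidate distributions. A common choice for this in mean-shift tracking is the Bhattacharyya coefficient between normalized histograms; the sketch below assumes single-channel intensity histograms, whereas the paper also uses a local-standard-deviation channel:

```python
import numpy as np

def histogram(patch, bins=16):
    """Normalized intensity histogram of an image patch (sketch;
    bin count and range are illustrative assumptions)."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms;
    the corresponding distance sqrt(1 - coeff) is small when the
    candidate matches the target model."""
    return np.sum(np.sqrt(p * q))
```

A coefficient near 1 keeps the tracker in normal operation; a sharp drop is the kind of signal that can trigger global motion compensation or a target-model update.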


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

A noniterative greedy algorithm for multiframe point correspondence

Khurram Shafique; Mubarak Shah

This work presents a framework for finding point correspondences in monocular image sequences over multiple frames. The general problem of multiframe point correspondence is NP-hard for three or more frames. A polynomial time algorithm for a restriction of this problem is presented and is used as the basis of the proposed greedy algorithm for the general problem. The greedy nature of the proposed algorithm allows it to be used in real-time systems for tracking and surveillance, etc. In addition, the proposed algorithm deals with the problems of occlusion, missed detections, and false positives by using a single noniterative greedy optimization scheme and, hence, reduces the complexity of the overall algorithm as compared to most existing approaches where multiple heuristics are used for the same purpose. While most greedy algorithms for point tracking do not allow the entry and exit of the points from the scene, this is not a limitation for the proposed algorithm. Experiments with real and synthetic data over a wide range of scenarios and system parameters are presented to validate the claims about the performance of the proposed algorithm.
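To give a flavor of the greedy strategy, here is a hypothetical two-frame simplification; the paper's actual contribution is a noniterative greedy scheme over multiple frames, which this sketch does not reproduce. Pairs farther apart than a gain threshold are left unmatched, which is one simple way to model entry, exit, and missed detections:

```python
import numpy as np

def greedy_correspond(pts_a, pts_b, max_dist=50.0):
    """Greedy two-frame point correspondence (illustrative sketch):
    repeatedly link the closest still-unmatched pair of points;
    pairs farther than max_dist stay unmatched, allowing points to
    enter or exit the scene."""
    pairs = sorted(
        (np.linalg.norm(np.subtract(a, b)), i, j)
        for i, a in enumerate(pts_a)
        for j, b in enumerate(pts_b)
    )
    used_a, used_b, matches = set(), set(), {}
    for d, i, j in pairs:
        if d > max_dist:
            break                      # all remaining pairs are worse
        if i not in used_a and j not in used_b:
            matches[i] = j
            used_a.add(i)
            used_b.add(j)
    return matches
```

Because each pair is considered once in sorted order, the scheme is noniterative and fast enough for real-time use, which is the property the abstract emphasizes.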


IEEE MultiMedia | 2007

Automated Visual Surveillance in Realistic Scenarios

Mubarak Shah; Omar Javed; Khurram Shafique

In this article, we present Knight, an automated surveillance system deployed in a variety of real-world scenarios ranging from railway security to law enforcement. We also discuss the challenges of developing surveillance systems, present some solutions implemented in Knight that overcome these challenges, and evaluate Knight's performance in unconstrained environments.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Probabilistic Modeling of Scene Dynamics for Applications in Visual Surveillance

Imran Saleemi; Khurram Shafique; Mubarak Shah

We propose a novel method to model and learn the scene activity, observed by a static camera. The proposed model is very general and can be applied for solution of a variety of problems. The motion patterns of objects in the scene are modeled in the form of a multivariate nonparametric probability density function of spatiotemporal variables (object locations and transition times between them). Kernel Density Estimation is used to learn this model in a completely unsupervised fashion. Learning is accomplished by observing the trajectories of objects by a static camera over extended periods of time. It encodes the probabilistic nature of the behavior of moving objects in the scene and is useful for activity analysis applications, such as persistent tracking and anomalous motion detection. In addition, the model also captures salient scene features, such as the areas of occlusion and most likely paths. Once the model is learned, we use a unified Markov Chain Monte Carlo (MCMC)-based framework for generating the most likely paths in the scene, improving foreground detection, persistent labeling of objects during tracking, and deciding whether a given trajectory represents an anomaly to the observed motion patterns. Experiments with real-world videos are reported which validate the proposed approach.
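The MCMC machinery used to generate likely paths can be illustrated with a plain Metropolis-Hastings sampler over a log-density; in the paper the target density is the learned KDE model of scene transitions, while this sketch accepts any log-density and uses a Gaussian random-walk proposal (all parameters are illustrative assumptions):

```python
import numpy as np

def metropolis_hastings(log_density, x0, n_steps=2000, step=0.5, seed=0):
    """Metropolis-Hastings with a Gaussian random-walk proposal.
    Draws samples concentrated where log_density is high, which is
    how likely locations/paths can be sampled from a learned model."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_density(x)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.normal(0.0, step, size=x.shape)
        lp_prop = log_density(prop)
        # Accept with probability min(1, density ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)
```

Run against the KDE of observed transitions, a sampler of this flavor spends its time on high-probability regions of the scene, which is the basis for proposing likely paths and flagging trajectories that the model assigns negligible density.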


ACM Multimedia | 2005

An object-based video coding framework for video sequences obtained from static cameras

Asaad Hakeem; Khurram Shafique; Mubarak Shah

This paper presents a novel object-based video coding framework for videos obtained from a static camera. As opposed to most existing methods, the proposed method does not require explicit 2D or 3D models of objects and hence is general enough to cater for varying types of objects in the scene. The proposed system detects and tracks objects in the scene and learns the appearance model of each object online using incremental principal component analysis (IPCA). Each object is then coded using the coefficients of the most significant principal components of its learned appearance space. Due to smooth transitions between a limited number of poses of an object, usually a limited number of significant principal components contribute to most of the variance in the object's appearance space, and therefore only a small number of coefficients are required to code the object. The rigid component of the object's motion is coded in terms of its affine parameters. The framework is applied to compressing videos in surveillance and video phone domains. The proposed method is evaluated on videos containing a variety of scenarios such as multiple objects undergoing occlusion, splitting, merging, entering and exiting, as well as a changing background. Results on standard MPEG-7 videos are also presented. For all the videos, the proposed method displays a higher Peak Signal to Noise Ratio (PSNR) compared to MPEG-2 and MPEG-4 methods, and provides comparable or better compression.
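The coefficient-coding idea can be sketched with a batch PCA stand-in for the paper's incremental PCA: learn a low-dimensional appearance basis from vectorized object frames, then transmit only the projection coefficients instead of the pixels (the batch SVD, dimensionality, and function names are all illustrative assumptions):

```python
import numpy as np

def fit_appearance_basis(frames, d=2):
    """Batch PCA stand-in for IPCA: learn a d-dimensional appearance
    basis from a list of equally sized object image patches."""
    X = np.stack([f.ravel() for f in frames]).astype(float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:d]

def encode(frame, mean, basis):
    """Code a frame as d coefficients instead of all its pixels."""
    return basis @ (frame.ravel() - mean)

def decode(coeffs, mean, basis, shape):
    """Reconstruct the patch from its coefficients at the decoder."""
    return (mean + basis.T @ coeffs).reshape(shape)
```

When the object's appearance variation really is low-dimensional (as with smooth pose transitions), a handful of coefficients reconstructs the patch almost exactly, which is where the compression gain comes from.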


International Conference on Image Processing | 2004

Estimation of the radiometric response functions of a color camera from differently illuminated images

Khurram Shafique; Mubarak Shah

The mapping that relates the image irradiance to the image brightness (intensity) is known as the Radiometric Response Function or Camera Response Function. This usually unknown mapping is nonlinear and varies from one color channel to another. In this paper, we present a method to estimate the radiometric response functions (of the R, G and B channels) of a color camera directly from images of an arbitrary scene taken under different illumination conditions (the illumination conditions are not assumed to be known). The response function of a channel is modeled as a gamma curve and is recovered using a constrained nonlinear minimization approach, exploiting the fact that the material properties of the scene remain constant in all the images. The performance of the proposed method is demonstrated experimentally.
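To make the gamma-curve model concrete, here is a toy fit that assumes the irradiances are known, unlike the paper, which recovers the gammas from image pairs with unknown illumination via constrained minimization. With known irradiance, the model B = E**gamma becomes linear in the log domain and gamma drops out of a one-line least squares:

```python
import numpy as np

def fit_gamma(irradiance, brightness):
    """Least-squares fit of a gamma response B = E**gamma in the log
    domain (toy setting: irradiance E is known and positive). The paper
    solves the harder problem where E is not observed."""
    logE = np.log(irradiance)
    logB = np.log(brightness)
    # log B = gamma * log E  ->  closed-form least-squares slope
    return float(np.sum(logE * logB) / np.sum(logE ** 2))
```

This per-channel fit would be repeated for R, G, and B, matching the abstract's point that the response differs across color channels.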


Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense III | 2004

Visual Monitoring of Railroad Grade Crossing

Yaser Sheikh; Yun Zhai; Khurram Shafique; Mubarak Shah

There are approximately 261,000 rail crossings in the United States according to the studies by the National Highway Traffic Safety Administration (NHTSA) and Federal Railroad Administration (FRA). From 1993 to 1998, there were over 25,000 highway-rail crossing incidents involving motor vehicles - averaging 4,167 incidents a year. In this paper, we present a real-time computer vision system for the monitoring of the movement of pedestrians, bikers, animals and vehicles at railroad intersections. The video is processed for the detection of uncharacteristic events, triggering an immediate warning system. In order to recognize the events, the system first performs robust object detection and tracking. Next, a classification algorithm is used to determine whether the detected object is a pedestrian, biker, group or a vehicle, allowing inferences on whether the behavior of the object is characteristic or not. Due to the ubiquity of low cost, low power, and high quality video cameras, increased computing power and memory capacity, the proposed approach provides a cost effective and scalable solution to this important problem. Furthermore, the system has the potential to significantly decrease the number of accidents and therefore the resulting deaths and injuries that occur at railroad crossings. We have field tested our system at two sites, a rail-highway grade crossing, and a trestle located in Central Florida, and we present results on six hours of collected data.

Collaboration


Top co-authors of Khurram Shafique:

- Mubarak Shah, University of Central Florida
- Ronald D. Dutton, University of Central Florida
- Zeeshan Rasheed, University of Central Florida
- Asaad Hakeem, University of Central Florida
- Mun Wai Lee, University of Southern California
- Yun Zhai, University of Central Florida