
Publication


Featured research published by Omar Javed.


IEEE Workshop on Motion and Video Computing | 2002

A hierarchical approach to robust background subtraction using color and gradient information

Omar Javed; Khurram Shafique; Mubarak Shah

We present a background subtraction method that uses multiple cues to detect objects robustly in adverse conditions. The algorithm consists of three distinct levels, i.e., pixel level, region level and frame level. At the pixel level, statistical models of gradients and color are separately used to classify each pixel as belonging to background or foreground. In the region level, foreground pixels obtained from the color based subtraction are grouped into regions and gradient based subtraction is then used to make inferences about the validity of these regions. Pixel based models are updated based on decisions made at the region level. Finally, frame level analysis is performed to detect global illumination changes. Our method provides the solution to some of the common problems that are not addressed by most background subtraction algorithms, such as fast illumination changes, repositioning of static background objects, and initialization of background model with moving objects present in the scene.
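The pixel-level stage above can be illustrated with a minimal sketch, assuming independent per-pixel Gaussian color models and a simple running-average update; the threshold and update rate are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def classify_pixels(frame, mean, var, k=2.5):
    """Pixel-level color test: foreground if any channel deviates from the
    per-pixel background mean by more than k standard deviations (a
    simplified stand-in for the paper's statistical models)."""
    dev = np.abs(frame.astype(float) - mean) / np.sqrt(var + 1e-6)
    return (dev > k).any(axis=-1)            # True = foreground

def update_background(frame, mean, var, fg, alpha=0.05):
    """Running-average update of the per-pixel model; pixels judged
    foreground (at the region level, in the paper) are left untouched."""
    upd = ~fg
    f = frame.astype(float)
    mean[upd] = (1 - alpha) * mean[upd] + alpha * f[upd]
    var[upd] = (1 - alpha) * var[upd] + alpha * (f[upd] - mean[upd]) ** 2
    return mean, var

# Toy scene: uniform gray background, one bright moving block.
mean = np.full((8, 8, 3), 100.0)
var = np.full((8, 8, 3), 25.0)
frame = np.full((8, 8, 3), 100.0)
frame[2:4, 2:4] = 200.0                      # the moving object
fg = classify_pixels(frame, mean, var)
print(fg[2, 2], fg[0, 0])  # True False
mean, var = update_background(frame, mean, var, fg)
```

In the paper the foreground mask fed to the update comes from the region-level validation step, which is what keeps spurious color detections from being frozen into the model.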


Computer Vision and Pattern Recognition | 2005

Appearance modeling for tracking in multiple non-overlapping cameras

Omar Javed; Khurram Shafique; Mubarak Shah

When viewed from a system of multiple cameras with non-overlapping fields of view, the appearance of an object in one camera view is usually very different from its appearance in another camera view due to the differences in illumination, pose and camera parameters. In order to handle the change in observed colors of an object as it moves from one camera to another, we show that all brightness transfer functions from a given camera to another camera lie in a low dimensional subspace and demonstrate that this subspace can be used to compute appearance similarity. In the proposed approach, the system learns the subspace of inter-camera brightness transfer functions in a training phase during which object correspondences are assumed to be known. Once the training is complete, correspondences are assigned using the maximum a posteriori (MAP) estimation framework using both location and appearance cues. We evaluate the proposed method under several real world scenarios obtaining encouraging results.
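A toy sketch of the subspace idea, using plain PCA via SVD on a synthetic gain/gamma family of brightness transfer functions; the generating family, sample counts, and subspace rank are all assumptions for illustration (the paper learns the subspace from real BTFs):

```python
import numpy as np

rng = np.random.default_rng(0)

def btf(gain, gamma, n=64):
    """A synthetic brightness transfer function mapping normalized
    brightness in camera A to camera B via a gain/gamma model (an assumed
    family, used only to generate illustrative data)."""
    x = np.linspace(0.0, 1.0, n)
    return np.clip(gain * x ** gamma, 0.0, 1.0)

# Training phase: BTFs collected from known inter-camera correspondences.
train = np.stack([btf(rng.uniform(0.7, 1.3), rng.uniform(0.6, 1.6))
                  for _ in range(200)])
center = train.mean(axis=0)

# Low-dimensional subspace via SVD (a stand-in for probabilistic PCA).
_, _, Vt = np.linalg.svd(train - center, full_matrices=False)
basis = Vt[:4]                                # keep 4 principal directions

def subspace_error(f):
    """Distance of a candidate BTF from the learned subspace; a small
    error supports the hypothesized correspondence."""
    c = f - center
    return np.linalg.norm(c - basis.T @ (basis @ c))

good = subspace_error(btf(1.1, 0.9))             # BTF from the same family
bad = subspace_error(rng.uniform(0.0, 1.0, 64))  # arbitrary function
print(good < bad)  # True
```

The subspace error then acts as the appearance term inside the MAP correspondence framework described in the abstract.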


Computer Vision and Image Understanding | 2008

Modeling inter-camera space-time and appearance relationships for tracking across non-overlapping views

Omar Javed; Khurram Shafique; Zeeshan Rasheed; Mubarak Shah

Tracking across cameras with non-overlapping views is a challenging problem. Firstly, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Secondly, the appearance of an object in one camera view might be very different from its appearance in another camera view due to the differences in illumination, pose and camera properties. To deal with the first problem, we observe that people or vehicles tend to follow the same paths in most cases, i.e., roads, walkways, corridors etc. The proposed algorithm uses this conformity in the traversed paths to establish correspondence. The algorithm learns this conformity and hence the inter-camera relationships in the form of multivariate probability density of space-time variables (entry and exit locations, velocities, and transition times) using kernel density estimation. To handle the appearance change of an object as it moves from one camera to another, we show that all brightness transfer functions from a given camera to another camera lie in a low dimensional subspace. This subspace is learned by using probabilistic principal component analysis and used for appearance matching. The proposed approach does not require explicit inter-camera calibration, rather the system learns the camera topology and subspace of inter-camera brightness transfer functions during a training phase. Once the training is complete, correspondences are assigned using the maximum likelihood (ML) estimation framework using both location and appearance cues. Experiments with real world videos are reported which validate the proposed approach.
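The learned space-time density can be illustrated with a small kernel density estimate over synthetic (exit location, transition time) pairs; the 2-D product kernel, bandwidths, and numbers below are simplifications of the paper's multivariate KDE:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_kde(samples, h):
    """Gaussian product-kernel density estimator over training samples;
    `samples` is (n, 2) = (exit location, transition time), `h` holds the
    per-dimension bandwidths."""
    def pdf(q):
        z = (q - samples) / h
        k = np.exp(-0.5 * (z ** 2).sum(axis=1)) / (2 * np.pi * h.prod())
        return k.mean()
    return pdf

# Training correspondences between two cameras: exit position (m) and
# transition time (s) for each observed hand-off (synthetic numbers).
train = np.column_stack([rng.normal(5.0, 0.5, 300),
                         rng.normal(8.0, 1.0, 300)])
p = make_kde(train, h=np.array([0.3, 0.5]))

typical = p(np.array([5.0, 8.0]))      # consistent with learned paths
unlikely = p(np.array([1.0, 30.0]))    # off-path, implausible transit time
print(typical > unlikely)  # True
```

In the full system this density is the space-time term that is combined with the appearance (BTF subspace) term in the ML correspondence assignment.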


International Conference on Computer Vision | 2009

Background Subtraction for Freely Moving Cameras

Yaser Sheikh; Omar Javed; Takeo Kanade

Background subtraction algorithms define the background as parts of a scene that are at rest. Traditionally, these algorithms assume a stationary camera, and identify moving objects by detecting areas in a video that change over time. In this paper, we extend the concept of ‘subtracting’ areas at rest to apply to video captured from a freely moving camera. We do not assume that the background is well-approximated by a plane or that the camera center remains stationary during motion. The method operates entirely using 2D image measurements without requiring an explicit 3D reconstruction of the scene. A sparse model of background is built by robustly estimating a compact trajectory basis from trajectories of salient features across the video, and the background is ‘subtracted’ by removing trajectories that lie within the space spanned by the basis. Foreground and background appearance models are then built, and an optimal pixel-wise foreground/background labeling is obtained by efficiently maximizing a posterior function.
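The trajectory-basis idea can be sketched on synthetic 1-D tracks: background trajectories share a low-rank basis induced by the camera motion, and an independently moving track has a large residual outside that span. Plain SVD stands in here for the paper's robust estimation, and the 1-D coordinates are a simplification (the paper stacks x and y image coordinates):

```python
import numpy as np

rng = np.random.default_rng(2)
F = 30                                        # number of frames
t = np.linspace(0.0, 1.0, F)

# Background feature trajectories: all induced by the same smooth camera
# motion, so they lie in a low-dimensional trajectory subspace.
motions = np.stack([np.ones(F), t, t ** 2])   # 3 basis motions
W_bg = rng.normal(size=(40, 3)) @ motions     # 40 background tracks

# A foreground track moving independently of the camera.
fg_traj = 5.0 * np.sin(12.0 * t) + rng.normal(0.0, 0.05, F)

# Estimate a compact trajectory basis from the tracked features (plain SVD
# here; the paper estimates it robustly so outliers don't corrupt it).
_, _, Vt = np.linalg.svd(W_bg, full_matrices=False)
basis = Vt[:3]                                # orthonormal rows

def residual(traj):
    """Distance of a trajectory from the span of the background basis;
    a large residual means the track is 'subtracted' as foreground."""
    return np.linalg.norm(traj - basis.T @ (basis @ traj))

print(residual(W_bg[0]) < 1e-6, residual(fg_traj) > 1.0)  # True True
```

The sparse labels produced this way seed the per-pixel appearance models and the final MAP labeling described in the abstract.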


International Conference on Pattern Recognition | 2004

Multi feature path modeling for video surveillance

Imran N. Junejo; Omar Javed; Mubarak Shah

This paper proposes a novel method for detecting nonconforming trajectories of objects as they pass through a scene. Existing methods mostly use spatial features to solve this problem. Using only spatial information is not adequate; we need to take into consideration velocity and curvature information of a trajectory along with the spatial information for an elegant solution. Our method has the ability to distinguish between objects traversing spatially dissimilar paths, or objects traversing spatially proximal paths but having different spatio-temporal characteristics. The method consists of a path building training phase and a testing phase. During the training phase, we use graph-cuts for clustering the trajectories, where the Hausdorff distance metric is used to calculate the edge weights. Each cluster represents a path. An envelope boundary and an average trajectory are computed for each path. During the testing phase we use three features for trajectory matching in a hierarchical fashion. The first feature measures the spatial similarity while the second feature compares the velocity characteristics of trajectories. Finally, the curvature features capture discontinuities in velocity, acceleration, and position of the trajectory. We use real-world pedestrian sequences to demonstrate the practicality of our method.
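The Hausdorff distance used for the clustering edge weights can be sketched directly; the toy 2-D trajectories and equal-length sampling below are assumptions for brevity:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between trajectories A and B, each an
    (n, 2) array of image points; used as the edge weight in the graph-cut
    clustering described above."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

t = np.linspace(0.0, 1.0, 20)[:, None]
path1 = np.hstack([10.0 * t, 0.0 * t])        # along a walkway
path2 = np.hstack([10.0 * t, 0.0 * t + 0.5])  # same walkway, small offset
path3 = np.hstack([0.0 * t, 10.0 * t])        # a different path

print(hausdorff(path1, path2))      # 0.5
print(hausdorff(path1, path3) > 5)  # True
```

Because the Hausdorff distance ignores timing, the method's later stages add the velocity and curvature features to separate trajectories that overlap spatially but differ dynamically.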


Computer Vision and Pattern Recognition | 2005

Online detection and classification of moving objects using progressively improving detectors

Omar Javed; Saad Ali; Mubarak Shah

Boosting based detection methods have successfully been used for robust detection of faces and pedestrians. However, a very large number of labeled examples is required to train such a classifier. Moreover, once trained, the boosted classifier cannot adjust to the particular scenario in which it is employed. In this paper, we propose a co-training based approach to continuously label incoming data and use it for online update of the boosted classifier that was initially trained from a small labeled example set. The main contribution of our approach is that it is an online procedure in which separate views (features) of the data are used for co-training, while the combined view (all features) is used to make classification decisions in a single boosted framework. The features used for classification are derived from principal component analysis of the appearance templates of the training examples. In order to speed up the classification, background modeling is used to prune away stationary regions in an image. Our experiments indicate that, starting from a classifier trained on a small training set, significant performance gains can be made through online updates from the unlabeled data.
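The core co-training step, confidently pseudo-labeling unlabeled examples with one view so the other view's classifier can be updated, can be sketched on synthetic data. The Gaussian views, threshold classifier, and confidence margin are all illustrative (the paper uses boosted classifiers over PCA features), and only one labeling direction is shown; the reverse direction is symmetric:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two "views" of each example, e.g. two disjoint feature subsets from PCA
# of the appearance templates; label 1 = object of interest, 0 = clutter.
def make(n, label):
    m = 2.0 * label - 1.0                     # class means at -1 and +1
    return np.column_stack([rng.normal(m, 1.0, n),    # view 1
                            rng.normal(m, 1.0, n)])   # view 2

X = np.vstack([make(200, 0), make(200, 1)])
y = np.array([0] * 200 + [1] * 200)           # held out, for checking only

# Co-training step: view 1 pseudo-labels the examples it is confident
# about (far from its decision boundary); those labels would then drive
# the online update of the view-2 classifier.
margin = 1.5
sure = np.abs(X[:, 0]) > margin               # view-1 confidence test
pseudo = (X[sure, 0] > 0).astype(int)         # view-1 threshold classifier
acc = (pseudo == y[sure]).mean()
print(sure.sum() > 0, acc > 0.9)  # True True
```

Restricting the pseudo-labels to high-confidence examples is what keeps the label noise low enough for the online update to help rather than hurt.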


IEEE MultiMedia | 2007

Automated Visual Surveillance in Realistic Scenarios

Mubarak Shah; Omar Javed; Khurram Shafique

In this article, we present Knight, an automated surveillance system deployed in a variety of real-world scenarios ranging from railway security to law enforcement. We also discuss the challenges of developing surveillance systems, present some solutions implemented in Knight that overcome these challenges, and evaluate Knight's performance in unconstrained environments.


International Conference on Computer Vision | 2001

Human tracking in multiple cameras

Sohaib Khan; Omar Javed; Zeeshan Rasheed; Mubarak Shah

Multiple cameras are needed to cover large environments for monitoring activity. To track people successfully in multiple perspective imagery, one needs to establish correspondence between objects captured in multiple cameras. We present a system for tracking people in multiple uncalibrated cameras. The system is able to discover spatial relationships between the camera fields of view and use this information to correspond between different perspective views of the same person. We employ the novel approach of finding the limits of field of view (FOV) of a camera as visible in the other cameras. Using this information, when a person is seen in one camera, we are able to predict all the other cameras in which this person will be visible. Moreover, we apply the FOV constraint to disambiguate between possible candidates of correspondence. We present results on sequences of up to three cameras with multiple people. The proposed approach is very fast compared to camera calibration based approaches.
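Once a camera's field-of-view limit has been recovered in another camera's image, the FOV constraint reduces to a half-plane test. A minimal sketch with a hypothetical vertical FOV line (the paper recovers such lines automatically from people crossing the boundary):

```python
def visible_in_other(p, line):
    """True if image point p = (x, y) in camera A lies on the visible side
    of camera B's field-of-view limit line, written as a*x + b*y + c >= 0.
    The line parameters here are hypothetical."""
    a, b, c = line
    return a * p[0] + b * p[1] + c >= 0

# Suppose camera B's FOV line in camera A's 640x480 image is x = 320
# (everything to its right is also seen by camera B).
fov_B_in_A = (1.0, 0.0, -320.0)
print(visible_in_other((400, 240), fov_B_in_A))  # True
print(visible_in_other((100, 240), fov_B_in_A))  # False
```

Applying this test for every other camera predicts the full set of views in which a newly observed person should appear, which is how the system prunes correspondence candidates without calibration.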


International Conference on Multimedia and Expo | 2003

KNIGHT™: a real-time surveillance system for multiple and non-overlapping cameras

Omar Javed; Zeeshan Rasheed; Orkun Alatas; Mubarak Shah

In this paper, we present a wide area surveillance system that detects, tracks and classifies moving objects across multiple cameras. At the single camera level, tracking is performed using a voting based approach that utilizes color and shape cues to establish correspondence. The system uses the single camera tracking results along with the relationship between camera field of view (FOV) boundaries to establish correspondence between views of the same object in multiple cameras. To this end, a novel approach is described to find the relationships between the FOV lines of cameras. The proposed approach combines tracking in cameras with overlapping and/or non-overlapping FOVs in a unified framework, without requiring explicit calibration. The proposed algorithm has been implemented in a real time system. The system uses a client-server architecture and runs at 10 Hz with three cameras.


International Conference on Computer Vision | 2001

A framework for segmentation of talk and game shows

Omar Javed; Zeeshan Rasheed; Mubarak Shah

In this paper, we present a method to remove commercials from talk and game show videos and to segment these videos into host and guest shots. In our approach, we mainly rely on information contained in shot transitions, rather than analyzing the scene content of individual frames. We utilize the inherent differences in scene structure of commercials and talk shows to differentiate between them. Similarly, we make use of the well-defined structure of talk shows, which can be exploited to classify shots as host or guest shots. The entire show is first segmented into camera shots based on color histograms. Then, we construct a data structure (shot connectivity graph) which links similar shots over time. Analysis of the shot connectivity graph helps us to automatically separate commercials from program segments. This is done by first detecting stories, and then assigning a weight to each story based on its likelihood of being a commercial. Further analysis on stories is done to distinguish shots of the hosts from shots of the guests. We have tested our approach on several full-length shows (including commercials) and have achieved video segmentation with high accuracy. The whole scheme is fast and works even on low-quality video (160×120-pixel images at 5 Hz).
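The first step, cutting the show into camera shots via color-histogram differences, can be sketched as follows; the bin count and threshold are illustrative choices:

```python
import numpy as np

def shot_boundaries(frames, bins=8, thresh=0.5):
    """Cut detection: flag frame i as a shot boundary when the normalized
    histogram distance to frame i-1 exceeds the threshold."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    cuts = []
    for i in range(1, len(hists)):
        if 0.5 * np.abs(hists[i] - hists[i - 1]).sum() > thresh:
            cuts.append(i)
    return cuts

rng = np.random.default_rng(3)
dark = [rng.integers(0, 60, (16, 16)) for _ in range(5)]       # shot 1
bright = [rng.integers(180, 256, (16, 16)) for _ in range(5)]  # shot 2
print(shot_boundaries(dark + bright))  # [5]
```

The detected shots become the nodes of the shot connectivity graph; edges link shots whose histograms are similar, so recurring host/guest setups form tight clusters while one-off commercial shots do not.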

Collaboration


Dive into Omar Javed's collaborations.

Top Co-Authors

Mubarak Shah (University of Central Florida)
Zeeshan Rasheed (University of Central Florida)
Khurram Shafique (University of Central Florida)
Orkun Alatas (University of Central Florida)
Sohaib Khan (Lahore University of Management Sciences)
Asaad Hakeem (University of Central Florida)
Gerald Friedland (International Computer Science Institute)