Publication


Featured research published by Murtaza Taj.


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Efficient Multitarget Visual Tracking Using Random Finite Sets

Emilio Maggio; Murtaza Taj; Andrea Cavallaro

We propose a filtering framework for multitarget tracking that is based on the probability hypothesis density (PHD) filter and data association using graph matching. This framework can be combined with any object detectors that generate positional and dimensional information of objects of interest. The PHD filter compensates for missing detections and removes noise and clutter. Moreover, this filter reduces the growth in complexity with the number of targets from exponential to linear by propagating the first-order moment of the multitarget posterior, instead of the full posterior. In order to account for the nature of the PHD propagation, we propose a novel particle resampling strategy and we adapt dynamic and observation models to cope with varying object scales. The proposed resampling strategy allows us to use the PHD filter when a priori knowledge of the scene is not available. Moreover, the dynamic and observation models are not limited to the PHD filter and can be applied to any Bayesian tracker that can handle state-dependent variances. Extensive experimental results on a large video surveillance dataset using a standard evaluation protocol show that the proposed filtering framework improves the accuracy of the tracker, especially in cluttered scenes.
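The key idea above, propagating the first-order moment so that the sum of particle weights approximates the expected number of targets instead of propagating the full multitarget posterior, can be sketched in one dimension. This is a minimal illustrative SMC-PHD measurement update, not the authors' implementation; the function names, the Gaussian likelihood and all parameter values are assumptions made for the example.

```python
import math

def gaussian(x, mu, sigma):
    """1D Gaussian likelihood of measurement x given state mu."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def phd_update(particles, weights, detections, p_detect=0.9, clutter=1e-3, sigma=1.0):
    """One PHD measurement update: re-weights particles so that the sum
    of the weights approximates the expected number of targets."""
    # Normaliser for each detection: clutter intensity plus the predicted
    # probability mass that explains the detection.
    denoms = [
        clutter + sum(p_detect * gaussian(z, x, sigma) * w
                      for x, w in zip(particles, weights))
        for z in detections
    ]
    new_weights = []
    for x, w in zip(particles, weights):
        total = (1.0 - p_detect) * w  # missed-detection term
        for z, d in zip(detections, denoms):
            total += p_detect * gaussian(z, x, sigma) * w / d
        new_weights.append(total)
    return new_weights
```

With particles clustered around two targets and one detection per target, the updated weights should sum to roughly two, which is how the filter estimates target cardinality without explicit data association.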


IEEE Signal Processing Magazine | 2011

Distributed and Decentralized Multicamera Tracking

Murtaza Taj; Andrea Cavallaro

We discussed emerging multicamera tracking algorithms that find their roots in signal processing, wireless sensor networks, and computer vision. Based on how cameras share estimates and fuse information, we classified these trackers as distributed, decentralized, and centralized algorithms. We also highlighted the challenges to be addressed in the design of decentralized and distributed tracking algorithms. In particular, we showed how the constraints derived from the topology of the networks and the nature of the task have so far favored decentralized architectures with multiple local fusion centers. Because of the availability of fewer fusion centers compared to distributed algorithms, decentralized algorithms can share larger amounts of data (e.g., occupancy maps) and can back-project estimates among views and fusion centers to validate results. Distributed tracking uses algorithms that can operate with smaller amounts of data at any particular node and obtain state estimates through iterative fusion. Despite recent advances, there are important issues to be addressed to achieve efficient multitarget multicamera tracking. Current algorithms either assume the track-to-measurement association information to be available for the tracker or operate on a small (known) number of targets. Algorithms performing track-to-measurement association for a time-varying number of targets with higher accuracy usually incur much higher costs, whose reduction is an important open problem to be addressed in multicamera networks.


IEEE Journal of Selected Topics in Signal Processing | 2008

Target Detection and Tracking With Heterogeneous Sensors

Huiyu Zhou; Murtaza Taj; Andrea Cavallaro

We present a multimodal detection and tracking algorithm for sensors composed of a camera mounted between two microphones. Target localization is performed on color-based change detection in the video modality and on time difference of arrival (TDOA) estimation between the two microphones in the audio modality. The TDOA is computed by multiband generalized cross correlation (GCC) analysis. The estimated directions of arrival are then postprocessed using a Riccati Kalman filter. The visual and audio estimates are finally integrated, at the likelihood level, into a particle filter (PF) that uses a zero-order motion model, and a weighted probabilistic data association (WPDA) scheme. We demonstrate that the Kalman filtering (KF) improves the accuracy of the audio source localization and that the WPDA helps to enhance the tracking performance of sensor fusion in reverberant scenarios. The combination of multiband GCC, KF, and WPDA within the particle filtering framework improves the performance of the algorithm in noisy scenarios. We also show how the proposed audiovisual tracker summarizes the observed scene by generating metadata that can be transmitted to other network nodes instead of transmitting the raw images and can be used for very low bit rate communication. Moreover, the generated metadata can also be used to detect and monitor events of interest.
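The TDOA step described above can be illustrated with a small sketch. The paper computes the TDOA with multiband generalized cross correlation; the plain time-domain cross-correlation below, with hypothetical function names and a far-field DOA conversion, only shows the underlying idea and is not the authors' implementation.

```python
import math

def tdoa_by_xcorr(left, right, max_lag):
    """Estimate the sample delay between two microphone channels by
    maximising their time-domain cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(
            left[i] * right[i + lag]
            for i in range(len(left))
            if 0 <= i + lag < len(right)
        )
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def doa_from_tdoa(tdoa_samples, fs, mic_distance, c=343.0):
    """Direction of arrival (radians) from a TDOA, assuming a far-field
    source and speed of sound c in m/s."""
    delay = tdoa_samples / fs
    return math.asin(max(-1.0, min(1.0, c * delay / mic_distance)))
```

The estimated lag, converted to seconds, bounds the achievable angular resolution by the sampling rate and the microphone spacing, which is one reason the resulting DOA estimates benefit from the Kalman smoothing described in the abstract.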


International Conference on Distributed Smart Cameras | 2009

Multi-camera track-before-detect

Murtaza Taj; Andrea Cavallaro

We present a novel multi-camera multi-target fusion and tracking algorithm for noisy data. Information fusion is an important step towards robust multi-camera tracking and allows us to reduce the effect of projection and parallax errors as well as sensor noise. Input data from each camera view are projected onto a top view through multi-level homographic transformations. These projected planes are then collapsed onto the top view to generate a detection volume. To increase track consistency with the generated noisy data we propose to use a track-before-detect particle filter (TBD-PF) on a 5D state space. TBD-PF is a Bayesian method which extends the target state with the signal intensity and evaluates each image segment against the motion model. This filters out components that belong to noise only and enables tracking without hard thresholding of the signal. We demonstrate and evaluate the proposed approach on real multi-camera data from a basketball match.
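The projection-and-collapse step can be sketched as follows. This is a simplified illustration with assumed names, not the paper's implementation: a planar homography maps an image point to the top view, and per-camera occupancy grids are summed cell-wise into a detection volume whose peaks indicate likely targets.

```python
def apply_homography(H, x, y):
    """Map an image point to the top view with a 3x3 homography
    (H given as nested lists), returning Euclidean coordinates."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

def detection_volume(per_camera_maps):
    """Collapse per-camera top-view occupancy grids (same shape) into a
    single detection volume by cell-wise summation."""
    rows, cols = len(per_camera_maps[0]), len(per_camera_maps[0][0])
    return [
        [sum(m[r][c] for m in per_camera_maps) for c in range(cols)]
        for r in range(rows)
    ]
```

Cells supported by several cameras accumulate higher values, which is what lets the TBD particle filter operate on the raw volume without a hard detection threshold.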


CLEaR | 2006

Multi-feature graph-based object tracking

Murtaza Taj; Emilio Maggio; Andrea Cavallaro

We present an object detection and tracking algorithm that addresses the problem of tracking multiple simultaneous targets in real-world surveillance scenarios. The algorithm is based on color change detection and multi-feature graph matching. The change detector uses statistical information from each color channel to discriminate between foreground and background. Changes of global illumination, dark scenes, and cast shadows are dealt with by pre-processing and post-processing stages. Graph theory is used to find the best object paths across multiple frames using a set of weighted object features, namely color, position, direction and size. The effectiveness of the proposed algorithm and the improvements in accuracy and precision introduced by the use of multiple features are evaluated on the VACE dataset.
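The weighted multi-feature association can be sketched as below. The paper solves the assignment with graph matching over multiple frames; the greedy one-to-one matcher here, with hypothetical names and feature weights, is a simplified stand-in that only illustrates how several weighted features combine into one matching cost.

```python
def feature_distance(a, b, weights):
    """Weighted distance between two object descriptors; each descriptor
    is a dict of numeric features (e.g. position, size)."""
    return sum(w * abs(a[k] - b[k]) for k, w in weights.items())

def match_objects(prev_objs, curr_objs, weights, max_cost=10.0):
    """Greedy one-to-one assignment between consecutive frames:
    repeatedly link the cheapest remaining (prev, curr) pair whose
    cost stays below max_cost."""
    pairs = sorted(
        (feature_distance(p, c, weights), i, j)
        for i, p in enumerate(prev_objs)
        for j, c in enumerate(curr_objs)
    )
    used_p, used_c, links = set(), set(), {}
    for cost, i, j in pairs:
        if cost <= max_cost and i not in used_p and j not in used_c:
            links[i] = j
            used_p.add(i)
            used_c.add(j)
    return links
```

Unmatched detections then seed new tracks, while unmatched previous objects can be kept alive for a few frames to bridge short occlusions.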


Multimedia Tools and Applications | 2010

Content and task-based view selection from multiple video streams

Fahad Daniyal; Murtaza Taj; Andrea Cavallaro

We present a content-aware multi-camera selection technique that uses object- and frame-level features. First, objects are detected using a color-based change detector. Next, trajectory information for each object is generated using multi-frame graph matching. Finally, multiple features, including size and location, are used to generate an object score. At frame level, we consider total activity, event score, number of objects and cumulative object score. These features are used to generate score information using a multivariate Gaussian distribution. The best view is selected using a Dynamic Bayesian Network (DBN), which utilizes camera network information. The DBN employs previous view information to select the current view, thus increasing resilience to frequent switching. The performance of the proposed approach is demonstrated on three multi-camera setups with semi-overlapping fields of view: a basketball game, an indoor airport surveillance scenario and a synthetic outdoor pedestrian dataset. We compare the proposed view selection approach with a maximum-score-based camera selection criterion and demonstrate a significant decrease in camera flickering. The performance of the proposed approach is also validated through subjective testing.
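The flicker-resistant selection idea can be sketched without the full DBN machinery. In this illustrative stand-in (assumed names and parameter values, not the paper's model), the previous-view dependence is reduced to a fixed bonus added to the currently selected camera's score before taking the argmax.

```python
def select_views(frame_scores, stay_bonus=0.2):
    """Pick one camera per time step from per-camera frame scores,
    biasing towards the previously selected view to reduce flicker.
    frame_scores: list over time of per-camera score lists."""
    selected = []
    prev = None
    for scores in frame_scores:
        adjusted = [
            s + (stay_bonus if cam == prev else 0.0)
            for cam, s in enumerate(scores)
        ]
        prev = max(range(len(adjusted)), key=adjusted.__getitem__)
        selected.append(prev)
    return selected
```

When two cameras have near-equal scores, a pure maximum-score rule alternates between them, while the bonus keeps the selection stable until the score gap exceeds it, which is the behaviour the comparison in the abstract measures as reduced camera flickering.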


International Conference on Acoustics, Speech, and Signal Processing | 2007

Relative Position Estimation of Non-Overlapping Cameras

Nadeem Anjum; Murtaza Taj; Andrea Cavallaro

We present an algorithm for the estimation of the relative camera position in a network of cameras with non-overlapping fields of view. The algorithm estimates the missing trajectory information in the unobserved areas of the multi-sensor configuration using both parametric and non-parametric algorithms. First, Kalman filtering is used to estimate the trajectories in the unobserved regions. Next, linear regression estimates the position of the target based upon the motion model generated from the measured positions in the field of view of each sensor. Finally, the relative orientation of the sensors is calculated using the observed and estimated target position from adjacent cameras. We demonstrate the algorithm on both synthetic and real data.
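The regression step of the abstract can be illustrated with a minimal sketch: fit a linear motion model to the positions observed inside one camera's field of view, then predict where the target travels in the unobserved region. The names are hypothetical and the one-coordinate least-squares fit below is a simplification of the paper's combined Kalman/regression pipeline.

```python
def fit_line(times, values):
    """Least-squares slope and intercept for one coordinate over time."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    slope = sum((t - mt) * (v - mv) for t, v in zip(times, values)) / \
            sum((t - mt) ** 2 for t in times)
    return slope, mv - slope * mt

def extrapolate(times, values, query_times):
    """Predict the coordinate at unobserved time instants from the
    motion model fitted to the observed samples."""
    slope, intercept = fit_line(times, values)
    return [slope * t + intercept for t in query_times]
```

Running the same extrapolation from two adjacent cameras towards the shared unobserved region yields overlapping predicted segments, from which the relative orientation of the sensors can be estimated as the abstract describes.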


International Conference on Acoustics, Speech, and Signal Processing | 2009

Audio-assisted trajectory estimation in non-overlapping multi-camera networks

Murtaza Taj; Andrea Cavallaro

We present an algorithm to improve trajectory estimation in networks of non-overlapping cameras using audio measurements. The algorithm fuses audiovisual cues in each camera's field of view and recovers trajectories in unobserved regions using microphones only. Audio source localization is performed with a Stereo Audio and Cycloptic Vision (STAC) sensor by estimating the time difference of arrival (TDOA) between the microphone pair and then computing the cross correlation. Audio estimates are then smoothed using Kalman filtering. The audio-visual fusion is performed using a dynamic weighting strategy. We show that using a multi-modal sensor with a combined visual (narrow) and audio (wider) field of view can enable extended target tracking in non-overlapping camera settings. In particular, the weighting scheme improves performance in the overlapping regions. The algorithm is evaluated in several multi-sensor configurations using synthetic data and compared with a state-of-the-art algorithm.
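A minimal sketch of the dynamic weighting idea, with assumed names and a scalar position for simplicity (the paper's weighting strategy is more elaborate): each modality contributes in proportion to its confidence, so when the target leaves the camera's narrow field of view the visual confidence drops and the fused estimate falls back to audio alone.

```python
def fuse_estimates(audio_pos, visual_pos, audio_conf, visual_conf):
    """Confidence-weighted fusion of audio and visual position
    estimates; returns None if neither modality is available."""
    total = audio_conf + visual_conf
    if total == 0.0:
        return None
    w_a = audio_conf / total
    return w_a * audio_pos + (1.0 - w_a) * visual_pos
```

This continuous hand-over between modalities is what lets the tracker bridge the unobserved gaps between non-overlapping cameras.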


International Conference on Image Processing | 2007

Multi-Modal Particle Filtering Tracking using Appearance, Motion and Audio Likelihoods

Matteo Bregonzio; Murtaza Taj; Andrea Cavallaro

We propose a multi-modal object tracking algorithm that combines appearance, motion and audio information in a particle filter. The proposed tracker fuses at the likelihood level the audio-visual observations captured with a video camera coupled with two microphones. Two video likelihoods are computed that are based on a 3D color histogram appearance model and on a color change detection, whereas an audio likelihood provides information about the direction of arrival of a target. The direction of arrival is computed based on a multi-band generalized cross-correlation function enhanced with a noise suppression and reverberation filtering that uses the precedence effect. We evaluate the tracker on single and multi-modality tracking and quantify the performance improvement introduced by integrating audio and visual information in the tracking process.
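Fusion "at the likelihood level" means each particle's weight is multiplied by the product of the per-modality likelihoods before renormalisation. The sketch below (assumed names, not the authors' code) shows that single step for the three likelihoods named in the abstract.

```python
def update_particle_weights(weights, appearance_lik, motion_lik, audio_lik):
    """Likelihood-level fusion: multiply each particle's weight by the
    product of its appearance, motion and audio likelihoods, then
    renormalise so the weights sum to one."""
    fused = [
        w * a * m * s
        for w, a, m, s in zip(weights, appearance_lik, motion_lik, audio_lik)
    ]
    total = sum(fused)
    return [f / total for f in fused] if total > 0 else weights
```

Because the modalities enter multiplicatively, a particle must be plausible under all of them to keep a high weight, which is why adding the audio likelihood sharpens the posterior rather than merely averaging noisy estimates.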


International Conference on Distributed Smart Cameras | 2007

Audiovisual Tracking Using STAC Sensors

Huiyu Zhou; Murtaza Taj; Andrea Cavallaro

In this paper we present an audiovisual tracking algorithm using a stereo audio and cycloptic vision (STAC) sensor. A STAC sensor is composed of a single camera mounted between two microphones. Target localization is performed using color-based change detection in the video modality and estimation of the time difference of arrival (TDOA) between the two microphones in the audio modality. The TDOA is computed by means of a multi-band generalized cross correlation (GCC) analysis. The estimated directions of arrival are then filtered using a Riccati Kalman filter. The visual and audio estimates are finally integrated, at the likelihood level, into a particle filter that uses a zero-order motion model. We demonstrate that the Kalman filtering improves the accuracy of the audio source localization and of the sensor fusion with particle filtering under image occlusions. We also demonstrate that the proposed audiovisual tracker can generate metadata for very low bit rate teleconferencing.

Collaboration

Dive into Murtaza Taj's collaborations.

Top co-authors:

Andrea Cavallaro (Queen Mary University of London)
Emilio Maggio (Queen Mary University of London)
Fahad Daniyal (Queen Mary University of London)
Ali Hassan (Lahore University of Management Sciences)
Abdul Rafay Khalid (Lahore University of Management Sciences)
Reema Bajwa (Lahore University of Management Sciences)
Sohaib Khan (Lahore University of Management Sciences)
Syed Rizwan Gilani (Lahore University of Management Sciences)
Gabin Kayumbi (Queen Mary University of London)