Faisal Z. Qureshi
University of Ontario Institute of Technology
Publications
Featured research published by Faisal Z. Qureshi.
Computer Vision and Pattern Recognition | 2007
Faisal Z. Qureshi; Demetri Terzopoulos
This paper advocates a virtual vision paradigm and demonstrates its usefulness in camera sensor network research. Virtual vision prescribes the use of a visually and behaviorally realistic virtual environment simulator in the design and evaluation of surveillance systems. The impediments to deploying and experimenting with appropriately complex camera networks make virtual vision an attractive alternative for many vision researchers who wish to investigate high-level multi-camera control issues within such networks. In particular, we present two prototype surveillance systems comprising passive and active pan/tilt/zoom cameras. We deploy these systems in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras situated throughout this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring extensive public spaces. Our novel multi-camera control strategies enable the cameras to collaborate in persistently observing pedestrians of interest that move across their fields of view and in capturing close-up videos of pedestrians as they travel through designated areas. The sensor networks support task-dependent camera node selection and aggregation through local decision-making and inter-node communication. Our approach to multi-camera control is robust to node failures and message loss.
Multimedia Systems | 2006
Faisal Z. Qureshi; Demetri Terzopoulos
We present a surveillance system, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures high-resolution videos of pedestrians as they move through a designated area. A wide-FOV static camera can track multiple pedestrians, while any PTZ active camera can capture high-quality videos of one pedestrian at a time. We formulate the multi-camera control strategy as an online scheduling problem and propose a solution that combines the information gathered by the wide-FOV cameras with weighted round-robin scheduling to guide the available PTZ cameras, such that each pedestrian is observed by at least one PTZ camera while in the designated area. A centerpiece of our work is the development and testing of experimental surveillance systems within a visually and behaviorally realistic virtual environment simulator. The simulator is valuable as our research would be largely infeasible in the real world given the impediments to deploying and experimenting with appropriately complex camera sensor networks in large public spaces. In particular, we demonstrate our surveillance system in a virtual train station environment populated by autonomous, lifelike virtual pedestrians, wherein easily reconfigurable virtual cameras generate synthetic video feeds. The video streams emulate those generated by real surveillance cameras monitoring richly populated public spaces.
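As a rough illustration of the weighted round-robin idea, the Python sketch below allocates the available PTZ cameras each scheduling round; the deficit-style credit accounting, the pedestrian weights, and the `Pedestrian` structure are assumptions for the sketch, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Pedestrian:
    pid: int
    weight: int          # e.g., proportional to expected remaining time in the area
    credit: int = 0      # accumulated service credit (deficit round-robin style)

def weighted_round_robin(pedestrians: List[Pedestrian], num_ptz: int) -> List[int]:
    """Pick up to `num_ptz` pedestrians to be recorded this scheduling round.

    Each pedestrian accumulates credit in proportion to its weight; the
    pedestrians with the most credit are served and their credit is reset.
    """
    for p in pedestrians:
        p.credit += p.weight
    chosen = sorted(pedestrians, key=lambda p: p.credit, reverse=True)[:num_ptz]
    for p in chosen:
        p.credit = 0
    return [p.pid for p in chosen]

# Example: three pedestrians competing for two PTZ cameras over several rounds.
peds = [Pedestrian(1, weight=3), Pedestrian(2, weight=1), Pedestrian(3, weight=2)]
for t in range(4):
    print(t, weighted_round_robin(peds, num_ptz=2))
```

Over repeated rounds, higher-weight pedestrians are served more often, while low-weight pedestrians still accumulate credit and are eventually observed.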
International Conference on Distributed Smart Cameras | 2007
Faisal Z. Qureshi; Demetri Terzopoulos
This paper presents our research towards smart camera networks capable of carrying out advanced surveillance tasks with little or no human supervision. A unique centerpiece of our work is the combination of computer graphics, artificial life, and computer vision simulation technologies to develop such networks and experiment with them. Specifically, we demonstrate a smart camera network comprising static and active simulated video surveillance cameras that provides extensive coverage of a large virtual public space, a train station populated by autonomously self-animating virtual pedestrians. The realistically simulated network of smart cameras performs persistent visual surveillance of individual pedestrians with minimal intervention. Our innovative camera control strategy naturally addresses camera aggregation and handoff, is robust against camera and communication failures, and requires no camera calibration, detailed world model, or central controller.
International Conference on Distributed Smart Cameras | 2009
Faisal Z. Qureshi; Demetri Terzopoulos
We present a visual sensor network, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures high-quality surveillance video of selected pedestrians during their prolonged presence in an area of interest. A wide-FOV static camera can track multiple pedestrians, while any PTZ active camera can follow a single pedestrian at a time. The proactive control of multiple PTZ cameras is required to record seamless, high-quality video of a roaming individual despite the observational constraints of the different cameras. We formulate PTZ camera assignment and handoff as a planning problem whose solution achieves optimal camera assignment with respect to predefined observational goals.
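One simple reading of the assignment-and-handoff planning formulation is a search over camera-to-pedestrian assignments scored against observational goals; the utility table, the handoff penalty, and the exhaustive search in this sketch are assumptions, and it is feasible only for very small networks.

```python
from itertools import permutations
from typing import Dict, List, Tuple

def plan_assignment(cameras: List[str],
                    pedestrians: List[str],
                    utility: Dict[Tuple[str, str], float],
                    current: Dict[str, str],
                    handoff_cost: float = 0.2) -> Dict[str, str]:
    """Exhaustively search camera-to-pedestrian assignments and return the one
    maximizing total utility minus a penalty for handing a pedestrian off to a
    new camera. A real planner would also search over time, not just one step."""
    best, best_score = {}, float("-inf")
    k = min(len(cameras), len(pedestrians))
    for cams in permutations(cameras, k):
        assignment = dict(zip(cams, pedestrians))
        score = sum(utility.get((c, p), 0.0) -
                    (handoff_cost if current.get(p) not in (None, c) else 0.0)
                    for c, p in assignment.items())
        if score > best_score:
            best, best_score = assignment, score
    return best

# Example: two PTZ cameras, two pedestrians, visibility-based utilities (assumed values).
u = {("ptz1", "pedA"): 0.9, ("ptz1", "pedB"): 0.4,
     ("ptz2", "pedA"): 0.5, ("ptz2", "pedB"): 0.8}
print(plan_assignment(["ptz1", "ptz2"], ["pedA", "pedB"], u, current={"pedA": "ptz2"}))
```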
International Conference on Computer Communications and Networks | 2005
Faisal Z. Qureshi; Demetri Terzopoulos
The goals of this paper are two-fold: (i) to present our initial efforts towards the realization of a fully autonomous sensor network of dynamic video cameras capable of providing perceptive coverage of a large public space, and (ii) to further the cause of exploiting visually and behaviorally realistic virtual environments in the development and testing of machine vision systems. In particular, our proposed sensor network employs techniques that enable a collection of active (pan-tilt-zoom) cameras to collaborate in performing various visual surveillance tasks, such as keeping one or more pedestrians within view, with minimal reliance on a human operator. The network features local and global autonomy and lacks any central controller, which entails robustness and scalability. Its functionality is the result of local decision-making capabilities at each camera node and communication between the nodes. We demonstrate our surveillance system in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. Our readily reconfigurable virtual cameras generate synthetic video feeds that emulate those generated by real surveillance cameras monitoring public spaces. This type of research would be difficult in the real world given the costs of deploying and experimenting with an appropriately complex camera network in a large public space the size of a train station.
Advanced Video and Signal Based Surveillance | 2009
Faisal Z. Qureshi
This paper presents a framework for preserving privacy in video surveillance. Raw video is decomposed into a background and one or more object-video streams. Object-video streams can be combined to render the scene in a variety of ways: 1) the original video can be reconstructed from object-video streams without any data loss; 2) individuals in the scene can be represented as blobs, obscuring their identities; 3) foreground objects can be color coded to convey subtle scene information to the operator, again without revealing the identities of the individuals present in the scene; 4) the scene can be partially rendered, i.e., revealing the identities of some individuals, while preserving the anonymity of others. We evaluate our approach in a virtual train station environment populated by autonomous, lifelike virtual pedestrians.
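The background/object-video decomposition can be approximated with off-the-shelf background subtraction; the OpenCV-based sketch below (MOG2 subtractor, flat-grey "blob" and colour-coded renderings) is one assumed realization rather than the paper's actual pipeline, and per-individual partial rendering would additionally require per-object segmentation and tracking.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def render(frame: np.ndarray, mode: str = "blob") -> np.ndarray:
    """Re-render a frame according to a privacy mode.

    'original' - pass the frame through unchanged,
    'blob'     - paint foreground objects as flat anonymous silhouettes,
    'color'    - colour-code foreground regions without revealing identity.
    """
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)
    out = frame.copy()
    if mode == "original":
        return out
    fg = mask == 255                      # 255 = foreground, 127 = shadow, 0 = background
    if mode == "blob":
        out[fg] = (128, 128, 128)         # anonymous grey silhouettes
    elif mode == "color":
        out[fg] = (0, 0, 255)             # colour-coded foreground (BGR red)
    return out
```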
International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems | 2011
Mukhtaj S. Barhm; Nidal Qwasmi; Faisal Z. Qureshi; Khalil El-Khatib
We propose a novel privacy-aware video surveillance system. The proposed system encodes privacy preferences using the P3P-APPEL framework, which was first proposed for managing data privacy on the web. To this end, we have proposed extensions to P3P-APPEL to make it suitable for video surveillance applications. A noteworthy feature of the proposed system is its ability to interact with individuals present in the scene. Users with appropriate security credentials have access to one of three privacy settings: L0 (no privacy), L1 (face blur), and L2 (full body blur). Users can thus choose the level of privacy (or surveillance) they are comfortable with. This is an extremely desirable capability that shifts the relationship between those who are observed and those who operate video surveillance systems.
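The sketch below shows how the three privacy settings might be enforced at render time; the `face_box`/`body_box` regions and the Gaussian-blur obfuscation are hypothetical stand-ins for whatever detector and anonymization the real system uses.

```python
import cv2
import numpy as np

PRIVACY_LEVELS = {"L0": "no privacy", "L1": "face blur", "L2": "full body blur"}

def apply_privacy(frame: np.ndarray, level: str,
                  face_box: tuple, body_box: tuple) -> np.ndarray:
    """Blur the face or full-body region depending on the user's chosen level.

    `face_box`/`body_box` are (x, y, w, h) rectangles assumed to come from an
    upstream detector/tracker (hypothetical here)."""
    out = frame.copy()
    if level == "L0":                      # no privacy: raw video
        return out
    x, y, w, h = face_box if level == "L1" else body_box
    roi = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (31, 31), 0)
    return out
```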
Distributed Computing in Sensor Systems | 2007
Faisal Z. Qureshi; Demetri Terzopoulos
We propose a distributed coalition formation strategy for collaborative sensing tasks in camera sensor networks. The proposed model supports task-dependent node selection and aggregation through an announcement/bidding/selection strategy. It resolves node assignment conflicts by solving an equivalent constraint satisfaction problem. Our technique is scalable, as it lacks any central controller, and it is robust to node failures and imperfect communication. Another unique aspect of our work is that we advocate visually and behaviorally realistic virtual environments as a simulation tool in support of research on large-scale camera sensor networks. Specifically, our visual sensor network comprises uncalibrated static and active simulated video surveillance cameras deployed in a virtual train station populated by autonomously self-animating pedestrians. The readily reconfigurable virtual cameras generate synthetic video feeds that emulate those generated by real surveillance cameras monitoring public spaces. Our simulation approach, which runs on high-end commodity PCs, has proven to be beneficial because this type of research would be difficult to carry out in the real world in view of the impediments to deploying and experimenting with an appropriately complex camera network in extensive public spaces.
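The selection step of the announcement/bidding/selection protocol could look roughly like the following; the `Bid` structure, the relevance scores, and the simple busy-camera check standing in for the constraint-satisfaction conflict resolution are all assumptions of this sketch.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Bid:
    camera_id: str
    relevance: float   # e.g., how well this camera can observe the target

def announce_and_select(task_id: str,
                        bids: List[Bid],
                        busy: Dict[str, str],
                        coalition_size: int = 2) -> List[str]:
    """Select a coalition for a sensing task from the bids received.

    Announcement and bidding are assumed to have happened over the network;
    only the selection step is resolved here. Cameras already committed to
    another task (a stand-in for full constraint-based conflict resolution)
    are skipped."""
    free = [b for b in bids if busy.get(b.camera_id) in (None, task_id)]
    chosen = sorted(free, key=lambda b: b.relevance, reverse=True)[:coalition_size]
    for b in chosen:
        busy[b.camera_id] = task_id
    return [b.camera_id for b in chosen]

# Example: three cameras bid on a task; cam2 is already committed elsewhere.
busy = {"cam2": "task_7"}
bids = [Bid("cam1", 0.8), Bid("cam2", 0.9), Bid("cam3", 0.6)]
print(announce_and_select("task_9", bids, busy))   # -> ['cam1', 'cam3']
```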
Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 2004
Faisal Z. Qureshi; Demetri Terzopoulos; Ross Gillett
One of the difficulties of using Artificial Neural Networks (ANNs) to estimate atmospheric temperature is the large number of potential input variables available. In this study, four different feature extraction methods were used to reduce the input vector to train four networks to estimate temperature at different atmospheric levels. The four techniques used were: genetic algorithms (GA), coefficient of determination (CoD), mutual information (MI) and simple neural analysis (SNA). The results demonstrate that of the four methods used for this data set, mutual information and simple neural analysis can generate networks that have a smaller input parameter set, while still maintaining a high degree of accuracy.
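As a small illustration of mutual-information-based input reduction, the scikit-learn sketch below selects a handful of inputs before training a small network on synthetic data; the data, the value of k, and the estimator choices are assumptions for demonstration only, not the study's setup.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                                    # 40 candidate input variables
y = X[:, 3] * 2.0 - X[:, 17] + rng.normal(scale=0.1, size=500)    # only two actually matter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Keep the k inputs with the highest estimated mutual information with the target.
selector = SelectKBest(mutual_info_regression, k=5).fit(X_tr, y_tr)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(selector.transform(X_tr), y_tr)

print("selected inputs:", np.flatnonzero(selector.get_support()))
print("R^2 on held-out data:", net.score(selector.transform(X_te), y_te))
```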
International Conference on Distributed Smart Cameras | 2011
Wiktor Starzyk; Faisal Z. Qureshi
This paper introduces a camera network capable of automatically learning proactive control strategies that enable a set of active pan/tilt/zoom (PTZ) cameras, supported by wide-FOV passive cameras, to provide persistent coverage of the scene. When a situation is encountered for the first time, a reasoning module performs PTZ camera assignments and handoffs. The results of this reasoning exercise are 1) generalized so as to be applicable to many other similar situations and 2) stored in a production system for later use. When a "similar" situation is encountered in the future, the production system reacts instinctively and performs camera assignments and handoffs, bypassing the reasoning module. Over time, the proposed camera network reduces its reliance on the reasoning module to perform camera assignments and handoffs, consequently becoming more responsive and computationally efficient.
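A schematic of this learn-then-react loop might look like the following: a slow reasoning module (here a hypothetical `reason` callable) is consulted only for unfamiliar situations, and its generalized result is cached as a production rule keyed on a coarse situation signature. The signature function and rule format are assumptions of the sketch.

```python
from typing import Callable, Dict, Tuple

Situation = dict            # e.g., {"zone": "platform", "num_targets": 2, ...}
Assignment = Dict[str, str] # camera id -> pedestrian id

class ProductionSystem:
    def __init__(self, reason: Callable[[Situation], Assignment]):
        self.reason = reason                        # slow reasoning module (hypothetical)
        self.rules: Dict[Tuple, Assignment] = {}    # learned production rules

    @staticmethod
    def generalize(situation: Situation) -> Tuple:
        """Coarse signature so 'similar' situations map to the same rule."""
        return (situation["zone"], min(situation["num_targets"], 3))

    def assign(self, situation: Situation) -> Assignment:
        key = self.generalize(situation)
        if key not in self.rules:                   # unfamiliar: invoke the reasoning module
            self.rules[key] = self.reason(situation)
        return self.rules[key]                      # familiar: react instinctively
```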