Publication


Featured research published by Osama Masoud.


IEEE Transactions on Intelligent Transportation Systems | 2002

Detection and classification of vehicles

Surendra Gupte; Osama Masoud; Robert F. K. Martin; Nikolaos Papanikolopoulos

This paper presents algorithms for vision-based detection and classification of vehicles in monocular image sequences of traffic scenes recorded by a stationary camera. Processing is done at three levels: raw images, region level, and vehicle level. Vehicles are modeled as rectangular patches with certain dynamic behavior. The proposed method is based on the establishment of correspondences between regions and vehicles as the vehicles move through the image sequence. Experimental results from highway scenes are provided that demonstrate the effectiveness of the method. We also briefly describe an interactive camera calibration tool that we have developed for recovering the camera parameters using features in the image selected by the user.
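
As a concrete illustration of the region-level idea (moving blobs extracted from a stationary camera and wrapped in rectangular patches), here is a minimal sketch using generic OpenCV primitives. The MOG2 background subtractor, the thresholds, and the traffic.avi input are assumptions of this sketch, not the paper's actual raw-image and correspondence machinery.

```python
# Illustrative sketch only: region-level extraction of candidate vehicle
# patches with a generic background subtractor. Thresholds and the input
# file name are assumptions, not values from the paper.
import cv2

def extract_regions(frame, subtractor, min_area=400):
    """Return bounding rectangles of moving regions in one frame."""
    mask = subtractor.apply(frame)                               # raw-image level
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    mask = cv2.medianBlur(mask, 5)                               # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # region level: keep sufficiently large blobs as rectangular vehicle patches
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

if __name__ == "__main__":
    cap = cv2.VideoCapture("traffic.avi")                        # hypothetical input clip
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    ok, frame = cap.read()
    while ok:
        for (x, y, w, h) in extract_regions(frame, bg):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        ok, frame = cap.read()
```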


Vehicular Technology Conference | 2001

A novel method for tracking and counting pedestrians in real-time using a single camera

Osama Masoud; Nikolaos Papanikolopoulos

This paper presents a real-time system for pedestrian tracking in sequences of grayscale images acquired by a stationary camera. The objective is to integrate this system with a traffic control application such as a pedestrian control scheme at intersections. The proposed approach can also be used to detect and track humans in front of vehicles. Furthermore, the proposed schemes can be employed for the detection of several diverse traffic objects of interest (vehicles, bicycles, etc.). The system outputs the spatio-temporal coordinates of each pedestrian during the period the pedestrian is in the scene. Processing is done at three levels: raw images, blobs, and pedestrians. Blob tracking is modeled as a graph optimization problem. Pedestrians are modeled as rectangular patches with a certain dynamic behavior. Kalman filtering is used to estimate pedestrian parameters. The system was implemented on a Datacube MaxVideo 20 equipped with a Datacube Max860 and was able to achieve a peak performance of over 30 frames per second. Experimental results based on indoor and outdoor scenes demonstrated the system's robustness under many difficult situations such as partial or full occlusions of pedestrians.
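
For readers who want a feel for the pedestrian-level model, below is a minimal constant-velocity Kalman filter over a rectangular patch (x, y, w, h). The state layout, time step, and noise covariances are assumptions chosen for the sketch rather than the values used in the paper.

```python
import numpy as np

class PatchKalman:
    """Constant-velocity Kalman filter for a rectangular patch (x, y, w, h)."""

    def __init__(self, x, y, w, h, dt=1 / 30.0):
        # state: [x, y, w, h, vx, vy]; measurement: [x, y, w, h]
        self.s = np.array([x, y, w, h, 0.0, 0.0], dtype=float)
        self.P = np.eye(6) * 10.0                  # initial state uncertainty
        self.F = np.eye(6)                         # constant-velocity transition
        self.F[0, 4] = self.F[1, 5] = dt
        self.H = np.zeros((4, 6))                  # measure position and size only
        self.H[:4, :4] = np.eye(4)
        self.Q = np.eye(6) * 1e-2                  # process noise (assumed)
        self.R = np.eye(4) * 1.0                   # measurement noise (assumed)

    def predict(self):
        """Propagate the state one frame ahead; returns the predicted box."""
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:4]

    def update(self, box):
        """Correct the state with a measured (x, y, w, h) box."""
        y = np.asarray(box, dtype=float) - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.s[:4]
```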


International Conference on Intelligent Transportation Systems | 2003

Computer vision algorithms for intersection monitoring

Harini Veeraraghavan; Osama Masoud; Nikolaos Papanikolopoulos

The goal of this project is to monitor activities at traffic intersections for detecting/predicting situations that may lead to accidents. Some of the key elements for robust intersection monitoring are camera calibration, motion tracking, and incident detection. In this paper, we consider the motion-tracking problem. A multilevel tracking approach using Kalman filtering is presented for tracking vehicles and pedestrians at intersections. The approach combines low-level image-based blob tracking with high-level Kalman filtering for position and shape estimation. An intermediate occlusion-reasoning module detects occlusions and filters the relevant measurements. Motion segmentation is performed using a mixture-of-Gaussians model, which helps us achieve fairly reliable tracking in a variety of complex outdoor scenes. A visualization module is also presented. This module visualizes the results of the tracker and serves as a platform for the incident-detection module.
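
The sketch below illustrates the flavour of the intermediate occlusion-reasoning step: a blob that overlaps more than one predicted track is treated as a merged (occluded) measurement and discarded, so the affected Kalman tracks coast on their predictions. The IoU test and its 0.3 threshold are assumptions, not the criteria used in the paper.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def filter_measurements(predicted_boxes, blobs, merge_thresh=0.3):
    """Keep only blobs that unambiguously match a single predicted track box."""
    usable = []
    for blob in blobs:
        hits = [i for i, box in enumerate(predicted_boxes)
                if iou(box, blob) > merge_thresh]
        if len(hits) == 1:
            usable.append((hits[0], blob))   # update exactly this track
        # otherwise the blob spans an occlusion and the affected tracks coast
    return usable
```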


IEEE Transactions on Intelligent Transportation Systems | 2005

Detection of loitering individuals in public transportation areas

Nathaniel D. Bird; Osama Masoud; Nikolaos Papanikolopoulos; Aaron Isaacs

This paper presents a vision-based method to automatically detect individuals loitering about inner-city bus stops. Using a stationary camera view of a bus stop, pedestrians are segmented and tracked throughout the scene. The system takes snapshots of individuals when a clean, nonobstructed view of a pedestrian is found. The snapshots are then used to classify the individual images into a database, using an appearance-based method. The features used to correlate individual images are based on short-term biometrics, which are changeable but stay valid for short periods of time; this system uses clothing color. A linear discriminant method is applied to the color information to enhance the differences and minimize similarities between the different individuals in the feature space. To determine if a given individual is loitering, time stamps collected with the snapshots in their corresponding database class can be used to judge how long an individual has been present. An experiment was performed using a 30-min video of a busy bus stop with six individuals loitering about it. Results show that the system successfully classifies images of all six individuals as loitering.
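
A small sketch of the two ingredients named in the abstract, clothing-colour features passed through a linear discriminant and a dwell-time test on snapshot time stamps, is given below. The histogram resolution, the 60-second threshold, and the use of scikit-learn's LinearDiscriminantAnalysis are assumptions made for illustration only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def color_feature(snapshot_bgr, bins=8):
    """Coarse colour histogram of a pedestrian snapshot (H x W x 3, uint8)."""
    hist, _ = np.histogramdd(snapshot_bgr.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def fit_identity_space(features, labels):
    """Learn a discriminant projection from snapshots labelled by identity."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(features, labels)
    return lda

def is_loitering(timestamps_s, min_dwell_s=60.0):
    """Flag an identity whose snapshots span at least min_dwell_s seconds."""
    return (max(timestamps_s) - min(timestamps_s)) >= min_dwell_s
```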


Computer Vision and Image Understanding | 2008

Estimating pedestrian counts in groups

Prahlad Kilambi; Evan Ribnick; Ajay J. Joshi; Osama Masoud; Nikolaos Papanikolopoulos

The goal of this work is to provide a system that can aid in monitoring crowded urban environments, which often contain tight groups of people. In this paper, we consider the problem of counting the number of people in the scene and also tracking them reliably. We propose a novel method for detecting and estimating the count of people in groups, dense or otherwise, as well as tracking them. Using prior knowledge obtained from the scene and accurate camera calibration, the system learns the parameters required for estimation. This information can then be used to estimate the count of people in the scene in real time. Groups are tracked in the same manner as individuals, using Kalman filtering techniques. Favorable results are shown for groups of various sizes moving in an unconstrained fashion.
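
One plausible way to turn a calibrated group blob into a head count, loosely in the spirit described above, is to project its footprint onto the ground plane and divide by a learned area-per-person. The homography, the 0.35 m² constant, and the rounding rule below are assumptions of this sketch, not the parameters learned by the system.

```python
import numpy as np
import cv2

def ground_area_m2(blob_polygon_px, H_img_to_ground):
    """Ground-plane area (m^2) of an image-space blob footprint polygon."""
    pts = np.asarray(blob_polygon_px, dtype=np.float32).reshape(-1, 1, 2)
    ground_pts = cv2.perspectiveTransform(
        pts, np.asarray(H_img_to_ground, dtype=np.float64))
    return cv2.contourArea(ground_pts.astype(np.float32))

def estimate_count(blob_polygon_px, H_img_to_ground, area_per_person_m2=0.35):
    """Estimate the number of people in a tight group blob."""
    area = ground_area_m2(blob_polygon_px, H_img_to_ground)
    return max(1, round(area / area_per_person_m2))
```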


IEEE Transactions on Intelligent Transportation Systems | 2005

A vision-based approach to collision prediction at traffic intersections

Stefan Atev; Hemanth K. Arumugam; Osama Masoud; Ravi Janardan; Nikolaos Papanikolopoulos

Monitoring traffic intersections in real time and predicting possible collisions is an important first step towards building an early collision-warning system. We present a vision-based system addressing this problem and describe the practical adaptations necessary to achieve real-time performance. Innovative low-overhead collision-prediction algorithms (such as the one using the time-as-axis paradigm) are presented. The proposed system was able to perform successfully in real time on videos of quarter-VGA (320 × 240) resolution under various weather conditions. The errors in target position and dimension estimates in a test video sequence are quantified and several experimental results are presented.
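
As a toy illustration of collision prediction, the sketch below extrapolates two tracked boxes forward under a constant-velocity model and reports the first time their extents overlap; it is only loosely inspired by the time-as-axis idea, and the horizon and time step are assumptions.

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for (x, y, w, h) boxes."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def predict_collision(box_a, vel_a, box_b, vel_b, horizon_s=3.0, dt=0.1):
    """Return the earliest predicted collision time in seconds, or None."""
    t = 0.0
    while t <= horizon_s:
        a = (box_a[0] + vel_a[0] * t, box_a[1] + vel_a[1] * t,
             box_a[2], box_a[3])
        b = (box_b[0] + vel_b[0] * t, box_b[1] + vel_b[1] * t,
             box_b[2], box_b[3])
        if boxes_overlap(a, b):
            return t
        t += dt
    return None
```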


Image and Vision Computing | 2009

View-independent human motion classification using image-based reconstruction

Robert Bodor; Andrew Drenner; Duc Fehr; Osama Masoud; Nikolaos Papanikolopoulos

We introduce in this paper a novel method for employing image-based rendering to extend the range of applicability of human motion and gait recognition systems. Much work has been done in the field of human motion and gait recognition, and many interesting methods for detecting and classifying motion have been developed. However, systems that can robustly recognize human behavior in real-world contexts have yet to be developed. A significant reason for this is that the activities of humans in typical settings are unconstrained in terms of the motion path. People are free to move throughout the area of interest in any direction they like. While there have been many good classification systems developed in this domain, the majority of these systems have used a single camera providing input to a training-based learning method. Methods that rely on a single camera are implicitly view-dependent. In practice, the classification accuracy of these systems often becomes increasingly poor as the angle between the camera and the direction of motion varies away from the training view angle. As a result, these methods have limited real-world applications, since it is often impossible to limit the direction of motion of people so rigidly. We demonstrate the use of image-based rendering to adapt the input to meet the needs of the classifier by automatically constructing the proper view (image) that matches the training view from a combination of arbitrary views taken from several cameras. We tested the method on 162 sequences of video data of human motions taken indoors and outdoors, and promising results were obtained.
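
A heavily simplified stand-in for the view-selection aspect is sketched below: given the person's walking direction and each camera's ground-plane viewing direction, pick the camera whose angle to the motion best matches a side-on training view. The actual paper renders a new view from a combination of cameras; the 90-degree target and the angle test here are assumptions of the sketch.

```python
import numpy as np

def best_camera(motion_dir_xy, camera_dirs_xy, target_angle_deg=90.0):
    """Index of the camera whose viewing angle to the motion is closest to target."""
    m = np.asarray(motion_dir_xy, dtype=float)
    m /= np.linalg.norm(m)
    best, best_err = None, np.inf
    for i, c in enumerate(camera_dirs_xy):
        c = np.asarray(c, dtype=float)
        c /= np.linalg.norm(c)
        # angle between motion and camera viewing direction, folded into [0, 90]
        angle = np.degrees(np.arccos(np.clip(abs(np.dot(m, c)), 0.0, 1.0)))
        err = abs(angle - target_angle_deg)
        if err < best_err:
            best, best_err = i, err
    return best
```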


International Conference on Robotics and Automation | 2006

Real time, online detection of abandoned objects in public areas

Nathaniel D. Bird; Stefan Atev; Nicolas Caramelli; Robert F. K. Martin; Osama Masoud; Nikolaos Papanikolopoulos

This work presents a method for detecting abandoned objects in real-world conditions. The method presented here addresses the online and real-time aspects of such systems, utilizes logic to differentiate between abandoned objects and stationary people, and is robust to temporary occlusion of potential abandoned objects. The capacity not to flag still people as abandoned objects is a major aspect that differentiates this work from others in the literature. Results are presented on 3 hours and 36 minutes of footage over four videos representing both sparsely and densely populated real-world situations, which further distinguishes this evaluation from others in the literature.
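
The sketch below shows one common dual-background heuristic for finding objects that have stopped moving (foreground for a slow-adapting model but already absorbed by a fast-adapting one). It is not the paper's method and omits the person-versus-object reasoning the abstract highlights; the learning rates and the area filter are assumptions.

```python
import cv2

# Two background models with different adaptation speeds (values are assumptions).
fast_bg = cv2.createBackgroundSubtractorMOG2(history=100)
slow_bg = cv2.createBackgroundSubtractorMOG2(history=3000)

def static_candidates(frame, min_area=500):
    """Bounding boxes of regions that stopped moving but are not yet background."""
    fast = fast_bg.apply(frame, learningRate=0.02)
    slow = slow_bg.apply(frame, learningRate=0.0005)
    _, fast_fg = cv2.threshold(fast, 200, 255, cv2.THRESH_BINARY)
    _, slow_fg = cv2.threshold(slow, 200, 255, cv2.THRESH_BINARY)
    # static foreground: absorbed by the fast model, still flagged by the slow one
    static = cv2.bitwise_and(slow_fg, cv2.bitwise_not(fast_fg))
    static = cv2.morphologyEx(static, cv2.MORPH_OPEN,
                              cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(static, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```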


International Conference on Intelligent Transportation Systems | 2002

Monitoring crowded traffic scenes

Benjamin Maurin; Osama Masoud; Nikolaos Papanikolopoulos

This paper deals with real-time image processing of crowded outdoor scenes with the objective of creating an effective traffic management system that monitors urban settings (urban intersections, streets after athletic events, etc.). The proposed system can detect, track, and monitor both pedestrians (crowds) and vehicles. We describe the characteristics of the tracker that is based on a new detection method. Initially, we produce a motion estimation map. This map is then segmented and analyzed in order to remove inherent noise and focus on particular regions. Tracking of these regions is then performed in two steps: fusion and measurement of the current position and velocity, and then estimation of the next position based on a simple model. The instability of tracking is addressed by a multiple-level approach to the problem. The computed data are then analyzed to produce motion statistics. Experimental results from various sites in the Twin Cities area are presented. The final step is to provide this information to an urban traffic management center that monitors crowds and vehicles in the streets.
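
To make the first two stages concrete, the sketch below builds a dense motion-estimation map with Farneback optical flow (a generic stand-in for the paper's detection method) and segments it into candidate moving regions. The thresholds are assumptions.

```python
import cv2
import numpy as np

def motion_regions(prev_gray, gray, mag_thresh=1.0, min_area=300):
    """Segment a dense motion-estimation map into candidate moving regions."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)                 # motion estimation map
    mask = (mag > mag_thresh).astype(np.uint8) * 255   # segment the map
    mask = cv2.medianBlur(mask, 5)                     # remove inherent noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```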


Intelligent Robots and Systems | 2006

Learning Traffic Patterns at Intersections by Spectral Clustering of Motion Trajectories

Stefan Atev; Osama Masoud; Nikolaos Papanikolopoulos

We address the problem of automatically learning the layout of a traffic intersection from trajectories of vehicles obtained by a vision tracking system. We present a similarity measure that is suitable for use with spectral clustering in problems that emphasize spatial distinctions between vehicle trajectories. The robustness of the method to small perturbations and its sensitivity to the choice of parameters are evaluated using real-world data.
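
The overall recipe can be sketched as: resample each trajectory, build a pairwise similarity matrix, and run spectral clustering on it. The Gaussian-of-mean-distance similarity below is a generic stand-in rather than the measure proposed in the paper, and sigma and the cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def resample(traj, n=32):
    """Resample an (m, 2) trajectory to n points evenly spaced by arc length."""
    traj = np.asarray(traj, dtype=float)
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(traj, axis=0), axis=1))]
    t = np.linspace(0.0, d[-1], n)
    return np.column_stack([np.interp(t, d, traj[:, 0]),
                            np.interp(t, d, traj[:, 1])])

def cluster_trajectories(trajs, k=4, sigma=50.0):
    """Cluster vehicle trajectories into k spatial groups."""
    pts = np.stack([resample(t) for t in trajs])              # (N, 32, 2)
    # mean pointwise distance between every pair of resampled trajectories
    dists = np.linalg.norm(pts[:, None] - pts[None, :], axis=3).mean(axis=2)
    affinity = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))
    return SpectralClustering(n_clusters=k,
                              affinity="precomputed").fit_predict(affinity)
```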

Collaboration


Dive into Osama Masoud's collaborations.

Top Co-Authors

Stefan Atev
University of Minnesota

Robert Bodor
University of Minnesota

Daniel Boley
University of Minnesota

Dongwei Cao
University of Minnesota

Evan Ribnick
University of Minnesota