Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sergio A. Velastin is active.

Publication


Featured research published by Sergio A. Velastin.


IEEE Transactions on Intelligent Transportation Systems | 2011

A Review of Computer Vision Techniques for the Analysis of Urban Traffic

Norbert Erich Buch; Sergio A. Velastin; James Orwell

Automatic video analysis from urban surveillance cameras is a fast-emerging field based on computer vision techniques. We present here a comprehensive review of the state-of-the-art computer vision for traffic video with a critical analysis and an outlook to future research directions. This field is of increasing relevance for intelligent transport systems (ITSs). The decreasing hardware cost and, therefore, the increasing deployment of cameras have opened a wide application field for video analytics. Several monitoring objectives such as congestion, traffic rule violation, and vehicle interaction can be targeted using cameras that were typically originally installed for human operators. Systems for the detection and classification of vehicles on highways have successfully been using classical visual surveillance techniques such as background estimation and motion tracking for some time. The urban domain is more challenging with respect to traffic density, lower camera angles that lead to a high degree of occlusion, and the variety of road users. Methods from object categorization and 3-D modeling have inspired more advanced techniques to tackle these challenges. There is no commonly used data set or benchmark challenge, which makes the direct comparison of the proposed algorithms difficult. In addition, evaluation under challenging weather conditions (e.g., rain, fog, and darkness) would be desirable but is rarely performed. Future work should be directed toward robust combined detectors and classifiers for all road users, with a focus on realistic conditions during evaluation.
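The classical pipeline the review refers to (background estimation followed by motion detection and tracking) can be illustrated with a short sketch. This is not the authors' system: it assumes OpenCV, a hypothetical clip traffic.mp4 and arbitrary thresholds, and only shows the kind of baseline the review discusses.

# Baseline background estimation + motion detection for traffic video.
# Illustrative only: "traffic.mp4" and all thresholds are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")
# Adaptive Gaussian-mixture background model (classical surveillance baseline).
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)                                      # foreground mask; shadows marked 127
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)    # drop shadow pixels
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))  # remove speckle noise
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]
    # 'candidates' now holds bounding boxes that a tracker/classifier would consume.
cap.release()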


Image and Vision Computing | 2006

People tracking in surveillance applications

Luis M. Fuentes; Sergio A. Velastin

This paper presents a real-time algorithm that allows robust tracking of multiple objects in complex environments. Foreground pixels are detected using luminance contrast and grouped into blobs. Blobs from two consecutive frames are matched, creating the matching matrices. Tracking is performed using direct and inverse matching matrices. This method successfully solves blob merging and splitting. Some applications in automatic surveillance systems are suggested by linking trajectories and blob position information with the events to be detected.
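The direct and inverse matching matrices can be sketched as follows. This is an illustration only: blobs are assumed to be (x, y, w, h) bounding boxes and a simple overlap test stands in for the paper's matching criterion; the merge/split interpretation follows the row/column logic described in the abstract.

import numpy as np

def overlap(a, b):
    """1 if two (x, y, w, h) boxes intersect, else 0 (illustrative matching criterion)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return int(ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah)

def matching_matrices(prev_blobs, curr_blobs):
    """Direct matrix: previous -> current blobs; inverse matrix: current -> previous."""
    direct = np.array([[overlap(p, c) for c in curr_blobs] for p in prev_blobs])
    return direct, direct.T.copy()

def interpret(direct, inverse):
    """Classify each previous blob from the row/column sums of the matrices."""
    events = []
    for i, row in enumerate(direct):
        if row.sum() == 0:
            events.append((i, "disappeared"))
        elif row.sum() > 1:
            events.append((i, "split"))           # one old blob -> several new blobs
        else:
            j = int(row.argmax())
            if inverse[j].sum() > 1:
                events.append((i, "merged"))      # several old blobs -> one new blob
            else:
                events.append((i, "matched"))
    return events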


Machine Vision and Applications | 2008

How close are we to solving the problem of automated visual surveillance?: A review of real-world surveillance, scientific progress and evaluative mechanisms

Hannah Dee; Sergio A. Velastin

The problem of automated visual surveillance has spawned a lively research area, with 2005 seeing three conferences or workshops and special issues of two major journals devoted to the topic. These alone are responsible for somewhere in the region of 240 papers and posters on automated visual surveillance before we begin to count those presented in more general fora. Many of these systems and algorithms perform one small sub-part of the surveillance task, such as motion detection. But even with low level image processing tasks it is often difficult to compare systems on the basis of published results alone. This review paper aims to answer the difficult question “How close are we to developing surveillance related systems which are really useful?” The first section of this paper considers the question of surveillance in the real world: installations, systems and practices. The main body of the paper then considers existing computer vision techniques with an emphasis on higher level processes such as behaviour modelling and event detection. We conclude with a review of the evaluative mechanisms that have grown from within the computer vision community in an attempt to provide some form of robust evaluation and cross-system comparability.


IEEE Transactions on Systems, Man, and Cybernetics, Part A | 2005

PRISMATICA: toward ambient intelligence in public transport environments

Sergio A. Velastin; Boghos A. Boghossian; Benny P. L. Lo; Jie Sun; Maria Alicia Vicencio-Silva

On-line surveillance to improve safety and security is a major requirement for the management of public transport networks and other public places. The surveillance task is a complex one involving people, management procedures, and technology. This work describes an architecture that takes into account the distributed nature of the detection processes and the need to allow for different types of devices and actuators. This was part of a major European initiative on intelligent transport systems. Because of the dominant nature of closed circuit television in surveillance, this work describes in detail a computer-vision module used in the system and its particular ability to detect situations of interest in busy conditions. The system components have been implemented, integrated, and tested in real metropolitan railway environments and are considered to be the first step toward providing ambient intelligence in such complex scenarios. Results are presented that deal not only with detection performance but also with the perception of the people who used the system regarding its effectiveness and potential impact.
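As an illustration of the distributed flavour of such an architecture (not PRISMATICA itself), here is a minimal publish/subscribe sketch: detection modules publish events, operator consoles and actuators subscribe. Event types and names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, Dict, List

@dataclass
class SurveillanceEvent:
    """One alert emitted by a detection device (camera module, help point, etc.)."""
    source_id: str            # which distributed module raised the event
    event_type: str           # e.g. "overcrowding", "intrusion" (illustrative labels)
    confidence: float
    timestamp: datetime = field(default_factory=datetime.utcnow)

class Broker:
    """Minimal hub standing in for the distributed architecture described above."""
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[SurveillanceEvent], None]]] = {}

    def subscribe(self, event_type: str, handler: Callable[[SurveillanceEvent], None]) -> None:
        self._subs.setdefault(event_type, []).append(handler)

    def publish(self, event: SurveillanceEvent) -> None:
        for handler in self._subs.get(event.event_type, []):
            handler(event)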


Advanced Video and Signal Based Surveillance | 2010

MuHAVi: A Multicamera Human Action Video Dataset for the Evaluation of Action Recognition Methods

Sanchit Singh; Sergio A. Velastin; Hossein Ragheb

This paper describes a body of multicamera human action video data with manually annotated silhouette data that has been generated for the purpose of evaluating silhouette-based human action recognition methods. It provides a realistic challenge to both the segmentation and human action recognition communities and can act as a benchmark to objectively compare proposed algorithms. The public multi-camera, multi-action dataset is an improvement over existing datasets (e.g. PETS, CAVIAR, soccer dataset) that have not been developed specifically for human action recognition and complements other action recognition datasets (KTH, Weizmann, IXMAS, HumanEva, CMU Motion). It consists of 17 action classes, 14 actors and 8 cameras. Each actor performs an action several times in the action zone. The paper describes the dataset and illustrates a possible approach to algorithm evaluation using a previously published simple action recognition method. In addition to showing an evaluation methodology, these results establish a baseline for other researchers to improve upon.
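One way to use such a dataset for objective comparison is a leave-one-actor-out protocol. The sketch below is illustrative, not the paper's evaluation method; it assumes silhouette sequences have already been reduced to fixed-length feature vectors and uses a placeholder nearest-neighbour recogniser.

import numpy as np

def leave_one_actor_out(features, labels, actors, classify):
    """Per-actor accuracy: train on all actors but one, test on the held-out actor.
    features: (N, D) array, labels: (N,) action ids, actors: (N,) actor ids.
    classify: callable(train_X, train_y, test_X) -> predicted labels."""
    scores = {}
    for actor in np.unique(actors):
        test = actors == actor
        pred = classify(features[~test], labels[~test], features[test])
        scores[actor] = float(np.mean(pred == labels[test]))
    return scores

def nearest_neighbour(train_X, train_y, test_X):
    """Minimal 1-NN classifier used as a placeholder recogniser."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    return train_y[d.argmin(axis=1)]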


Advanced Video and Signal Based Surveillance | 2009

Recognizing Human Actions Using Silhouette-based HMM

Francisco Martínez-Contreras; Elias Herrero-Jaraba; Hossein Ragheb; Sergio A. Velastin

This paper addresses the problem of silhouette-based human action modeling and recognition, especially when the number of action samples is scarce. The first step of the proposed system is the 2D modeling of human actions based on motion templates, by means of Motion History Images (MHI). These templates are projected into a new subspace using the Kohonen Self Organizing feature Map (SOM), which groups viewpoint (spatial) and movement (temporal) in a principal manifold, and models the high dimensional space of static templates. The next step is based on Hidden Markov Models (HMM) in order to track the map behavior on the temporal sequences of MHI. Every new MHI pattern is compared with the feature map obtained during training. The index of the winner neuron is considered as a discrete observation for the HMM. If the number of samples is not enough, a sampling technique, the Sampling Importance Resampling (SIR) algorithm, is applied in order to increase the number of observations for the HMM. Finally, temporal pattern recognition is accomplished by a Maximum Likelihood (ML) classifier. We demonstrate this approach on two publicly available datasets: one based on real actors and another one based on virtual actors.
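A minimal sketch of the front end described above: accumulating a Motion History Image from binary silhouettes and quantising it to a discrete symbol (standing in for the SOM winner-neuron index) that would then be fed to the HMM. The decay constant and codebook are assumptions.

import numpy as np

def motion_history(silhouettes, tau=20):
    """Motion History Image: recent motion bright, older motion decayed.
    silhouettes: list of binary (H, W) arrays for consecutive frames."""
    mhi = np.zeros_like(silhouettes[0], dtype=np.float32)
    for sil in silhouettes:
        mhi = np.where(sil > 0, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi / tau                      # normalised template in [0, 1]

def quantise(mhi, codebook):
    """Index of the closest codebook vector (stand-in for the SOM winner neuron);
    this index is the discrete observation passed to the HMM."""
    v = mhi.ravel()
    return int(np.argmin(np.linalg.norm(codebook - v, axis=1)))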


International Conference on Intelligent Transportation Systems | 2012

Vehicle detection, tracking and classification in urban traffic

Zezhi Chen; Tim Ellis; Sergio A. Velastin

This paper presents a system for vehicle detection, tracking and classification from roadside CCTV. The system counts vehicles and separates them into four categories: car, van, bus and motorcycle (including bicycles). A new background Gaussian Mixture Model (GMM) and shadow removal method have been used to deal with sudden illumination changes and camera vibration. A Kalman filter tracks a vehicle to enable classification by majority voting over several consecutive frames, and a level set method has been used to refine the foreground blob. Extensive experiments with real world data have been undertaken to evaluate system performance. The best performance results from training a SVM (Support Vector Machine) using a combination of a vehicle silhouette and intensity-based pyramid HOG features extracted following background subtraction, classifying foreground blobs with majority voting. The evaluation results from the videos are encouraging: for a detection rate of 96.39%, the false positive rate is only 1.36% and false negative rate 4.97%. Even including challenging weather conditions, classification accuracy is 94.69%.
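The classification-by-voting step can be sketched as follows, with several substitutions: OpenCV's MOG2 subtractor in place of the paper's GMM and shadow-removal method, the default OpenCV HOG descriptor instead of the silhouette plus intensity-based pyramid HOG features, and an already-trained scikit-learn SVM whose labels are assumed to be class indices 0-3.

from collections import Counter
import cv2

bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)   # GMM background model
hog = cv2.HOGDescriptor()                                     # stand-in for the paper's features
CLASSES = ["car", "van", "bus", "motorcycle"]

def classify_blob(frame, box, svm):
    """Per-frame class label for one tracked blob (svm: a trained sklearn classifier)."""
    x, y, w, h = box
    patch = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    patch = cv2.resize(patch, (64, 128))                      # match the default HOG window
    feat = hog.compute(patch).reshape(1, -1)
    return CLASSES[int(svm.predict(feat)[0])]

def majority_vote(per_frame_labels):
    """Final category for a track: the label that wins over consecutive frames."""
    return Counter(per_frame_labels).most_common(1)[0][0]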


Book | 2006

Intelligent distributed video surveillance systems

Sergio A. Velastin; Paolo Remagnino

* Chapter 1: A review of the state-of-the-art in distributed surveillance systems
* Chapter 2: Monitoring practice: event detection and system design
* Chapter 3: A distributed database for effective management and evaluation of CCTV systems
* Chapter 4: A distributed domotic surveillance system
* Chapter 5: A general-purpose system for distributed surveillance and communication
* Chapter 6: Tracking objects across uncalibrated, arbitrary topology camera networks
* Chapter 7: A distributed multi-sensor surveillance system for public transport applications
* Chapter 8: Tracking football players with multiple cameras
* Chapter 9: A hierarchical multi-sensor framework for event detection in wide environments


Proceedings of the 1st ACM Workshop on Vision Networks for Behavior Analysis | 2008

ViHASi: virtual human action silhouette data for the performance evaluation of silhouette-based action recognition methods

Hossein Ragheb; Sergio A. Velastin; Paolo Remagnino; Tim Ellis

In this paper we introduce a large body of virtual human action silhouette (ViHASi) data that we have recently generated for the purpose of evaluating a family of action recognition methods, namely the silhouette-based human action recognition methods. This synthetic multi-camera video dataset consists of 20 action classes, 9 actors and up to 40 synchronized perspective cameras. The dataset has recently been made available online for other researchers to download. In order to demonstrate the usefulness of the ViHASi data, we make use of an existing action recognition method that is simple and relatively fast. Moreover, to deal with long video sequences containing several action samples, a practical temporal segmentation algorithm is introduced and tested that is tightly coupled with the action recognition method used. The experimental methodologies outlined here provide a route towards quantitatively comparing silhouette-based action recognition methods.
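The temporal segmentation step splits long multi-action sequences into candidate clips. The sketch below shows one simple way to do this from silhouette data using per-frame motion energy and a threshold; it is an illustration, not the segmentation algorithm introduced in the paper.

import numpy as np

def segment_by_motion(silhouettes, thresh=0.01, min_len=10):
    """Split a long silhouette sequence into clips where motion is present.
    Motion energy = fraction of pixels that changed between consecutive frames."""
    energy = [np.mean(silhouettes[i] != silhouettes[i - 1])
              for i in range(1, len(silhouettes))]
    active = np.array(energy) > thresh
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_len:
                segments.append((start, i))       # [start, end) frame indices
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments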


British Machine Vision Conference | 2009

3D extended histogram of oriented gradients (3DHOG) for classification of road users in urban scenes

Norbert Erich Buch; James Orwell; Sergio A. Velastin

This paper proposes and demonstrates a novel method for the detection and classification of individual vehicles and pedestrians in urban scenes. In this scenario, shadows, lights and various occlusions compromise the accuracy of foreground segmentation and hence there are challenges with conventional silhouette-based methods. 2D features derived from histograms of oriented gradients (HOG) have been shown to be effective for detecting pedestrians and other objects. However, the appearance of vehicles varies substantially with the viewing angle and local features may be often occluded. In this paper, a novel method is proposed that overcomes limitations in the use of 2D HOG. Full 3D models are used for the object categories to be detected and the feature patches are defined over these models. A calibrated camera allows an affine transform of the observation into a normalised representation from which ‘3DHOG’ features are defined. A variable set of interest points is used in the detection and classification processes, depending on which points in the 3D model are visible. Experiments on real CCTV data of urban scenes demonstrate the proposed method. The 3DHOG feature is compared with features based on FFT and simple histograms. A baseline method using overlap between wire-frame models and motion silhouettes is also included. The results demonstrate that the proposed method achieves comparable performance. In particular, an advantage of the proposed method is that it is more robust than motion silhouettes which are often compromised in real data by variable lighting, camera quality and occlusions from other objects.
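A rough sketch of the 3DHOG idea as described: project the corners of a 3D model patch with a calibrated camera, warp the observed region to a normalised square, and compute a HOG descriptor over it. The descriptor geometry, patch size and helper names are assumptions, not the authors' implementation.

import cv2
import numpy as np

# HOG over a 64x64 normalised patch: 16x16 blocks, 8x8 stride and cells, 9 bins.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def project(P, points_3d):
    """Project 3D model points (N, 3) with a 3x4 camera matrix into image coordinates."""
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    uvw = (P @ homog.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def patch_3dhog(image, P, patch_corners_3d):
    """Warp the image region under one 3D model patch to a normalised square and
    compute its HOG descriptor (illustrative version of the '3DHOG' feature).
    image: 8-bit grayscale frame from the calibrated camera."""
    src = project(P, patch_corners_3d[:3]).astype(np.float32)   # 3 corners define the affine map
    dst = np.float32([[0, 0], [63, 0], [63, 63]])
    M = cv2.getAffineTransform(src, dst)
    normalised = cv2.warpAffine(image, M, (64, 64))
    return hog.compute(normalised).ravel()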

Collaboration


Dive into Sergio A. Velastin's collaboration.

Top Co-Authors

Muhammad Haroon Yousaf

University of Engineering and Technology
