Luis Patino
University of Reading
Publications
Featured research published by Luis Patino.
Advanced Video and Signal Based Surveillance | 2010
Luis Patino; Francois Bremond; Murray Evans; Ali Shahrokni; James M. Ferryman
The present work presents a new method for activity extraction and reporting from video based on the aggregation of fuzzy relations. Trajectory clustering is first employed mainly to discover the points of entry and exit of mobiles appearing in the scene. In a second step, proximity relations between the resulting clusters of detected mobiles and contextual elements of the scene are modeled employing fuzzy relations. These can then be aggregated employing typical soft-computing algebra. A clustering algorithm based on the transitive closure calculation of the fuzzy relations allows building the structure of the scene and characterises the different ongoing activities in it. Discovered activity zones can be reported as activity maps with different granularities thanks to the analysis of the transitive closure matrix. Taking advantage of the soft relation properties, activity zones and related activities can be labeled in a more human-like language. We present results obtained on real videos corresponding to apron monitoring at the Toulouse airport in France.
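The aggregation-and-clustering step described above can be sketched generically. The snippet below is an illustrative max-min fuzzy transitive closure followed by an alpha-cut partition, assuming a reflexive, symmetric similarity relation; it is not the authors' implementation, and the threshold `alpha` is a free parameter:

```python
import numpy as np

def transitive_closure(R, max_iter=100):
    """Max-min transitive closure of a fuzzy similarity relation R (n x n)."""
    R = np.asarray(R, dtype=float)
    for _ in range(max_iter):
        # max-min composition: (R o R)[i, j] = max_k min(R[i, k], R[k, j])
        comp = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        R_new = np.maximum(R, comp)
        if np.allclose(R_new, R):
            break
        R = R_new
    return R

def alpha_cut_clusters(R_closed, alpha):
    """Partition elements via the alpha-cut of a transitively closed relation.

    For a reflexive, symmetric relation, the closure is a similarity relation,
    so each alpha-cut is an equivalence relation and a single scan suffices.
    """
    n = R_closed.shape[0]
    labels = [-1] * n
    current = 0
    for i in range(n):
        if labels[i] == -1:
            for j in range(n):
                if R_closed[i, j] >= alpha:
                    labels[j] = current
            current += 1
    return labels
```

Lower values of `alpha` merge more elements, which is one simple way to obtain activity maps at coarser granularities from the same closure matrix.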
Image and Vision Computing | 2014
Gerard Sanroma; Luis Patino; Gertjan J. Burghouts; Klamer Schutte; James M. Ferryman
We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition, which divides recognition into two stages, our method performs recognition of simple and complex actions in a unified way. This is done by encoding simple-action HMMs within the stochastic grammar that models complex actions. This unified approach enables the higher activity layers to influence the recognition of simple actions more effectively, which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks along a set of zones which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious and threat behaviour on a parking lot. Experiments show an improvement of ~30% in the recognition of both high-level scenarios and their composing simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method experiences only a limited decrease in performance when moderate amounts of noise are added.
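As a rough illustration of recognising a complex action directly over zone crossings (a strong simplification, not the paper's HMM-in-grammar formulation), a complex action can be modelled as an ordered sequence of simple-action emission models and scored by dynamic programming over segmentations; the action names and probabilities below are invented:

```python
import math

def score_complex_action(obs, simple_models):
    """Best log-probability of segmenting `obs` into one contiguous,
    non-empty chunk per simple action, in order.

    simple_models: list of dicts mapping observed symbol -> emission probability.
    Returns -inf if no valid segmentation exists.
    """
    n, k = len(obs), len(simple_models)
    NEG = float("-inf")
    # best[a][t]: best log-prob of explaining obs[:t] with the first a models
    best = [[NEG] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for a in range(1, k + 1):
        model = simple_models[a - 1]
        for t in range(a, n + 1):
            for s in range(a - 1, t):   # chunk obs[s:t] emitted by model a-1
                if best[a - 1][s] == NEG:
                    continue
                lp = best[a - 1][s]
                for sym in obs[s:t]:
                    p = model.get(sym, 0.0)
                    if p == 0.0:
                        lp = NEG
                        break
                    lp += math.log(p)
                if lp > best[a][t]:
                    best[a][t] = lp
    return best[k][n]

def recognise(obs, grammar):
    """Score `obs` against every complex action in `grammar`
    (name -> ordered list of simple-action models)."""
    scores = {name: score_complex_action(obs, models)
              for name, models in grammar.items()}
    return max(scores, key=scores.get), scores
```

Because segmentation and simple-action scoring happen in one pass, a poor local match can be overridden by the globally best parse, which is the intuition behind the paper's unified approach.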
Advanced Video and Signal Based Surveillance | 2013
Maria Andersson; Luis Patino; Gertjan J. Burghouts; Adam Flizikowski; Murray Evans; David Gustafsson; Henrik Petersson; Klamer Schutte; James M. Ferryman
In this paper we present a set of activity recognition and localization algorithms that together assemble a large amount of information about activities on a parking lot. The aim is to detect and recognize events that may pose a threat to truck drivers and trucks. The algorithms perform zone-based activity learning, individual action recognition and group detection. Visual sensor data from one camera have been recorded for 23 realistic scenarios of different complexities. The scene is complex, which causes uncertain and false position estimates. We also present a situational assessment ontology which serves the algorithms with relevant knowledge about the observed scene (e.g. information about objects, vulnerabilities and historical data). The algorithms are tested with real tracking data, and the evaluations show promising results. The accuracies are 90% for zone-based activity learning, 71% for individual action recognition and 66% for group detection (i.e. merging of people).
Advanced Video and Signal Based Surveillance | 2012
Luis Patino; Francois Bremond; Monique Thonnat
The present work introduces a new method for activity extraction from video. To achieve this, we focus on modelling context by developing an algorithm that automatically learns the main activity zones of the observed scene, taking as input the trajectories of detected mobiles. Automatically learning the context of the scene (the activity zones) first allows knowledge to be extracted on the occupancy of the different areas of the scene. In a second step, the learned zones are employed to extract people's activities by relating mobile trajectories to the zones; in this way, the activity of a person can be summarised as the series of zones that the person has visited. For the trajectory analysis, a multiresolution scheme is set up such that a trajectory is segmented into a series of tracklets at speed-change points, thus differentiating when people stop to interact with elements of the scene or with other persons. Tracklets thereby allow behavioural information to be extracted. Starting and ending tracklet points are fed to a simple yet effective incremental clustering algorithm to create an initial partition of the scene. Similarity relations between the resulting clusters are modeled employing fuzzy relations, which can then be aggregated with typical soft-computing algebra. A clustering algorithm based on the transitive closure calculation of the fuzzy relations allows building the final structure of the scene. To allow incremental learning and updating of activity zones (and thus of people's activities), the fuzzy relations are defined with online learning terms. We present results obtained on real videos from different activity domains.
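The speed-based tracklet segmentation can be sketched as follows; the threshold value and the convention of sharing boundary samples are illustrative assumptions, not taken from the paper:

```python
import math

def split_into_tracklets(points, times, speed_thresh=0.5):
    """Segment a trajectory into tracklets wherever the mobile switches
    between 'moving' and 'stopped' (per-step speed vs. speed_thresh).

    points: list of (x, y) positions; times: matching timestamps.
    Returns a list of tracklets; boundary samples are shared so that
    consecutive tracklets stay connected.
    """
    if len(points) < 2:
        return [list(points)]
    # per-step speed between consecutive samples
    v = []
    for i in range(1, len(points)):
        dt = times[i] - times[i - 1]
        d = math.hypot(points[i][0] - points[i - 1][0],
                       points[i][1] - points[i - 1][1])
        v.append(d / dt if dt > 0 else 0.0)
    tracklets = [[points[0]]]
    moving = v[0] >= speed_thresh
    for i in range(1, len(points)):
        step_moving = v[i - 1] >= speed_thresh
        if step_moving != moving:              # speed regime changed
            moving = step_moving
            tracklets.append([tracklets[-1][-1]])  # share the boundary sample
        tracklets[-1].append(points[i])
    return tracklets
```

The first and last point of each tracklet are exactly the kind of start/end points the abstract describes feeding into the incremental clustering stage.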
Advanced Video and Signal Based Surveillance | 2015
Luis Patino; James M. Ferryman; Csaba Beleznai
In this paper we analyse the behaviour of people standing in a queue with the aim of discovering, in an unsupervised way, ongoing unusual or suspicious activities. The main activity types we focus on are people loitering around the queue and people going against the flow of the queue or taking a suspicious path. The proposed approach first detects and tracks moving individuals from a stereo depth map in real time. Activity zones (including queue zones) are then learnt automatically, employing a soft computing-based algorithm which takes as input the trajectories of the detected mobile objects. Statistical properties of zone occupancy and of transitions between zones make it possible to discover abnormalities without needing to learn abnormal models beforehand. The approach has been tested on a dataset realistically representing a border crossing and its environment. The current results suggest that the proposed approach constitutes a robust knowledge discovery tool able to extract queue abnormalities.
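The idea of flagging abnormality from zone-transition statistics, without any model of abnormal behaviour, can be illustrated with a simple first-order transition model; the zone names and the `min_prob` threshold are invented for the example:

```python
from collections import Counter

def transition_probabilities(zone_sequences):
    """Estimate first-order transition probabilities P(next zone | current zone)
    from a set of observed zone-visit sequences."""
    counts, totals = Counter(), Counter()
    for seq in zone_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {pair: c / totals[pair[0]] for pair, c in counts.items()}

def flag_abnormal(seq, probs, min_prob=0.05):
    """Return the transitions in `seq` whose estimated probability falls
    below min_prob; transitions never seen before count as probability 0."""
    return [(a, b) for a, b in zip(seq, seq[1:])
            if probs.get((a, b), 0.0) < min_prob]
```

Going against the queue flow then surfaces naturally as a transition (e.g. desk back to queue) that was rarely or never observed in the learning phase.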
Archive | 2011
Luis Patino; Monique Thonnat
Scene understanding corresponds to the real-time process of perceiving, analysing and elaborating an interpretation of a 3D dynamic scene observed through a network of cameras. The challenge consists in managing this huge amount of information and in structuring all the knowledge. Online clustering is an efficient way to process such large amounts of data, and online processing is an important capability required to perform monitoring and behaviour analysis on a long-term basis. In this paper we show how a simple clustering algorithm can be tuned to operate online. The system works by finding the main trajectory patterns of people in the video. We present results obtained on real videos corresponding to the monitoring of the Toulouse airport in France.
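A minimal sketch of one way a simple clustering algorithm can be made online is the classic "leader" scheme, where each new item either joins the nearest cluster or seeds a new one, so no past data need be stored; the radius parameter and running-mean update are illustrative choices, not the paper's exact algorithm:

```python
import math

class OnlineLeaderClustering:
    """Single-pass 'leader' clustering: an incoming point joins the nearest
    existing cluster if it lies within `radius`, otherwise it starts a new
    cluster. Centres are updated incrementally as running means."""

    def __init__(self, radius):
        self.radius = radius
        self.centres = []   # running mean of each cluster
        self.counts = []

    def add(self, point):
        """Assign `point` to a cluster and return the cluster index."""
        best, best_d = None, float("inf")
        for i, c in enumerate(self.centres):
            d = math.dist(point, c)
            if d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= self.radius:
            n = self.counts[best] + 1
            self.centres[best] = tuple((c * (n - 1) + p) / n
                                       for c, p in zip(self.centres[best], point))
            self.counts[best] = n
            return best
        self.centres.append(tuple(point))
        self.counts.append(1)
        return len(self.centres) - 1
```

For trajectory patterns, the points would typically be fixed-length feature vectors derived from each trajectory rather than raw positions.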
International Symposium on Neural Networks | 2010
Luis Patino; Francois Bremond; Monique Thonnat
This work presents a novel approach for activity extraction and knowledge discovery from video. Spatial and temporal properties of detected mobile objects are modeled employing fuzzy relations. These can then be aggregated employing typical soft-computing algebra. A clustering algorithm based on the transitive closure calculation of the fuzzy relations allows finding spatio-temporal patterns of activity. We employ trajectory-based analysis of the mobiles in the video to discover their points of entry and exit in the scene and ultimately deduce the different areas of activity. These areas can be reported as activity maps with different granularities thanks to the analysis of the transitive closure matrix of the mobiles' fuzzy spatial relations. Discovered activity zones and spatio-temporal patterns of activity can be labeled in a human-like language. We present results obtained on real videos corresponding to apron monitoring at the Toulouse airport in France.
Computer Vision and Pattern Recognition | 2016
Luis Patino; James M. Ferryman
Threat detection in computer vision can be achieved by extracting behavioural cues. To recognise such cues, we propose to work with semantic models of behaviours. A semantic model corresponds to the translation of low-level information (tracking information) into a high-level semantic description; the model is then similar to a naturally spoken description of the event. We have built semantic models for the behaviours and threats addressed in the PETS 2016 IPATCH dataset. Semantic models can trigger a threat alarm by themselves or provide situation awareness. We describe in this paper how semantic models are built from low-level trajectory features and how they are recognised. The current results are promising.
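A toy version of translating low-level trajectory features into a spoken-style description might look like the following; the features, thresholds and phrases are invented for illustration and do not reproduce the paper's semantic models:

```python
import math

def semantic_description(track, slow=0.3, fast=2.0):
    """Map low-level features of a track (list of (t, x, y)) to a short
    natural-language phrase. Thresholds are illustrative, not from the paper."""
    steps = list(zip(track, track[1:]))
    if not steps:
        return "object appears briefly"
    speeds, headings = [], []
    for (t0, x0, y0), (t1, x1, y1) in steps:
        dt = t1 - t0
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt if dt > 0 else 0.0)
        headings.append(math.atan2(y1 - y0, x1 - x0))
    mean_v = sum(speeds) / len(speeds)
    if mean_v < slow:
        motion = "is stationary"
    elif mean_v < fast:
        motion = "walks"
    else:
        motion = "moves fast"
    # accumulated absolute heading change separates straight from erratic paths
    turn = sum(abs(math.remainder(b - a, math.tau))
               for a, b in zip(headings, headings[1:]))
    path = "on a straight path" if turn < math.pi / 2 else "changing direction often"
    return f"the person {motion} {path}"
```

The resulting phrases can then be matched against behaviour templates, which is roughly the spirit of recognising a semantic model.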
Advanced Video and Signal Based Surveillance | 2014
Luis Patino; James M. Ferryman
In this paper we propose an innovative approach for behaviour recognition in a multicamera environment, based on translating video activity into semantics. First, we fuse tracks from the individual cameras through clustering, employing soft computing techniques. Then, we introduce a higher-level module able to translate the fused tracks into semantic information. With our proposed approach, we address the challenge set in PETS 2014 [1] on recognising behaviours of interest around a parked vehicle, namely the abnormal behaviour of someone walking around the vehicle.
Applied Soft Computing | 2014
Luis Patino; James M. Ferryman
Highlights:
- We present a soft computing-based approach for automatic activity extraction from video.
- The proposed approach learns the activity model in an unsupervised way.
- Activities are characterised and analysed at different resolutions.
- Semantic information is delivered according to the resolution at which the activity is observed.
- Abnormalities are detected based on an analysis of statistics of the observed activities at different resolutions.
- The approach is generic and works both indoors and outdoors.

This paper addresses the issue of activity understanding from video and its semantics-rich description. A novel approach is presented where activities are characterised and analysed at different resolutions. Semantic information is delivered according to the resolution at which the activity is observed. Furthermore, the multiresolution activity characterisation is exploited to detect abnormal activity. To achieve these system capabilities, the focus is placed on context modelling, employing a soft computing-based algorithm which automatically determines the main activity zones of the observed scene by taking as input the trajectories of detected mobiles. These areas are learnt at different resolutions (or granularities). In a second stage, the learned zones are employed to extract people's activities by relating mobile trajectories to the zones. In this way, the activity of a person can be summarised as the series of zones that the person has visited. Employing the inherent soft relation properties, the reported activities can be labelled with meaningful semantics. Depending on the granularity at which activity zones and mobile trajectories are considered, the semantic meaning of the activity shifts from a broad interpretation to a detailed description. Activity information at different resolutions is also employed to perform abnormal activity detection.
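As a toy illustration of resolution-dependent reporting (not the paper's method), a fine-to-coarse zone hierarchy changes the granularity of a reported visit sequence; the zone names and parent map below are invented:

```python
# hypothetical mapping from fine-grained zones to their coarse parent zone
PARENT = {"checkin_desk_1": "checkin_area", "checkin_desk_2": "checkin_area",
          "gate_a": "gates", "gate_b": "gates"}

def summarise(visits, resolution="fine"):
    """Summarise a zone-visit sequence, merging consecutive repeats.
    At coarse resolution, fine zones are first mapped to their parent."""
    if resolution == "coarse":
        visits = [PARENT.get(z, z) for z in visits]
    out = []
    for z in visits:
        if not out or out[-1] != z:
            out.append(z)
    return out
```

The same trajectory thus reads as a detailed desk-by-desk itinerary at fine resolution and as a broad "check-in then gates" interpretation at coarse resolution.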