
Publication


Featured research published by James M. Ferryman.


Pattern Recognition Letters | 2013

A survey of human motion analysis using depth imagery

Lulu Chen; Hong Wei; James M. Ferryman

Analysis of human behaviour through visual information has been a highly active research topic in the computer vision community. This was previously achieved via images from a conventional camera, however recently depth sensors have made a new type of data available. This survey starts by explaining the advantages of depth imagery, then describes the new sensors that are available to obtain it. In particular, the Microsoft Kinect has made high-resolution real-time depth cheaply available. The main published research on the use of depth imagery for analysing human activity is reviewed. Much of the existing work focuses on body part detection and pose estimation. A growing research area addresses the recognition of human actions. The publicly available datasets that include depth imagery are listed, as are the software libraries that can acquire it from a sensor. This survey concludes by summarising the current state of work on this topic, and pointing out promising future research directions. For both researchers and practitioners who are familiar with this topic and those who are new to this field, the review will aid in the selection, and development, of algorithms using depth data.


2009 Twelfth IEEE International Workshop on Performance Evaluation of Tracking and Surveillance | 2009

PETS2009: Dataset and challenge

James M. Ferryman; Ali Shahrokni

This paper describes the crowd image analysis challenge that forms part of the PETS 2009 workshop. The aim of this challenge is to use new or existing systems for i) crowd count and density estimation, ii) tracking of individual(s) within a crowd, and iii) detection of separate flows and specific crowd events, in a real-world environment. The dataset scenarios were filmed from multiple cameras and involve multiple actors.


International Journal of Computer Vision | 1998

Visual surveillance for moving vehicles

James M. Ferryman; Stephen J. Maybank; Anthony D. Worrall

An overview is given of a vision system for locating, recognising and tracking multiple vehicles, using an image sequence taken by a single camera mounted on a moving vehicle. The camera motion is estimated by matching features on the ground plane from one image to the next. Vehicle detection and hypothesis generation are performed using template correlation and a 3D wire frame model of the vehicle is fitted to the image. Once detected and identified, vehicles are tracked using dynamic filtering. A separate batch mode filter obtains the 3D trajectories of nearby vehicles over an extended time. Results are shown for a motorway image sequence.
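The "dynamic filtering" used to track detected vehicles is, in trackers of this era, typically a Kalman filter. As a hedged illustration (not the paper's exact filter, and with made-up noise settings), a minimal constant-velocity Kalman filter over noisy 1-D vehicle positions might look like:

```python
import numpy as np

# Sketch of a constant-velocity Kalman filter; state is [position, velocity].
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # we observe position only
Q = np.eye(2) * 1e-3                    # process noise (assumed)
R = np.array([[0.25]])                  # measurement noise (assumed)

x = np.zeros((2, 1))                    # initial state estimate
P = np.eye(2)                           # initial state covariance

for z in [1.0, 2.1, 2.9, 4.2]:          # noisy positions of a moving vehicle
    # Predict step: propagate state and covariance forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step: correct the prediction with the new measurement.
    y = np.array([[z]]) - H @ x         # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
```

After a few increasing measurements the filter's velocity estimate settles to a positive value, which is what lets the tracker predict where the vehicle will appear in the next frame.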


British Machine Vision Conference | 1995

A generic deformable model for vehicle recognition

James M. Ferryman; Anthony D. Worrall; Geoffrey D. Sullivan; Keith D. Baker

This paper reports the development of a highly parameterised 3-D model able to adopt the shapes of a wide variety of different classes of vehicles (cars, vans, buses, etc.), and its subsequent specialisation to a generic car class which accounts for most commonly encountered types of car (including saloon, hatchback and estate cars). An interactive tool has been developed to obtain sample data for vehicles from video images. A PCA description of the manually sampled data provides a deformable model in which a single instance is described as a 6-parameter vector. Both the pose and the structure of a car can be recovered by fitting the PCA model to an image. The recovered description is sufficiently accurate to discriminate between vehicle sub-classes.
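The PCA deformable model described above can be sketched in a few lines. This is a hypothetical illustration with random stand-in data (the paper uses manually sampled vehicle vertices from video images); the 6-mode truncation follows the abstract's 6-parameter vector:

```python
import numpy as np

# Stand-in training data: each row is a flattened vector of 3-D vertex
# coordinates for one sampled vehicle (40 vehicles, 30 vertices x 3 coords).
rng = np.random.default_rng(0)
samples = rng.normal(size=(40, 90))

mean = samples.mean(axis=0)
centered = samples - mean
# SVD of the centered data yields the principal components (deformation modes).
_, s, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:6]                     # keep 6 modes, per the 6-parameter vector

# Any car instance is then described by a 6-parameter vector b:
b = np.zeros(6)
shape = mean + b @ components           # b = 0 reconstructs the mean shape

# Projecting a new sample into the model and back gives its low-dimensional
# approximation, the form that fitting to an image would recover.
b_new = (samples[0] - mean) @ components.T
approx = mean + b_new @ components
```

In the actual system the parameter vector (plus pose) is recovered by fitting the model's projection to image evidence rather than by direct projection of known vertices.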


Machine Vision and Applications | 2007

Video understanding for complex activity recognition

Florent Fusier; Valéry Valentin; Francois Bremond; Monique Thonnat; Mark Borg; David Thirde; James M. Ferryman

This paper presents a real-time video understanding system which automatically recognises activities occurring in environments observed through video surveillance cameras. Our approach consists of three main stages: Scene Tracking, Coherence Maintenance, and Scene Understanding. The main challenges are to provide a tracking process robust enough to recognise events outdoors under real application conditions, to allow the monitoring of a large scene through a camera network, and to automatically recognise complex events involving several actors interacting with each other. This approach has been validated for Airport Activity Monitoring in the framework of the European project AVITRACK.


International Conference on Computer Communications and Networks | 2005

PETS Metrics: On-Line Performance Evaluation Service

David Paul Young; James M. Ferryman

This paper presents the PETS Metrics On-line Evaluation Service for computational visual surveillance algorithms. The service allows researchers to submit their algorithm results for evaluation against a set of applicable metrics. The results of the evaluation processes are publicly displayed allowing researchers to instantly view how their algorithm performs against previously submitted algorithms. The approach has been validated using seven motion segmentation algorithms.


Advanced Video and Signal Based Surveillance | 2010

PETS2010 and PETS2009 Evaluation of Results Using Individual Ground Truthed Single Views

Anna Ellis; James M. Ferryman

This paper presents the results of the crowd image analysis challenge of the PETS2010 workshop. The evaluation was carried out using a selection of the metrics developed in the Video Analysis and Content Extraction (VACE) program and the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The PETS 2010 evaluation was performed using new ground truth created from each independent two-dimensional view. In addition, the submissions to PETS 2009 and Winter-PETS 2009 were evaluated and included in the results. The evaluation highlights the detection and tracking performance of the authors' systems in areas such as precision, accuracy and robustness.


International Conference on Computer Communications and Networks | 2005

Visual Surveillance for Aircraft Activity Monitoring

David Thirde; Mark Borg; Valéry Valentin; Florent Fusier; Josep Aguilera; James M. Ferryman; Francois Bremond; M. Thonnat; Martin Kampel

This paper presents a visual surveillance system for the automatic scene interpretation of airport aprons. The system comprises two modules - scene tracking and scene understanding. The scene tracking module, based on a bottom-up methodology, and the scene understanding module, based on a video event representation and recognition scheme, have been demonstrated to be a valid approach for apron monitoring.


IEEE Signal Processing Magazine | 2013

Video surveillance: past, present, and now the future [DSP Forum]

Fatih Porikli; Francois Bremond; Shiloh L. Dockstader; James M. Ferryman; Anthony Hoogs; Brian C Lovell; Sharath Pankanti; Bernhard Rinner; Peter Henry Tu; Péter L. Venetianer

Video surveillance is a part of our daily life, even though we may not necessarily realize it. We might be monitored on the street, on highways, at ATMs, in public transportation vehicles, inside private and public buildings, in elevators, in front of our television screens, next to our babies' cribs, and at any spot where one can set a camera.


International Conference on Computer Communications and Networks | 2005

Evaluation of Motion Segmentation Quality for Aircraft Activity Surveillance

Josep Aguilera; Horst Wildenauer; Martin Kampel; Mark Borg; David Thirde; James M. Ferryman

Recent interest has been shown in the performance evaluation of visual surveillance systems. The main purpose of performance evaluation of computer vision systems is statistical testing and tuning in order to improve algorithm reliability and robustness. In this paper we investigate the use of empirical discrepancy metrics for quantitative analysis of motion segmentation algorithms. We are concerned with the case of visual surveillance on an airport's apron, that is, the area where aircraft are parked and serviced by specialized ground support vehicles. Robust detection of individuals and vehicles is of major concern for the purpose of tracking objects and understanding the scene. In this paper, different discrepancy metrics for motion segmentation evaluation are illustrated and used to assess the performance of three motion segmentation algorithms on video sequences of an airport's apron.
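As a minimal sketch of what an empirical discrepancy metric looks like (assuming simple pixel-level precision/recall measures, not necessarily the exact metrics used in the paper), a segmentation mask can be scored against ground truth like this:

```python
import numpy as np

def discrepancy_metrics(pred, gt):
    """Pixel-level precision, recall (detection rate) and F-measure
    between a predicted foreground mask and a ground-truth mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # correctly detected foreground
    fp = np.logical_and(pred, ~gt).sum()   # false alarms
    fn = np.logical_and(~pred, gt).sum()   # missed foreground
    recall = tp / max(tp + fn, 1)
    precision = tp / max(tp + fp, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1

# Toy masks: the segmentor finds all of the ground-truth region plus
# one extra column of false alarms.
gt = np.zeros((4, 4), dtype=bool);   gt[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True

precision, recall, f1 = discrepancy_metrics(pred, gt)
```

Aggregating such per-frame scores over the annotated apron sequences is what allows the three segmentation algorithms to be compared quantitatively.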

Collaboration


Dive into James M. Ferryman's collaboration.

Top Co-Authors

Mark Borg

University of Reading

Peter Wild

Austrian Institute of Technology

Martin Kampel

Vienna University of Technology

Lulu Chen

University of Reading