Lee Middleton
University of Southampton
Publications
Featured research published by Lee Middleton.
Fourth IEEE Workshop on Automatic Identification Advanced Technologies (AutoID'05) | 2005
Lee Middleton; Alex A. Buss; Alex I. Bazin; Mark S. Nixon
This paper describes the development of a prototype floor sensor as a gait recognition system. This could eventually be deployed as a standalone system (e.g. a burglar alarm) or as part of a multimodal biometric system. The new sensor consists of 1536 individual sensors arranged in a 3 m by 0.5 m rectangular strip, with an individual sensor area of 3 cm². The sensor floor operates at a sample rate of 22 Hz. The sensor itself uses a simple design inspired by computer keyboards and is made from low-cost, off-the-shelf materials. The sensor floor was applied to a small database of 15 individuals. Three features were extracted: stride length, stride cadence, and the ratio of time on toe to time on heel. Two of these measures have been used in video-based gait recognition, while the third is new to this analysis. These features proved sufficient to achieve an 80% recognition rate.
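A minimal sketch of extracting the three gait features named above. This is not the authors' code: the footstep-event representation (per-step time, position, and toe/heel contact durations) is a hypothetical illustration of what a floor sensor could report.

```python
def gait_features(steps):
    """Compute stride length (m), cadence (steps/min), and toe/heel time ratio
    from a list of footstep events (hypothetical sensor output format)."""
    positions = [s["position_m"] for s in steps]
    times = [s["time_s"] for s in steps]
    # Stride length: mean distance between successive footfalls.
    stride = sum(b - a for a, b in zip(positions, positions[1:])) / (len(steps) - 1)
    # Cadence: steps per minute over the observed interval.
    cadence = 60.0 * (len(steps) - 1) / (times[-1] - times[0])
    # Ratio of total time on toe to total time on heel.
    toe_heel = sum(s["toe_contact_s"] for s in steps) / sum(s["heel_contact_s"] for s in steps)
    return stride, cadence, toe_heel

steps = [
    {"time_s": 0.0, "position_m": 0.0, "toe_contact_s": 0.30, "heel_contact_s": 0.20},
    {"time_s": 0.5, "position_m": 0.7, "toe_contact_s": 0.28, "heel_contact_s": 0.22},
    {"time_s": 1.0, "position_m": 1.4, "toe_contact_s": 0.32, "heel_contact_s": 0.18},
]
print(tuple(round(x, 3) for x in gait_features(steps)))  # (0.7, 120.0, 1.5)
```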
IEEE Intelligent Systems | 2014
Stuart E. Middleton; Lee Middleton; Stefano Modafferi
The proposed social media crisis mapping platform for natural disasters uses locations from gazetteer, street map, and volunteered geographic information (VGI) sources for areas at risk of disaster and matches them to geoparsed real-time tweet data streams. The authors use statistical analysis to generate real-time crisis maps. Geoparsing results are benchmarked against existing published work and evaluated across multilingual datasets. Two case studies compare five-day tweet crisis maps to official post-event impact assessment from the US National Geospatial Agency (NGA), compiled from verified satellite and aerial imagery sources.
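A minimal sketch of the gazetteer-matching idea behind the crisis maps, not the platform itself: tweets are geoparsed by substring lookup against a small gazetteer of at-risk locations, and matches are counted per location. The gazetteer entries and tweets here are hypothetical.

```python
from collections import Counter

# Hypothetical gazetteer of at-risk locations (name -> lat/lon).
GAZETTEER = {"springfield": (39.80, -89.64), "riverton": (39.84, -89.54)}

def geoparse(tweet):
    """Return gazetteer locations mentioned in a tweet (case-insensitive)."""
    text = tweet.lower()
    return [name for name in GAZETTEER if name in text]

def crisis_counts(tweets):
    """Aggregate location mentions across a tweet stream into per-location counts."""
    counts = Counter()
    for t in tweets:
        counts.update(geoparse(t))
    return counts

tweets = ["Flooding near Springfield bridge", "Springfield and Riverton roads closed"]
print(dict(crisis_counts(tweets)))  # {'springfield': 2, 'riverton': 1}
```

A real geoparser would use tokenisation and disambiguation rather than raw substring search; the sketch only shows the match-and-aggregate structure.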
Image and Vision Computing | 2001
Lee Middleton; Jayanthi Sivaswamy
With the processing power of computers and the capabilities of graphics devices increasing rapidly, the time is ripe to reconsider hexagonal sampling for computer vision in earnest. This paper reports on an investigation of edge detection in the context of hexagonally sampled images. It presents a complete framework for processing hexagonally sampled images which addresses four key aspects: conversion of square-sampled to hexagonally sampled images, and the storage, processing, and display of these images. Results from edge detection within this framework show that (a) the computational requirement for processing a hexagonally sampled image is lower than that for square-sampled images, and (b) qualitative performance is better, owing to the compact and circular nature of the hexagonal lattice. This last point should be exploited in the development of edge detectors for hexagonally sampled images.
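A minimal sketch of the square-to-hexagonal conversion step, under assumptions rather than the paper's actual framework: alternate rows of the hexagonal lattice are offset by half a pixel, and each hexagonal sample takes the value of the nearest square pixel.

```python
def square_to_hex(image):
    """Resample a square-sampled image (2D list) onto a hexagonal lattice
    by nearest-neighbour lookup, with odd rows shifted half a pixel."""
    rows, cols = len(image), len(image[0])
    hex_rows = []
    for r in range(rows):
        offset = 0.5 if r % 2 else 0.0   # alternate-row shift of the hex lattice
        row = []
        for c in range(cols):
            # Nearest square column to the shifted hexagonal sample position.
            src = min(cols - 1, int(c + offset + 0.5))
            row.append(image[r][src])
        hex_rows.append(row)
    return hex_rows

img = [[1, 2, 3],
       [4, 5, 6]]
print(square_to_hex(img))  # [[1, 2, 3], [5, 6, 6]]
```

Practical conversions would interpolate rather than snap to the nearest pixel; the sketch only illustrates the lattice geometry.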
International Conference on Information Fusion | 2005
Galina V. Veres; Mark S. Nixon; Lee Middleton; John N. Carter
Gait recognition aims to identify people at a distance by the way they walk. This paper addresses the problem of recognition by gait when time-dependent covariates are present. Properties of gait can be categorised as static and dynamic features, which we derived from sequences of images of walking subjects. We show that recognition rates fall significantly when gait data is captured over a lengthy time interval. A new fusion algorithm is proposed wherein the static and dynamic features are fused to obtain optimal performance. The algorithm divides decision situations into three categories. The first case is when more than two thirds of the classifiers agree on the same class. The second case is when the votes are split evenly between two different classes. The rest falls into the third case. The proposed fusion rule was compared with the most popular fusion rules in biometrics and is shown to outperform the established techniques.
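The three-case decision split described above can be sketched as follows. This is an illustrative reading of the rule, assuming each classifier casts one class vote; it is not the authors' implementation, and what the algorithm does within cases 2 and 3 is not specified here.

```python
from collections import Counter

def fuse(votes):
    """Return (case, decision) for a list of per-classifier class votes.
    Case 1: more than two thirds of classifiers agree -> accept that class.
    Case 2: votes split evenly between exactly two classes.
    Case 3: everything else (no consensus)."""
    counts = Counter(votes)
    top_class, top_count = counts.most_common(1)[0]
    if top_count > 2 * len(votes) / 3:
        return (1, top_class)
    if len(counts) == 2 and len(set(counts.values())) == 1:
        return (2, None)  # two classes, half the classifiers each
    return (3, None)

print(fuse(["A", "A", "A", "B"]))  # (1, 'A'): 3/4 > 2/3
print(fuse(["A", "A", "B", "B"]))  # (2, None)
print(fuse(["A", "B", "C"]))       # (3, None)
```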
Asian Conference on Computer Vision | 2010
Galina V. Veres; Helmut Grabner; Lee Middleton; Luc Van Gool
Robust automatic workflow monitoring using visual sensors in industrial environments is still an unsolved problem. This is mainly due to the difficulties of recording data in work settings and to environmental conditions (large occlusions, similar background/foreground) which do not allow object detection and tracking algorithms to perform robustly. Hence approaches that analyse trajectories are limited in such environments. However, workflow monitoring is especially needed due to quality and safety requirements. In this paper we propose a robust approach for workflow classification in industrial environments, consisting of a robust scene descriptor and an efficient time series analysis method. Experimental results on a challenging car manufacturing dataset show that the proposed scene descriptor detects both human- and machinery-related motion robustly, and that the time series analysis method can classify tasks in a given workflow automatically.
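One common way to classify a workflow's time series of scene-descriptor values, sketched here as an assumption rather than the paper's actual method, is 1-nearest-neighbour matching under dynamic time warping against per-task reference sequences. The task labels and sequences below are hypothetical.

```python
def dtw(a, b):
    """Dynamic-time-warping distance between two scalar sequences."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[-1][-1]

def classify(query, templates):
    """templates: {task_label: reference sequence}; return the closest label."""
    return min(templates, key=lambda label: dtw(query, templates[label]))

# Hypothetical 1D scene-descriptor traces for two workflow tasks.
templates = {"welding": [0, 1, 3, 1, 0], "assembly": [2, 2, 2, 2]}
print(classify([0, 1, 2, 1, 0], templates))  # welding
```

DTW tolerates the timing variations between repetitions of the same task, which is why it is a common baseline for this kind of workflow classification.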
Intelligent Robots and Systems | 2006
Lee Middleton; David Kenneth Wagg; Alex I. Bazin; John N. Carter; Mark S. Nixon
The development of large-scale biometric systems requires experiments to be performed on large amounts of data. Existing capture systems are designed for fixed experiments and are not easily scalable; in this scenario, even the addition of extra data is difficult. We developed a prototype biometric tunnel for the capture of non-contact biometrics. It is self-contained and autonomous, a configuration ideal for building access or deployment in secure environments. The tunnel captures cropped images of the subject's face and performs a 3D reconstruction of the person's motion, which is used to extract gait information. Interaction between the various parts of the system is performed via an agent framework. The design of this system is a trade-off between parallel and serial processing due to various hardware bottlenecks. When tested on a small population, the extracted features were shown to be potent for recognition. We currently achieve a moderate throughput of approximately 15 subjects an hour and hope to improve this in the future as the prototype becomes more complete.
Conference on Automation Science and Engineering | 2006
Lee Middleton; David Kenneth Wagg; Alex I. Bazin; John N. Carter; Mark S. Nixon
Current biometric capture methodologies were born in a laboratory environment, with cooperative subjects, large capture time windows, and staff to edit and mark up data as necessary. As biometrics moves out of the laboratory, however, these factors impinge upon the scalability of the system. In this work we developed a prototype biometric tunnel for the capture of non-contact biometrics. The system is autonomous, to maximise subject throughput, and self-contained, to allow flexible deployment and user-friendliness. We currently deploy 8 cameras to capture 3D motion (specifically gait) and 1 camera to capture the face of a subject. The gait and face information thus extracted can be used for subsequent biometric analysis. Interaction between the various system components is performed via an agent framework. Performance analysis shows that the current system achieves a moderate throughput of 15 subjects per hour. Additionally, analysis performed upon the biometric features extracted from a small population shows them to be potent for recognition.
International Symposium on Visual Computing | 2015
Zoheir Sabeur; Nikolaos D. Doulamis; Lee Middleton; Banafshe Arbab-Zavar; Gianluca Correndo; Aggelos Amditis
Crowd physical motion and behaviour detection during evacuation from confined spaces using computer vision is the main focus of research in the eVACUATE project. Its early foundations and development perspectives are discussed in this paper. Specifically, the main target of our development is to achieve good rates of correct detection and classification of crowd motion and behaviour in confined spaces. However, the performance of the computer vision algorithms put in place for detecting crowd motion and behaviour depends greatly on the quality, including causality, of the multi-modal observation data with ground truth. Furthermore, it is of paramount importance to take into account contextual information about the confined spaces concerned in order to confirm the type of detected behaviours. The pilot venues for crowd evacuation experiments include: (1) Athens International Airport, Greece; (2) an underground train station in Bilbao, Spain; (3) a stadium in San Sebastian, Spain; and (4) a large cruise ship in St. Nazaire, France.
International Symposium on Environmental Software Systems | 2013
Antonios Bonatsos; Lee Middleton; Panos Melas; Zoheir Sabeur
This paper describes the major research and development activities achieved since the launch of the DESURBS project (www.desurbs.eu) in 2011. The project focuses on the development of a Decision-Support System Portal (DSSP) which integrates information, data, and software modules representing city assets, hazards, and processing models that simulate exposure to risks and potential compromises to safety and security. The DSSP will aid the design of safer and more resilient urban spaces. Specifically, it provides security-related scenarios with contextual information to support various types of users who specialise in urban spatial design and planning. The DSSP is a web-enabled system which is also adapted to mobile-device usage. It is supported with geographic maps and visualised aggregated data from a number of heterogeneous sources. A responsive web design which adapts to the resolution of smart mobile devices has also been achieved; low-powered mobiles can still provide map-oriented data in a responsive fashion across multiple platforms (currently Android and iOS). The first DSSP prototype employs the United Kingdom crime statistics feed for 2012 and analyses crime trends in 13 English cities (including Greater London) distributed across four major regions. The DSSP displays raw crime data via markers on a map, and aggregates them under specific crime-type threads visualised as "heat maps". These visualisations are aligned to administrative regions such as neighbourhoods, catchments, and postcodes. The DSSP also allows users to explore historical crime trends for a region over time, where crime statistics are contrasted. Its scalability was tested under increasingly large datasets and numbers of users, with load tests on the map server and the main Django user application.
A comparison of the speed of the mobile and desktop interfaces on a defined set of tasks will also be performed and presented in the near future.
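A minimal sketch of the aggregation step behind the "heat maps": raw crime records are counted per region and crime type. The record format, region names, and crime types below are hypothetical, not the DSSP's actual data model.

```python
from collections import defaultdict

def aggregate(records):
    """records: list of (region, crime_type) pairs; return nested counts
    suitable for colouring a per-region heat map."""
    heat = defaultdict(lambda: defaultdict(int))
    for region, crime_type in records:
        heat[region][crime_type] += 1
    # Convert to plain dicts for display.
    return {region: dict(counts) for region, counts in heat.items()}

records = [
    ("Camden", "burglary"),
    ("Camden", "burglary"),
    ("Camden", "vehicle"),
    ("Hackney", "burglary"),
]
print(aggregate(records))
# {'Camden': {'burglary': 2, 'vehicle': 1}, 'Hackney': {'burglary': 1}}
```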
International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2005
Michael O. Jewell; Lee Middleton; Mark S. Nixon; Adam Prügel-Bennett; Sylvia C. Wong
Current techniques for automated composition use a single algorithm, focusing on one aspect of musical generation. Our system makes use of several algorithms, distributed via an agent-oriented middleware, with each specialising in a separate aspect of composition. This paper describes the architecture and algorithms behind this system, with a focus on the agent framework used for implementation. We show early results which encourage future application of this framework to automated music composition and analysis.