Galina V. Veres
University of Southampton
Publication
Featured research published by Galina V. Veres.
computer vision and pattern recognition | 2004
Galina V. Veres; Layla Gordon; John N. Carter; Mark S. Nixon
Gait recognition has recently gained significant attention, especially for vision-based automated human identification at a distance in visual surveillance and monitoring applications. Silhouette-based gait recognition is one of the most popular methods for recognising moving shapes. This paper investigates the important features in silhouette-based gait recognition from the point of view of statistical analysis. It is shown that the average silhouette treats the static component of gait (head and body) as the most important part of the image, while the dynamic component of gait (the swing of the legs and arms) is discounted as the least important information. At the same time, ignoring the dynamic part of gait can result in a loss of recognition rate in some cases, which underlines the importance of better motion estimation.
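As a rough illustration of the average-silhouette representation discussed above, the sketch below averages a sequence of aligned binary silhouettes and matches probes by nearest neighbour. The array shapes, function names, and Euclidean matching are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def average_silhouette(silhouettes: np.ndarray) -> np.ndarray:
    """Average a gait sequence of aligned binary silhouettes.

    silhouettes: array of shape (n_frames, height, width) with values in {0, 1}.
    Returns a (height, width) float image; static body parts (head, torso)
    accumulate high values, while swinging limbs are blurred out.
    """
    return silhouettes.astype(np.float64).mean(axis=0)

def match_by_average_silhouette(probe: np.ndarray, gallery: dict) -> str:
    """Nearest-neighbour matching of average silhouettes (Euclidean distance)."""
    distances = {label: np.linalg.norm(probe - g) for label, g in gallery.items()}
    return min(distances, key=distances.get)
```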
international conference on information fusion | 2005
Galina V. Veres; Mark S. Nixon; Lee Middleton; John N. Carter
Gait recognition aims to identify people at a distance by the way they walk. This paper deals with the problem of recognition by gait when time-dependent covariates are added. Properties of gait can be categorized as static and dynamic features, which we derive from sequences of images of walking subjects. We show that recognition rates fall significantly when gait data is captured over a lengthy time interval. A new fusion algorithm is suggested in which the static and dynamic features are fused to obtain optimal performance. The algorithm divides decision situations into three cases. The first case is when more than two thirds of the classifiers assign the identity to the same class. The second case is when two different classes are each selected by half of the classifiers. Everything else falls into the third case. The suggested fusion rule was compared with the most popular fusion rules for biometrics, and it is shown that the new rule outperforms the established techniques.
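The three-case decision logic can be sketched as follows. The abstract does not specify how the second and third cases are resolved, so the score-based tie-breaking and fallback here are illustrative assumptions.

```python
from collections import Counter

def fuse_decisions(predicted_labels, scores=None):
    """Three-case fusion of individual classifier decisions.

    predicted_labels: list of class labels, one per classifier.
    scores: optional dict mapping label -> aggregated confidence, used as a
            fallback; the paper's exact rule for cases 2 and 3 is not given
            in the abstract, so this choice is illustrative.
    """
    n = len(predicted_labels)
    counts = Counter(predicted_labels)
    top_label, top_votes = counts.most_common(1)[0]

    # Case 1: more than two thirds of the classifiers agree on one class.
    if top_votes > 2 * n / 3:
        return top_label

    # Case 2: exactly two classes, each chosen by half of the classifiers.
    if len(counts) == 2 and all(v == n // 2 for v in counts.values()):
        if scores:
            return max(counts, key=lambda label: scores.get(label, 0.0))
        return top_label  # arbitrary tie-break when no scores are available

    # Case 3: everything else -- fall back to the highest aggregated score.
    if scores:
        return max(scores, key=scores.get)
    return top_label
```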
Lecture Notes in Computer Science | 2005
Galina V. Veres; Mark S. Nixon; John N. Carter
This paper deals with the problem of recognition by gait when time-dependent covariates are added, i.e. when 6 months have passed between recording of the gallery and the probe sets. We show how recognition rates fall significantly when data is captured across lengthy time intervals, for both static and dynamic gait features. Under the assumptions that some subjects from the probe are available for training and that similar subjects undergo similar changes in gait over time, a predictive model of changes in gait is suggested which can improve recognition capability. A small number of subjects were used for training and a much larger number for classification, with the probe containing the covariate data for a smaller number of subjects. Our new predictive model yields high recognition rates for different features, a considerable improvement on the recognition capability achieved without this approach.
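The form of the predictive model is not given in the abstract. A minimal sketch, assuming a simple linear change model learned by least squares from the few subjects recorded at both times, is shown below; the paper's actual model may differ, and all names here are illustrative.

```python
import numpy as np

def fit_time_change_model(gallery_train: np.ndarray, probe_train: np.ndarray) -> np.ndarray:
    """Fit a linear map W such that probe ~= gallery @ W (least squares).

    gallery_train, probe_train: (n_train_subjects, n_features) feature matrices
    for the same subjects recorded months apart. The linear form is an
    assumption made for illustration only.
    """
    W, *_ = np.linalg.lstsq(gallery_train, probe_train, rcond=None)
    return W

def predict_aged_gallery(gallery: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Apply the learned change model to the full gallery before matching."""
    return gallery @ W
```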
asian conference on computer vision | 2010
Galina V. Veres; Helmut Grabner; Lee Middleton; Luc Van Gool
Robust automatic workflow monitoring using visual sensors in industrial environments is still an unsolved problem. This is mainly due to the difficulties of recording data in work settings and the environmental conditions (large occlusions, similar background/foreground), which do not allow object detection/tracking algorithms to perform robustly. Hence approaches analysing trajectories are limited in such environments. However, workflow monitoring is especially needed due to quality and safety requirements. In this paper we propose a robust approach for workflow classification in industrial environments, consisting of a robust scene descriptor and an efficient time series analysis method. Experimental results on a challenging car manufacturing dataset showed that the proposed scene descriptor detects both human- and machinery-related motion robustly, and that the time series analysis method can classify tasks in a given workflow automatically.
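The abstract does not detail the scene descriptor. A hedged approximation of a holistic, tracking-free descriptor is a grid of per-cell motion energy from frame differencing, sketched below; the grid size, threshold, and function names are assumptions for illustration.

```python
import numpy as np

def motion_grid_descriptor(prev_frame: np.ndarray, frame: np.ndarray,
                           grid=(8, 8), threshold=20) -> np.ndarray:
    """Holistic scene descriptor: fraction of changed pixels per grid cell.

    prev_frame, frame: greyscale images as uint8 arrays of equal shape.
    Returns a flat vector of length grid[0] * grid[1]. This is an
    illustrative stand-in, not the descriptor used in the paper.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > threshold
    h, w = diff.shape
    gh, gw = grid
    cells = diff[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return cells.mean(axis=(1, 3)).ravel()
```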
Neural Networks | 2011
Athanasios Voulodimos; Dimitrios I. Kosmopoulos; Galina V. Veres; Helmut Grabner; Luc Van Gool; Theodora A. Varvarigou
Modelling and classification of time series stemming from visual workflows is a very challenging problem due to the inherent complexity of the activity patterns involved and the difficulty in tracking moving targets. In this paper, we propose a framework for classification of visual tasks in industrial environments. We propose a novel method to automatically segment the input stream and to classify the resulting segments using prior knowledge and hidden Markov models (HMMs), combined through a genetic algorithm. We compare this method to an echo state network (ESN) approach, which is appropriate for general-purpose time-series classification. In addition, we explore the applicability of several fusion schemes for multi-camera configurations in order to mitigate the problem of limited visibility and occlusions. The performance of the suggested approaches is evaluated on real-world visual behaviour scenarios.
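A minimal sketch of HMM-based segment classification, using the third-party hmmlearn library, is given below. The number of hidden states and the Gaussian emissions are assumptions, and the genetic-algorithm combination with prior knowledge described in the paper is omitted.

```python
import numpy as np
from hmmlearn import hmm  # third-party library; used here for illustration

def train_task_hmms(segments_by_task: dict, n_states: int = 4) -> dict:
    """Train one Gaussian HMM per workflow task.

    segments_by_task: {task_label: list of (T_i, n_features) observation arrays}.
    State count and emission model are illustrative choices.
    """
    models = {}
    for task, segments in segments_by_task.items():
        X = np.vstack(segments)
        lengths = [len(s) for s in segments]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[task] = model
    return models

def classify_segment(segment: np.ndarray, models: dict) -> str:
    """Assign the segment to the task whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda task: models[task].score(segment))
```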
international conference on intelligent sensors, sensor networks and information processing | 2005
Galina V. Veres; Mark S. Nixon; John N. Carter
Interest in automated biometrics continues to increase, but little consideration has been given to the effects of time, which are especially important in surveillance and scan control. This paper deals with the problem of recognition by gait when time-dependent covariates are added, i.e. when 6 or 12 months have passed between recording of the gallery and the probe sets, and in some cases extra covariates are present as well. We have shown previously how recognition rates fall significantly when data is captured across lengthy time intervals. Under the assumptions that some subjects from the probe are available for training and that similar subjects undergo similar changes in gait over time, we suggest predictive models of changes in gait due both to time and now to time-invariant covariates. Our extended time-dependent predictive model achieves high recognition rates when time-dependent or subject-dependent covariates are added. However, it is not able to cope with time-invariant covariates, so a new time-invariant predictive model is suggested to accommodate the extra covariates. These are combined into a predictive model that takes all types of covariates into consideration. A considerable improvement in recognition capability is demonstrated, showing that the changes can be modelled successfully by the new approach.
international conference on image analysis and recognition | 2006
Galina V. Veres; Mark S. Nixon; John N. Carter
Gait recognition has become a popular new biometric in the last decade. Good recognition results have been achieved using different gait techniques on several databases. However, not much attention has been paid to several major questions: how good are the biometric data; how many subjects are needed to cover the diversity of a population (hypothetical or actual) in gait; and how many samples per subject give a good representation of the similarities and differences in the gait of the same subject. In this paper we try to answer these questions from the point of view of statistical analysis, not only for gait recognition but for other biometrics as well. Though we do not claim to have the whole answer, we contend this is the start of one.
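One way to probe such questions empirically, sketched below, is to bootstrap the rank-1 recognition rate as a function of the number of enrolled subjects. This is an illustrative experiment design under simple nearest-neighbour matching, not necessarily the statistical analysis carried out in the paper.

```python
import numpy as np

def rank1_rate(gallery, probe, labels_g, labels_p):
    """Rank-1 recognition rate with nearest-neighbour matching."""
    hits = 0
    for x, y in zip(probe, labels_p):
        d = np.linalg.norm(gallery - x, axis=1)
        hits += labels_g[np.argmin(d)] == y
    return hits / len(probe)

def recognition_vs_population(features, labels, subject_counts, n_boot=100, rng=None):
    """Bootstrap rank-1 rate against the number of enrolled subjects.

    features: (n_samples, n_features); labels: subject id per sample.
    Illustrative procedure: alternate samples per subject form gallery/probe.
    """
    rng = rng or np.random.default_rng(0)
    subjects = np.unique(labels)
    results = {}
    for k in subject_counts:
        rates = []
        for _ in range(n_boot):
            chosen = rng.choice(subjects, size=k, replace=False)
            mask = np.isin(labels, chosen)
            X, y = features[mask], labels[mask]
            g_idx = [i for s in chosen for i in np.flatnonzero(y == s)[::2]]
            p_idx = [i for s in chosen for i in np.flatnonzero(y == s)[1::2]]
            rates.append(rank1_rate(X[g_idx], X[p_idx], y[g_idx], y[p_idx]))
        results[k] = float(np.mean(rates))
    return results
```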
international conference on information fusion | 2010
Zlatko Zlatev; Stuart E. Middleton; Galina V. Veres
We have benchmarked a novel knowledge-assisted kriging algorithm that allows regions of spatial cohesion to be specified and variograms calculated for each region. The variogram calculation itself is automated, and the spatial regions are created via offline automated segmentation of either expert-drawn Google Earth polygons or NASA altitude data. Our use case is to create interpolated wind maps as input to a bathing water quality model of microbial contamination. We benchmark our knowledge-assisted kriging algorithm against 7 other algorithms on UK Met Office wind data (189 sensors). Our wind estimation results are comparable to standard ordinary kriging using variograms created by an expert. When using spatial segmentation we find that our kriging error maps better reflect the known spatial features of the interpolated phenomenon. These results are very promising for an automated approach allowing on-demand dataset selection and real-time interpolation of previously unknown measurements. Automation is important in progressing towards a pan-European interpolation service capability.
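A minimal sketch of region-wise ordinary kriging, using the third-party pykrige package, is shown below. The knowledge-assisted segmentation and automated variogram fitting of the paper are replaced here by precomputed region labels and a fixed variogram model, which are assumptions for illustration.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # third-party library; used for illustration

def regional_kriging(x, y, z, region, xq, yq, region_q, variogram_model="spherical"):
    """Ordinary kriging fitted independently within each spatial region.

    x, y, z: 1-D arrays of sensor coordinates and wind measurements.
    region: region label per sensor (e.g. from an offline segmentation step).
    xq, yq, region_q: query point coordinates and their region labels.
    The per-region split mirrors the knowledge-assisted idea; the variogram
    model choice here is an assumption.
    """
    z_pred = np.full(len(xq), np.nan)
    for r in np.unique(region):
        sensors = region == r
        targets = region_q == r
        if not targets.any():
            continue
        ok = OrdinaryKriging(x[sensors], y[sensors], z[sensors],
                             variogram_model=variogram_model)
        values, _variance = ok.execute("points", xq[targets], yq[targets])
        z_pred[targets] = values
    return z_pred
```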
american control conference | 2003
Galina V. Veres; Owen R. Tutty; Eric Rogers; P.A. Nelson
Turbulent flow has a significantly higher drag than the corresponding laminar flow for the same flow conditions. The presence of turbulent flow over a large part of an aircraft therefore incurs a significant penalty of increased fuel consumption due to the extra thrust required. One possible way of decreasing the drag is to apply surface suction to delay the transition from laminar to turbulent flow. However, in order for the gain from the reduction in drag to outweigh the extra costs associated with the suction system, the suction must be distributed in an optimum, or near optimum, manner. This paper investigates methods for the design of multi-panel suction systems using optimisation methods based on direct search techniques. It is shown that for the problems considered, good solutions can be found efficiently.
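A hedged sketch of a direct-search optimisation of a multi-panel suction distribution is given below, using the Nelder-Mead simplex method from SciPy. The objective combining a drag model with a suction-cost penalty is a placeholder assumption; the paper's actual cost function and search technique may differ.

```python
import numpy as np
from scipy.optimize import minimize  # Nelder-Mead is a classic direct-search method

def suction_cost(panel_suction, drag_model, weight=0.1):
    """Placeholder objective: predicted drag plus a penalty on total suction.

    panel_suction: suction rate per panel (the design variables).
    drag_model: user-supplied callable estimating drag for a suction layout;
    both the model and the penalty weight are illustrative assumptions.
    """
    return drag_model(panel_suction) + weight * np.sum(panel_suction)

def optimise_suction(drag_model, n_panels, initial_rate=0.5):
    """Direct-search optimisation of a multi-panel suction distribution."""
    x0 = np.full(n_panels, initial_rate)
    result = minimize(lambda s: suction_cost(np.clip(s, 0.0, None), drag_model),
                      x0, method="Nelder-Mead")
    return np.clip(result.x, 0.0, None), result.fun
```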
acm multimedia | 2010
Roland Mörzinger; Manolis Sardis; Igor Rosenberg; Helmut Grabner; Galina V. Veres; Imed Bouchrika; Marcus Thaler; René Schuster; Albert Hofmann; Georg Thallinger; Vasileios Anagnostopoulos; Dimitrios I. Kosmopoulos; Athanasios Voulodimos; Constantinos Lalos; Nikolaos D. Doulamis; Theodora A. Varvarigou; Rolando Palma Zelada; Ignacio Jubert Soler; Severin Stalder; Luc Van Gool; Lee Middleton; Zoheir Sabeur; Banafshe Arbab-Zavar; John N. Carter; Mark S. Nixon
This paper describes a tool chain for monitoring complex workflows. Statistics obtained from automatic workflow monitoring in a car assembly environment assist in improving industrial safety and process quality. To this end, we propose automatic detection and tracking of humans and their activity in multiple networked cameras. The described tools offer human operators retrospective analysis of a huge amount of pre-recorded and analysed footage from multiple cameras in order to get a comprehensive overview of the workflows. Furthermore, the tools help technical administrators to adjust algorithms by letting the user correct detections (for relevance feedback) and provide ground truth for evaluation. Another important feature of the tool chain is the capability to inform employees about potentially risky conditions using the tool for automatic detection of unusual scenes.