Publication


Featured research published by Hannah Dee.


Machine Vision and Applications | 2008

How close are we to solving the problem of automated visual surveillance?: A review of real-world surveillance, scientific progress and evaluative mechanisms

Hannah Dee; Sergio A. Velastin

The problem of automated visual surveillance has spawned a lively research area, with 2005 seeing three conferences or workshops and special issues of two major journals devoted to the topic. These alone are responsible for somewhere in the region of 240 papers and posters on automated visual surveillance before we begin to count those presented in more general fora. Many of these systems and algorithms perform one small sub-part of the surveillance task, such as motion detection. But even with low-level image processing tasks it is often difficult to compare systems on the basis of published results alone. This review paper aims to answer the difficult question “How close are we to developing surveillance-related systems which are really useful?” The first section of this paper considers the question of surveillance in the real world: installations, systems and practices. The main body of the paper then considers existing computer vision techniques with an emphasis on higher level processes such as behaviour modelling and event detection. We conclude with a review of the evaluative mechanisms that have grown from within the computer vision community in an attempt to provide some form of robust evaluation and cross-system comparability.


British Machine Vision Conference | 2004

Detecting inexplicable behaviour

Hannah Dee; David C. Hogg

This paper presents a novel approach to the detection of unusual or interesting events in videos of scenes involving certain types of intentional behaviour, such as pedestrian scenes. The approach is based not upon a statistical measure of typicality, but upon building an understanding of the way people navigate towards a goal. The activity of agents moving around within the scene is evaluated according to whether the behaviour in question is consistent with a simple model of goal-directed behaviour and a model of those goals and obstacles known to be in the scene. The advantages of such an approach are multiple: it handles the presence of movable obstacles (for example, parked cars) with ease; trajectories which have never before been presented to the system can be classified as explicable; and the technique as a whole has a prima facie psychological plausibility. A system based upon these principles is demonstrated in two scenes: a car park and a foyer.
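The core idea, judging a trajectory by how consistently it progresses towards a known scene goal rather than by statistical typicality, can be illustrated with a far simpler proxy than the paper's navigation model. The score below (net distance closed towards the best goal divided by path length) and the goal list are assumptions made purely for illustration, not the authors' method:

```python
import numpy as np

def explicability_score(traj, goals):
    """Illustrative proxy: fraction of the path length spent making direct
    progress towards the best-matching known goal (1.0 = dead straight)."""
    traj = np.asarray(traj, dtype=float)
    path_len = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    if path_len == 0:
        return 0.0
    best = 0.0
    for g in np.asarray(goals, dtype=float):
        # net distance closed towards this goal over the whole trajectory
        progress = np.linalg.norm(traj[0] - g) - np.linalg.norm(traj[-1] - g)
        best = max(best, progress / path_len)
    return best
```

A score near 1 marks a trajectory as explicable in these terms; wandering or looping paths score low and would be the ones flagged as inexplicable.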


Pattern Recognition | 2012

Face recognition using the POEM descriptor

Ngoc-Son Vu; Hannah Dee; Alice Caplier

Real-world face recognition systems require careful balancing of three concerns: computational cost, robustness, and discriminative power. In this paper we describe a new descriptor, POEM (patterns of oriented edge magnitudes), built by applying a self-similarity-based structure to oriented magnitudes, and show that it addresses all three criteria. Experimental results on the FERET database show that POEM outperforms other descriptors when used with nearest-neighbour classifiers. On the LFW database, by combining POEM with GMMs and with multi-kernel SVMs we achieve results comparable to the state of the art. Impressively, POEM is around 20 times faster than Gabor-based methods.
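As a rough sketch of the idea (not the authors' implementation: the cell-accumulation step is omitted and the bin count is an arbitrary choice here), a POEM-style feature decomposes gradients into a few orientation channels of edge magnitudes, then applies an LBP-style self-similarity code to each channel and histograms the codes:

```python
import numpy as np

def lbp8(ch):
    """8-neighbour LBP code for every interior pixel of a 2-D array."""
    c = ch[1:-1, 1:-1]
    neighbours = [ch[:-2, :-2], ch[:-2, 1:-1], ch[:-2, 2:], ch[1:-1, 2:],
                  ch[2:, 2:], ch[2:, 1:-1], ch[2:, :-2], ch[1:-1, :-2]]
    code = np.zeros(c.shape, dtype=int)
    for i, n in enumerate(neighbours):
        code += (n >= c).astype(int) << i   # one bit per neighbour comparison
    return code

def poem_sketch(gray, n_orient=3):
    """POEM-style feature: LBP self-similarity codes computed on
    per-orientation edge-magnitude channels, then histogrammed."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned orientation
    bins = np.minimum((ang / np.pi * n_orient).astype(int), n_orient - 1)
    feats = []
    for b in range(n_orient):
        chan = np.where(bins == b, mag, 0.0)             # orientation-b magnitudes
        hist, _ = np.histogram(lbp8(chan), bins=256, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))          # normalised per channel
    return np.concatenate(feats)
```

The resulting vector (here 3 × 256 values) would be compared with a nearest-neighbour rule, per the abstract.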


Spatial Cognition and Computation | 2011

The perception and content of cast shadows: an interdisciplinary review

Hannah Dee; Paulo E. Santos

Recently, psychologists have turned their attention to the study of cast shadows and demonstrated that the human perceptual system values information from shadows very highly in the perception of spatial qualities, sometimes to the detriment of other cues. However, with some notable and recent exceptions, computer vision systems treat cast shadows not as signal but as noise. This paper provides a concise yet comprehensive review of the literature on cast shadow perception from across the cognitive sciences, including the theoretical information available, the perception of shadows in human and machine vision, and the ways in which shadows can be used.


International Conference on Image Processing | 2010

Crowd behaviour analysis using histograms of motion direction

Hannah Dee; Alice Caplier

A practical system for the automated analysis of crowded scenes will have to deal with multiple occlusions and tracking failures, in a context in which the cameras may move at any time to point in any direction, at any level of zoom. This paper presents a prototype component of such a system. Much work in crowd modelling assumes that the camera will be static for extended periods of time and that a model of the scene can therefore be learned; we do not make this assumption and instead build a simple representation of motion patterns that is applicable across different views and which learns motion scale rapidly. Our representation is based upon histograms of motion direction alongside an indication of motion speed. These can be used for detecting frames in which behaviour differs from the training set, and also for localisation of where in the image these anomalous events occur. We evaluate this work against five event-detection scenarios from the public PETS2009 crowd behaviour dataset.
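A minimal sketch of this representation, assuming dense per-pixel motion vectors (dx, dy) are already available from an optical-flow step; the bin count, speed threshold, and L1 anomaly test below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def direction_histogram(dx, dy, n_bins=8, min_speed=0.5):
    """Speed-weighted histogram of motion directions for one frame,
    ignoring near-static pixels."""
    speed = np.hypot(dx, dy)
    mask = speed > min_speed
    ang = np.mod(np.arctan2(dy[mask], dx[mask]), 2 * np.pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi),
                           weights=speed[mask])
    total = hist.sum()
    return hist / total if total > 0 else hist

def is_anomalous(hist, train_hists, thresh=0.5):
    """Flag a frame whose histogram is far (L1 distance) from every
    training-set frame."""
    return min(np.abs(hist - h).sum() for h in train_hists) > thresh
```

Because the histogram depends only on motion directions and relative speeds, it needs no static-scene model, which is the point the abstract makes about moving cameras.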


International Conference on Robotics and Automation | 2009

Qualitative robot localisation using information from cast shadows

Paulo E. Santos; Hannah Dee; Valquiria Fenelon

Recently, cognitive psychologists and others have turned their attention to the formerly neglected study of shadows, and the information they purvey. These studies show that the human perceptual system values information from shadows very highly, particularly in the perception of depth, even to the detriment of other cues. However, with a few notable exceptions, computer vision systems have treated shadows not as signal but as noise. This paper makes a step towards redressing this imbalance by considering the formal representation of shadows. We take one particular aspect of reasoning about shadows, developing the idea that shadows carry information about a fragment of the viewpoint of the light source. We start from the observation that the region on which the shadow is cast is occluded by the caster with respect to the light source and build a qualitative theory about shadows using a region-based spatial formalism about occlusion. Using this spatial formalism and a machine vision system we are able to draw simple conclusions about domain objects and egolocation for a mobile robot.


Advanced Video and Signal Based Surveillance | 2005

On the feasibility of using a cognitive model to filter surveillance data

Hannah Dee; David C. Hogg

This paper describes a novel approach to the problem of automated visual surveillance. The authors have extended an existing algorithm which uses a cognitive model of navigation to explain behaviour in a surveillance setting. We then take this cognitive model and apply it to the problem of filtering surveillance data: typically, a surveillance or CCTV installation will have a limited number of operatives monitoring a large number of cameras. The proposed system filters upon inexplicability scores, on the grounds that those trajectories which we can explain in terms of simple goals are exactly those trajectories which are uninteresting: it is only those we cannot simply explain which are worth attending to. Initial results are promising, with over 50% of uninteresting trajectories being excluded.


Applied Intelligence | 2013

Reasoning about shadows in a mobile robot environment

Valquiria Fenelon; Paulo E. Santos; Hannah Dee; Fabio Gagliardi Cozman

This paper describes a logic-based formalism for qualitative spatial reasoning with cast shadows (Perceptual Qualitative Relations on Shadows, or PQRS) and presents results of a mobile robot qualitative self-localisation experiment using this formalism. Shadow detection was accomplished by mapping the images from the robot’s monocular colour camera into an HSV colour space and then thresholding on the V dimension. We present results of self-localisation using two methods for obtaining the threshold automatically: in one method the images are segmented according to their grey-scale histograms; in the other, the threshold is set according to a prediction about the robot’s location, based upon a qualitative spatial reasoning theory about shadows. This theory-driven threshold search and the qualitative self-localisation procedure are the main contributions of the present research. To the best of our knowledge this is the first work that uses qualitative spatial representations both to perform robot self-localisation and to calibrate a robot’s interpretation of its perceptual input.
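The shadow-detection step (thresholding the V channel of HSV space) can be sketched as follows. The mean-based automatic threshold here is only a stand-in for the paper's histogram- and prediction-driven threshold search:

```python
import numpy as np

def shadow_mask_v_threshold(rgb, v_thresh=None):
    """Label a pixel as shadow when its HSV value (V) falls below a threshold.
    Assumes float RGB in [0, 1]; V is simply the per-pixel max of R, G, B."""
    v = rgb.max(axis=2)
    if v_thresh is None:
        # stand-in automatic threshold: the mean of the V channel
        # (the paper derives it from grey-scale histograms or from a
        # qualitative prediction of the robot's location)
        v_thresh = v.mean()
    return v < v_thresh      # True where dark enough to count as shadow
```

The resulting binary mask is what the qualitative occlusion-based reasoning would then consume.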


Archive | 2015

Imaging Methods for Phenotyping of Plant Traits

David Rousseau; Hannah Dee; Tony P. Pridmore

This chapter introduces the domain of image analysis, both in general and as applied to the problem of plant phenotyping. Images can be thought of as a measurement tool, and the automated processing of images allows for greater throughput, reliability and repeatability, at all scales of measurement (from microscopic to field level). This domain should be of increasing interest to plant scientists, as the cost of image-based sensors is dropping, and photographing plants on a daily or even minute-by-minute basis is now cost-effective. With such systems there is a possibility of tens of thousands of photographs being recorded, and so the job of analysing these images must now fall to computational methods. In this chapter, we provide an overview of recent work in image analysis for plant science and highlight some of the key techniques from computer vision that have been applied to date to the problem of phenotyping plants. We conclude with a description of the four main challenges for image analysis and plant science: growth, occlusion, evaluation and low-cost sensor vision.


Machine Vision and Applications | 2016

Special issue on computer vision and image analysis in plant phenotyping

Hanno Scharr; Hannah Dee; Andrew P. French; Sotirios A. Tsaftaris

Plant phenotyping is the identification of effects on the phenotype (i.e., the plant appearance and behavior) as a result of genotype differences (i.e., differences in the genetic code) and the environment. Previously, the process of taking phenotypic measurements has been laborious, costly, and time-consuming. In recent years, noninvasive, imaging-based methods have become more common. These images are recorded by a range of capture devices from small embedded camera systems to multi-million Euro smart greenhouses, at scales ranging from microscopic images of cells, to entire fields captured by UAV imaging. These images need to be analyzed in a high-throughput, robust, and accurate manner. UN-FAO statistics show that according to current population predictions we will need to achieve a 70% increase in food productivity by 2050, simply to maintain current global agricultural demands. Phenomics—large-scale measurement of plant traits—is the …

Collaboration


Dive into Hannah Dee's collaborations.

Top Co-Authors

Paulo E. Santos

Centro Universitário da FEI

Juan Cao

Aberystwyth University
