Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Fred Stentiford is active.

Publication


Featured research published by Fred Stentiford.


Conference on Image and Video Retrieval | 2007

Video copy detection: a comparative study

Julien Law-To; Li Chen; Alexis Joly; Ivan Laptev; Olivier Buisson; Valerie Gouet-Brunet; Nozha Boujemaa; Fred Stentiford

This paper presents a comparative study of methods for video copy detection. Different state-of-the-art techniques, using various kinds of descriptors and voting functions, are described: global video descriptors based on spatial and temporal features, and local descriptors based on spatial, temporal and spatio-temporal information. Robust voting functions are adapted to these techniques to enhance their performance and to compare them. A dedicated framework for evaluating these systems is then proposed. All the techniques are tested and compared within the same framework, by evaluating their robustness under single and mixed image transformations, as well as for different lengths of video segments. We discuss the performance of each approach according to the transformations and the applications considered. Local methods demonstrate superior performance over global ones when detecting video copies subjected to various transformations.
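
As a rough illustration of the global-descriptor end of this spectrum, the sketch below matches a query clip against a reference video using coarse grid-mean frame signatures and an L1 sliding-window score. The grid size and scoring are illustrative assumptions, not the paper's actual descriptors; frames are assumed to be same-size grayscale arrays.

```python
import numpy as np

def frame_descriptor(frame, grid=(2, 2)):
    """Global descriptor: mean intensity of each cell in a coarse grid."""
    h, w = frame.shape
    gh, gw = grid
    cells = frame[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return cells.mean(axis=(1, 3)).ravel()

def locate_copy(query, reference):
    """Slide the query's descriptor sequence over the reference and
    return (best_offset, best_distance) under an L1 score."""
    q = np.array([frame_descriptor(f) for f in query])
    r = np.array([frame_descriptor(f) for f in reference])
    n, m = len(q), len(r)
    dists = [np.abs(r[i:i + n] - q).sum() for i in range(m - n + 1)]
    best = int(np.argmin(dists))
    return best, float(dists[best])
```

A local-descriptor method would replace `frame_descriptor` with keypoint features and the L1 sum with a voting function, which is where the robustness differences studied in the paper arise.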


Pattern Recognition Letters | 2008

Video sequence matching based on temporal ordinal measurement

Li Chen; Fred Stentiford

This paper proposes a novel video sequence matching method based on temporal ordinal measurements. Each frame is divided into a grid, and corresponding grid cells along the time series are sorted into an ordinal ranking sequence, which gives a global and local description of temporal variation. Video sequence matching means not only finding which video a query belongs to, but also localizing it precisely in time. Robustness and discriminability are two important issues in video sequence matching, and a quantitative method is also presented to measure these attributes of the matching methods. Experiments are conducted on a BBC open news archive with a comparison of several methods.
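
A minimal sketch of the temporal ordinal idea (grid size and window handling are assumed parameters, grayscale frames assumed): each frame is reduced to grid-cell means, each cell's values are ranked along the time axis, and a query window is located by comparing rank signatures. Because ranks are invariant to monotone intensity changes, the signature survives global brightness or contrast shifts.

```python
import numpy as np

def cell_means(frames, grid=(2, 2)):
    """Per-frame grid-cell means: shape (n_frames, gh*gw)."""
    gh, gw = grid
    out = []
    for f in frames:
        h, w = f.shape
        c = f[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
        out.append(c.mean(axis=(1, 3)).ravel())
    return np.array(out)

def temporal_ordinal(seq):
    """Rank each cell's values along the time axis (temporal ordinal signature)."""
    return np.argsort(np.argsort(seq, axis=0), axis=0)

def match(query_frames, ref_frames, grid=(2, 2)):
    """Return the reference offset whose window has the closest ordinal signature."""
    q = temporal_ordinal(cell_means(query_frames, grid))
    r = cell_means(ref_frames, grid)
    n = len(query_frames)
    scores = [np.abs(temporal_ordinal(r[i:i + n]) - q).sum()
              for i in range(len(ref_frames) - n + 1)]
    return int(np.argmin(scores))
```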


Storage and Retrieval for Image and Video Databases | 2003

Attention-based image similarity measure with application to content-based information retrieval

Fred Stentiford

Whilst storage and capture technologies are able to cope with huge numbers of images, image retrieval is in danger of rendering many repositories valueless because of the difficulty of access. This paper proposes a similarity measure that imposes only very weak assumptions on the nature of the features used in the recognition process. This approach does not make use of a pre-defined set of feature measurements which are extracted from a query image and used to match those from database images, but instead generates features on a trial and error basis during the calculation of the similarity measure. This has the significant advantage that features that determine similarity can match whatever image property is important in a particular region whether it be a shape, a texture, a colour or a combination of all three. It means that effort is expended searching for the best feature for the region rather than expecting that a fixed feature set will perform optimally over the whole area of an image and over every image in a database. The similarity measure is evaluated on a problem of distinguishing similar shapes in sets of black and white symbols.
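
The trial-and-error feature idea can be caricatured in a few lines. This is a toy sketch with assumed parameters (fork size, tolerance, trial counts), for same-size grayscale arrays, not the paper's exact algorithm: random pixel "forks" are sampled in one image, and each counts as evidence of similarity when a matching fork can be found somewhere in the other image.

```python
import numpy as np

def attention_similarity(a, b, n_trials=500, fork_size=3, radius=2, tol=0.1, rng=None):
    """Score similarity by generating random pixel 'forks' in image a and
    counting how many can be matched somewhere in image b; features are
    produced during the calculation, not fixed in advance."""
    rng = rng or np.random.default_rng(0)
    h, w = a.shape
    hits = 0
    for _ in range(n_trials):
        # random fork centre in a, kept away from the border
        ax, ay = rng.integers(radius, h - radius), rng.integers(radius, w - radius)
        offs = rng.integers(-radius, radius + 1, size=(fork_size, 2))
        vals = np.array([a[ax + dx, ay + dy] for dx, dy in offs])
        # trial-and-error search for a matching fork in b
        for _ in range(20):
            bx, by = rng.integers(radius, h - radius), rng.integers(radius, w - radius)
            bvals = np.array([b[bx + dx, by + dy] for dx, dy in offs])
            if np.all(np.abs(bvals - vals) < tol):
                hits += 1
                break
    return hits / n_trials
```

Because each fork samples whatever property happens to distinguish its region, the same mechanism responds to shape, texture or intensity structure without committing to any of them in advance.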


Electronic Imaging | 2006

Using Context and Similarity for Face and Location Identification

Marc Davis; Michael Smith; Fred Stentiford; Adetokunbo Bamidele; John F. Canny; Nathan Good; Simon P. King; Rajkumar Janakiraman

This paper describes a new approach to the automatic detection of human faces and places depicted in photographs taken on cameraphones. Cameraphones offer a unique opportunity to pursue new approaches to media analysis and management: namely to combine the analysis of automatically gathered contextual metadata with media content analysis to fundamentally improve image content recognition and retrieval. Current approaches to content-based image analysis are not sufficient to enable retrieval of cameraphone photos by high-level semantic concepts, such as who is in the photo or what the photo is actually depicting. In this paper, new methods for determining image similarity are combined with analysis of automatically acquired contextual metadata to substantially improve the performance of face and place recognition algorithms. For faces, we apply Sparse-Factor Analysis (SFA) to both the automatically captured contextual metadata and the results of PCA (Principal Components Analysis) of the photo content to achieve a 60% face recognition accuracy of people depicted in our database of photos, which is 40% better than media analysis alone. For location, grouping visually similar photos using a model of Cognitive Visual Attention (CVA) in conjunction with contextual metadata analysis yields a significant improvement over color histogram and CVA methods alone. We achieve an improvement in location retrieval precision from 30% precision for color histogram and CVA image analysis, to 55% precision using contextual metadata alone, to 67% precision achieved by combining contextual metadata with CVA image analysis. The combination of context and content analysis produces results that can indicate the faces and places depicted in cameraphone photos significantly better than image analysis or context analysis alone. We believe these results indicate the possibilities of a new context-aware paradigm for image analysis.
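
The SFA-plus-PCA combination reported here is richer than can be reproduced from the abstract, but the general late-fusion pattern of combining content and context scores can be sketched as follows; the weight `w` and the min-max normalization are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fused_ranking(content_scores, context_scores, w=0.5):
    """Rank candidates by a weighted sum of min-max normalized content
    similarity and context similarity; returns indices, best first."""
    c = (content_scores - content_scores.min()) / (np.ptp(content_scores) or 1.0)
    x = (context_scores - context_scores.min()) / (np.ptp(context_scores) or 1.0)
    return np.argsort(-(w * c + (1 - w) * x))
```

The paper's precision figures (30% for content alone, 55% for context alone, 67% combined) illustrate why such fusion helps: the two signals fail on different photos, so combining them recovers cases each misses alone.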


BT Technology Journal | 2004

An Attention-Based Approach to Content-Based Image Retrieval

A. Bamidele; Fred Stentiford; J Morphett

Mark Weiser's vision that ubiquitous computing will overcome the problem of information overload by embedding computation in the environment is on the verge of becoming a reality. Today's technology is capable of handling the many different forms of multimedia that pervade our lives, and as a result is creating a healthy demand for new content management and retrieval services. This demand is everywhere: it comes from mobile videophone owners, digital camera owners, the entertainment industry, medicine, surveillance, the military, and virtually every library and museum in the world where multimedia assets lie unknown, unseen and unused.

The volume of visual data in the world is increasing exponentially through the mass-market use of digital camcorders and cameras. These are the modern-day consumer equivalents of ubiquitous computers, and, although storage space is in plentiful supply, access and retrieval remain a severe bottleneck both for the home user and for industry. This paper describes an approach that makes use of a visual attention model together with a similarity measure to automatically identify salient visual material and generate searchable metadata that associates related items in a database. Such a system for content classification and access will be of great use in current and future pervasive environments where static and mobile retrieval of visual imagery is required.


Pattern Recognition | 2007

Attention-based similarity

Fred Stentiford

A similarity measure is described that does not require the prior specification of features or the need for training sets of representative data. Instead large numbers of features are generated as part of the similarity calculation and the extent to which features can be found to be common to pairs of patterns determines the measure of their similarity. Emphasis is given to salient image regions in this process and it is shown that the parameters of invariant transforms may be extracted from the statistics of matching features and used to focus the similarity calculation. Some results are shown on MPEG-7 shape data and discussed in the paper.
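
One concrete instance of extracting an invariant transform from matching-feature statistics is recovering a dominant translation as the mode of the displacement vectors between matched points. This is a simplified sketch of that single case; the paper treats a broader family of transforms.

```python
from collections import Counter

def dominant_translation(matches):
    """Given (point_in_a, point_in_b) feature matches, recover the dominant
    translation as the mode of the displacement vectors; mismatched
    (outlier) pairs vote for scattered displacements and are outvoted."""
    votes = Counter((bx - ax, by - ay) for (ax, ay), (bx, by) in matches)
    return votes.most_common(1)[0][0]
```

Once the dominant transform is known, the similarity calculation can be focused on features consistent with it, which is the mechanism the abstract describes.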


International Conference on Pattern Recognition | 2005

Attention based facial symmetry detection

Fred Stentiford

Symmetry is a fundamental structure that is found to some extent in all images. It is thought to be an important factor in the human visual system for obtaining understanding and extracting semantics from visual material. This paper describes a method of detecting axes of reflective symmetry in faces that does not require prior assumptions about the image being analysed. The approach is derived from earlier work on visual attention that identifies salient regions and translational symmetries.
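
A stripped-down version of reflective symmetry detection for a strictly vertical axis (a brute-force sketch, not the attention-based method itself) scores every candidate column by how well the columns on either side mirror each other:

```python
import numpy as np

def best_vertical_axis(img):
    """Score each candidate vertical axis column by the mean absolute
    difference between its mirrored column pairs; return the best column."""
    h, w = img.shape
    scores = np.full(w, np.inf)
    for c in range(1, w - 1):
        half = min(c, w - 1 - c)
        left = img[:, c - half:c]
        right = img[:, c + 1:c + 1 + half][:, ::-1]
        scores[c] = np.abs(left - right).mean()
    return int(np.argmin(scores))
```

The paper's approach generalizes this by finding arbitrary reflection axes from the statistics of salient matching regions rather than exhaustively scoring one axis orientation.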


International Conference on Pattern Recognition | 2004

A visual attention estimator applied to image subject enhancement and colour and grey level compression

Fred Stentiford

Image segmentation technology has immediate application to compression where image regions can be identified and economically coded for storage or transmission. Normally segmentation identifies regions that are uniform and homogeneous with respect to some characteristic such as colour or texture and ideally these regions coincide with image objects and thereby offer the potential of huge compression ratios. This paper proposes a technique for colour variability reduction that only affects the background material and leaves perceptually important areas unchanged.
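
The core idea, reducing colour variability only where attention is low, can be sketched for a grayscale image with a precomputed saliency map; the threshold and level count here are illustrative assumptions.

```python
import numpy as np

def compress_background(img, saliency, levels=4, thresh=0.5):
    """Quantize grey levels only where saliency is low, leaving
    perceptually important (high-saliency) regions untouched."""
    step = 1.0 / levels
    quantized = np.round(img / step) * step
    return np.where(saliency >= thresh, img, quantized)
```

The background then compresses well (few distinct values) while subject regions keep full fidelity, which is the asymmetry the paper exploits.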


International Conference on Image Processing | 2006

Attention-Based Vanishing Point Detection

Fred Stentiford

Perspective is a fundamental structure that is found to some extent in most images that reflect 3D structure. It is thought to be an important factor in the human visual system for obtaining understanding and extracting semantics from visual material. This paper describes a method of detecting vanishing points in images that does not require prior assumptions about the image being analysed. It enables 3D information to be inferred from 2D images. The approach is derived from earlier work on visual attention that identifies salient regions and translational symmetries.
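
For comparison, a conventional (non-attention) baseline estimates a vanishing point by voting with pairwise intersections of detected line segments; the sketch below assumes lines are already given as (point, direction) pairs, which sidesteps the line detection the paper avoids needing.

```python
import numpy as np

def vanishing_point(lines):
    """Estimate a vanishing point as the median of pairwise intersections
    of lines, each given as (point, direction)."""
    pts = []
    for i in range(len(lines)):
        p1, d1 = lines[i]
        for j in range(i + 1, len(lines)):
            p2, d2 = lines[j]
            # solve p1 + t*d1 = p2 + s*d2 for the crossing point
            A = np.array([d1, [-d2[0], -d2[1]]]).T
            if abs(np.linalg.det(A)) < 1e-9:
                continue  # parallel lines never meet
            t, _ = np.linalg.solve(A, np.array(p2) - np.array(p1))
            pts.append(np.array(p1) + t * np.array(d1))
    return np.median(pts, axis=0)
```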


Eye Tracking Research & Applications | 2006

An eye tracking interface for image search

Oyewole Oyekoya; Fred Stentiford

Eye tracking presents an adaptive approach that can capture the user's current needs and tailor the retrieval accordingly. Applying eye tracking to image retrieval requires that new strategies be devised that can use visual and algorithmic data to obtain natural and rapid retrieval of images. Recent work showed that the eye is faster than the mouse as a source of visual input in a target image identification task [Oyekoya and Stentiford 2005]. We explore the viability of using the eye to drive an image retrieval interface. In a visual search task, users are asked to find a target image in a database and the number of steps to the target image is counted. It is reasonable to believe that users will look at the objects in which they are interested during a search [Oyekoya and Stentiford 2004], and this provides the machine with the necessary information to retrieve a succession of plausible candidate images for the user.
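
The step-counted search task can be simulated with a toy model in which the "fixated" image is simply the displayed image closest to the target in feature space; the feature vectors, screen size `k` and step cap are illustrative assumptions, not the study's setup.

```python
import numpy as np

def steps_to_target(features, target_idx, start_idx, k=4, max_steps=20):
    """Simulated gaze-driven retrieval: each step shows the k nearest
    neighbours of the current fixation, the user fixates whichever displayed
    image is closest to the target, and the step count to reach it is returned."""
    current = start_idx
    for step in range(1, max_steps + 1):
        # k nearest neighbours of the fixated image (excluding itself)
        d = np.linalg.norm(features - features[current], axis=1)
        screen = np.argsort(d)[1:k + 1]
        if target_idx in screen:
            return step
        # the fixation drives the next retrieval round
        td = np.linalg.norm(features[screen] - features[target_idx], axis=1)
        current = int(screen[np.argmin(td)])
    return max_steps
```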

Collaboration


Dive into Fred Stentiford's collaborations.

Top Co-Authors

Oyewole Oyekoya (University College London)

Shijie Zhang (University College London)

Li Chen (University College London)

A. Bamidele (University College London)

Mark Schmidt (University of Edinburgh)