Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Liam F. Ellis is active.

Publication


Featured research published by Liam F. Ellis.


Computer Vision and Pattern Recognition | 2013

Discriminative Subspace Clustering

Vasileios Zografos; Liam F. Ellis; Rudolf Mester

We present a novel method for clustering data drawn from a union of arbitrary-dimensional subspaces, called Discriminative Subspace Clustering (DiSC). DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined in an ensemble to obtain the final clustering result. We have tested our method on 4 challenging datasets and compared against 8 state-of-the-art methods from the literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, while also having low computational complexity.
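The "clustering by classification" idea in the abstract can be sketched on toy data: pseudo-label a few points near two seeds (exploiting locality), train a quadratic classifier on them, and classify everything. This is an illustrative sketch only, not the DiSC implementation; the data, seed rule, and single classification round are all assumptions (the paper ensembles many diversely trained classifiers).

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(2)

# Two noisy 1-D subspaces (lines through the origin) in 2-D: a toy
# stand-in for the union-of-subspaces data the paper targets.
t = rng.uniform(-3.0, 3.0, size=(200, 1))
line_a = t[:100] * np.array([1.0, 0.2]) + 0.02 * rng.normal(size=(100, 2))
line_b = t[100:] * np.array([0.2, 1.0]) + 0.02 * rng.normal(size=(100, 2))
X = np.vstack([line_a, line_b])
true = np.repeat([0, 1], 100)

# Clustering by classification: pseudo-label a handful of points near
# two seed points (locality: nearby points tend to share a subspace),
# train a quadratic classifier on them, then classify every point.
seed_a = int(np.argmax(np.abs(X[:, 0])))   # far out along the first line
seed_b = int(np.argmax(np.abs(X[:, 1])))   # far out along the second line
near_a = np.argsort(np.linalg.norm(X - X[seed_a], axis=1))[:15]
near_b = np.argsort(np.linalg.norm(X - X[seed_b], axis=1))[:15]
clf = QuadraticDiscriminantAnalysis()
clf.fit(X[np.concatenate([near_a, near_b])], np.repeat([0, 1], 15))
labels = clf.predict(X)

# Account for the arbitrary cluster numbering when scoring.
acc = max(np.mean(labels == true), np.mean(labels != true))
```

Only points near the intersection of the two lines are genuinely ambiguous; the ensemble over many partitions in the paper is what stabilises such cases.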


International Journal of Computer Vision | 2011

Linear Regression and Adaptive Appearance Models for Fast Simultaneous Modelling and Tracking

Liam F. Ellis; Nicholas Dowson; Jiri Matas; Richard Bowden

This work proposes an approach to tracking by regression that uses no hard-coded models and no offline learning stage. The Linear Predictor (LP) tracker has been shown to be highly computationally efficient, resulting in fast tracking. Regression tracking techniques tend to require offline learning to learn suitable regression functions. This work removes the need for offline learning and therefore increases the applicability of the technique. The online-LP tracker can simply be seeded with an initial target location, akin to the ubiquitous Lucas-Kanade algorithm that tracks by registering an image template via minimisation.

A fundamental issue for all trackers is the representation of the target appearance and how this representation adapts to changes in target appearance over time. The two proposed methods, LP-SMAT and LP-MED, demonstrate the ability to adapt to large appearance variations by incrementally building an appearance model that identifies modes or aspects of the target appearance and associates these aspects with the Linear Predictor trackers to which they are best suited. Experiments comparing regression and registration techniques are presented, along with performance evaluations that favourably compare the proposed tracker and appearance-model learning methods to other state-of-the-art simultaneous modelling and tracking approaches.
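The core of a linear-predictor tracker as described above can be sketched in a few lines: learn a linear map from intensity differences to displacements by least squares over synthesized perturbations, then track with a single dot product per frame. A minimal 1-D sketch under toy assumptions (synthetic sinusoid in place of image pixels), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-in for an image template: the linear predictor maps
# intensity differences to a displacement estimate.
def sample_signal(positions, shift):
    return np.sin(0.3 * (positions + shift))

support = np.arange(40, dtype=float)       # support-pixel positions
template = sample_signal(support, 0.0)     # appearance at the seed location

# Online learning step: synthesize random displacements around the
# seed, record the intensity difference each one produces, and solve
# for the predictor P by least squares so that  diff @ P ≈ displacement.
shifts = rng.uniform(-2.0, 2.0, size=200)
D = np.stack([sample_signal(support, s) - template for s in shifts])
P, *_ = np.linalg.lstsq(D, shifts, rcond=None)

# Tracking step: one dot product per frame, hence the speed.
true_shift = 1.3
diff = sample_signal(support, true_shift) - template
estimate = float(diff @ P)
```

Because learning reduces to one least-squares solve on self-generated training pairs, the predictor can be built online from nothing but the seed location, which is the point of the abstract.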


International Conference on Computer Vision | 2007

Linear Predictors for Fast Simultaneous Modeling and Tracking

Liam F. Ellis; Nicholas Dowson; Jiri Matas; Richard Bowden

An approach for fast tracking of arbitrary image features with no prior model and no offline learning stage is presented. Fast tracking is achieved using banks of linear displacement predictors learnt online. A multi-modal appearance model is also learnt on-the-fly that facilitates the selection of subsets of predictors suitable for prediction in the next frame. The approach is demonstrated in real-time on a number of challenging video sequences and experimentally compared to other simultaneous modeling and tracking approaches with favourable results.


2013 IEEE Workshop on Robot Vision (WORV) | 2013

Autonomous navigation and sign detector learning

Liam F. Ellis; Nicolas Pugeault; Kristoffer Öfjäll; Johan Hedborg; Richard Bowden; Michael Felsberg

This paper presents an autonomous robotic system that incorporates novel Computer Vision, Machine Learning and Data Mining algorithms in order to learn to navigate and discover important visual entities. This is achieved within a Learning from Demonstration (LfD) framework, where policies are derived from example state-to-action mappings. For autonomous navigation, a mapping is learnt from holistic image features (GIST) onto control parameters using Random Forest regression. Additionally, visual entities (road signs, e.g. a STOP sign) that are strongly associated with autonomously discovered modes of action (e.g. stopping behaviour) are discovered through a novel Percept-Action Mining methodology. The resulting sign detector is learnt without any supervision (no image labelling or bounding-box annotations are used). The complete system is demonstrated on a fully autonomous robotic platform, featuring a single camera mounted on a standard remote-control car. The robot carries a laptop that performs all processing on board in real time.
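The navigation component described above, a regression from a holistic image descriptor to control parameters, can be sketched with off-the-shelf tools. Everything below is illustrative: synthetic features stand in for GIST vectors, and the steering rule and all numbers are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Stand-in for holistic GIST descriptors: synthetic features with a
# known feature-to-steering relationship play the role of the
# whole-image feature vector used in the paper.
n_demos, n_features = 500, 8
X = rng.normal(size=(n_demos, n_features))
steering = 0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.05 * rng.normal(size=n_demos)

# Learning from Demonstration: fit a state-to-action mapping from
# example (descriptor, control) pairs using Random Forest regression.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X[:400], steering[:400])

# At run time, each new frame's descriptor maps directly to a control.
pred = forest.predict(X[400:])
rmse = float(np.sqrt(np.mean((pred - steering[400:]) ** 2)))
```

A forest of shallow regressors is a natural fit here: prediction is cheap enough for on-board, real-time control, and no parametric form for the descriptor-to-control mapping has to be assumed.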


International Conference on Computer Vision | 2010

Sparse motion segmentation using multiple six-point consistencies

Vasileios Zografos; Klas Nordberg; Liam F. Ellis

We present a method for segmenting an arbitrary number of moving objects in image sequences, using the geometry of 6 points in 2D to infer motion consistency. The method has been evaluated on the Hopkins 155 database and surpasses current state-of-the-art methods such as SSC, both in terms of overall performance on two and three motions and in terms of maximum errors. The method works by finding initial clusters in the spatial domain and then classifying each remaining point as belonging to the cluster that minimises a motion-consistency score. In contrast to most other motion segmentation methods, which are based on an affine camera model, the proposed method is fully projective.


British Machine Vision Conference | 2008

Online Learning and Partitioning of Linear Displacement Predictors for Tracking

Liam F. Ellis; Jiri Matas; Richard Bowden

A novel approach to learning and tracking arbitrary image features is presented. Tracking is tackled by learning the mapping from image intensity differences to displacements. Linear regression is used, resulting in low computational cost. An appearance model of the target is built on-the-fly by clustering sub-sampled image templates. The medoidshift algorithm is used to cluster the templates, thus identifying various modes or aspects of the target appearance; each mode is associated with the most suitable set of linear predictors, allowing piecewise-linear regression from image intensity differences to warp updates. Despite no hard-coding or offline learning, excellent results are shown on three publicly available video sequences and comparisons with related approaches are made.
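The appearance-model side of the abstract, grouping stored templates into modes, can be illustrated with a medoid-based clustering pass. Note the substitution: a plain two-medoid loop stands in for the medoidshift algorithm the paper actually uses, and the synthetic "templates" are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "templates": flattened patches drawn from two appearance modes
# (say, a bright and a dark aspect of the target).
mode_a = rng.normal(0.2, 0.05, size=(30, 64))
mode_b = rng.normal(0.8, 0.05, size=(30, 64))
templates = np.vstack([mode_a, mode_b])
dist = np.linalg.norm(templates[:, None] - templates[None, :], axis=2)

def two_medoids(dist, iters=10):
    # Farthest-point initialisation, then alternate assignment and
    # medoid update (medoid = member minimising intra-cluster distance).
    medoids = [0, int(np.argmax(dist[0]))]
    for _ in range(iters):
        assign = np.argmin(dist[:, medoids], axis=1)
        for c in (0, 1):
            members = np.where(assign == c)[0]
            within = dist[np.ix_(members, members)].sum(axis=1)
            medoids[c] = int(members[np.argmin(within)])
    return np.argmin(dist[:, medoids], axis=1)

labels = two_medoids(dist)   # each template gets an appearance mode
```

Once templates are grouped into modes like this, each mode can be paired with the set of linear predictors that track it best, giving the piecewise-linear regression the abstract describes.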


International Conference on Image Analysis and Processing | 2005

Unsupervised symbol grounding and cognitive bootstrapping in cognitive vision

Richard Bowden; Liam F. Ellis; Josef Kittler; Mikhail Shevchenko; David Windridge

In conventional computer vision systems, symbol grounding is invariably established via supervised learning. We investigate unsupervised symbol-grounding mechanisms that rely on perception-action coupling. The mechanisms involve unsupervised clustering of observed actions and percepts. Their association gives rise to behaviours that emulate human action. The capability of the system is demonstrated on the problem of mimicking shape-puzzle solving. It is argued that the same mechanisms support unsupervised cognitive bootstrapping in cognitive vision.


International Conference on Computer Vision | 2005

A generalised exemplar approach to modeling perception action coupling

Liam F. Ellis; Richard Bowden

We present a framework for autonomous behaviour in vision-based artificial cognitive systems, achieved by imitation through coupled percept-action (stimulus and response) exemplars. Attributed Relational Graphs (ARGs) are used as a symbolic representation of scene information (percepts). A measure of similarity between ARGs is implemented with the use of a graph isomorphism algorithm and is used to hierarchically group the percepts. By hierarchically grouping percept exemplars into progressively more general models coupled to progressively more general Gaussian action models, we attempt to model the percept space and create a direct mapping to associated actions. The system is built on a simulated shape-sorter puzzle that represents a robust vision system. Spatio-temporal hypothesis exploration is performed efficiently in a Bayesian framework, using a particle filter to propagate game play over time.


Asian Conference on Computer Vision | 2012

Online learning for fast segmentation of moving objects

Liam F. Ellis; Vasileios Zografos


Archive | 2007

Learning Responses to Visual Stimuli: A Generic Approach

Liam F. Ellis; Richard Bowden

Collaboration


Dive into Liam F. Ellis's collaborations.

Top Co-Authors

Jiri Matas

Czech Technical University in Prague

Nicholas Dowson

Commonwealth Scientific and Industrial Research Organisation