Publication


Featured research published by Colin J. Dalton.


Image and Vision Computing | 2004

Practical generation of video textures using the auto-regressive process

Neill W. Campbell; Colin J. Dalton; David P. Gibson; Dj Oziem; Barry T. Thomas

Recently, there have been several attempts at creating ‘video textures’, that is, synthesising new (potentially infinitely long) video clips based on existing ones. One method for achieving this is to transform each frame of the video into an eigenspace using Principal Components Analysis so that the original sequence can be viewed as a signature through a low-dimensional space. A new sequence can be generated by moving through this space and creating ‘similar’ signatures. These signatures may be derived using an auto-regressive process (ARP). Such an ARP assumes that the signature has Gaussian statistics. For many sequences this assumption is valid; however, some sequences are strongly non-linearly correlated, in which case their statistical properties are non-Gaussian. We examine two methods by which such non-linearities may be overcome: the first models the non-linearity automatically using a spline, and the second uses a combined appearance model. New video sequences created using these approaches contain images never present in the original sequence and appear very convincing.
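The PCA-plus-ARP pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration on made-up sinusoidal "frames", not the paper's implementation; a simple first-order linear AR model stands in for the ARP, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for video frames: T frames of D "pixels" each.
T, D, k = 200, 64, 4
t = np.arange(T)
frames = np.stack([np.sin(0.1 * t + p) for p in np.linspace(0, np.pi, D)], axis=1)
frames += 0.01 * rng.standard_normal(frames.shape)

# 1. PCA: project each frame into a low-dimensional eigenspace, so the
#    sequence becomes a k-dimensional "signature" trajectory.
mean = frames.mean(axis=0)
U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
signature = (frames - mean) @ Vt[:k].T          # shape (T, k)

# 2. Fit a first-order auto-regressive model x_t = x_{t-1} @ A + noise.
X_prev, X_next = signature[:-1], signature[1:]
A, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)
resid = X_next - X_prev @ A
noise_cov = np.cov(resid.T)

# 3. Synthesise a new (arbitrarily long) signature and decode it back
#    into frames; these frames need not appear in the original sequence.
steps = 500
x = signature[0].copy()
new_frames = []
for _ in range(steps):
    x = x @ A + rng.multivariate_normal(np.zeros(k), noise_cov)
    new_frames.append(mean + x @ Vt[:k])
new_frames = np.array(new_frames)
```

The Gaussian-noise step is exactly where the abstract's caveat bites: for strongly non-linearly correlated signatures, the fitted residuals are not Gaussian, which motivates the spline and combined-appearance-model extensions.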


Graphical Models | 2007

A system for the capture and synthesis of insect motion

David P. Gibson; Dj Oziem; Colin J. Dalton; Neill W. Campbell

We present an integrated system that enables the capture and synthesis of 3D motions of small-scale dynamic creatures, typically insects and arachnids, in order to drive computer-generated models. The system consists of a number of stages. Initially, a multi-view calibration scene and synchronised video footage of a subject performing some action are acquired. A user-guided labelling process, which can be semi-automated using tracking techniques and a 3D point-generating algorithm, then enables a full metric calibration and captures the motions of specific points on the subject. The extracted 3D motions, which often come from a limited number of frames of the original footage, are then extended to generate potentially infinitely long, characteristic motion sequences for multiple similar subjects. Finally, a novel path-following algorithm is used to find an optimal path, along with coherent motion, for synthetic subjects.
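The 3D point generation that such a multi-view capture stage relies on is typically some form of triangulation. The paper does not specify its algorithm, so as a hedged illustration here is a minimal two-view linear (DLT) triangulation with made-up camera matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D point from its 2-D projections x1, x2 seen by
    cameras with 3x4 projection matrices P1, P2 (linear DLT method)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector = homogeneous 3-D point
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic calibrated cameras observing a known point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted camera
Xtrue = np.array([0.3, -0.2, 4.0])

Xhat = triangulate(P1, P2, project(P1, Xtrue), project(P2, Xtrue))
```

With noise-free projections the null space of `A` recovers the point exactly; in a real capture pipeline the labelled image points are noisy and the metric calibration itself must first be estimated.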


British Machine Vision Conference | 2000

Extraction of Motion Data from Image Sequences to Assist Animators

David P. Gibson; Neill W. Campbell; Colin J. Dalton; Barry T. Thomas

We describe a system which is designed to assist animators in extracting high-level information from sequences of images. The system is not meant to replace animators, but to be a tool to assist them in creating the first ‘roughcut’ of a sequence quickly and easily. Using the system, short animations have been created in a very short space of time. We show that the method of principal components analysis followed by a neural network learning phase is capable of motion tracking (even through occlusion), feature-extraction and gait classification. We quantify the results, and demonstrate the system tracking horses, birds and actors in a film. We demonstrate a system that is powerful, flexible and, above all, easy for non-specialists to use.
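The principal-components-then-neural-network idea can be sketched as follows. This is a toy stand-in using synthetic blob images and scikit-learn: the paper's data, network architecture and tracking targets are all different, and the 2-D "feature position" here is an invented regression target.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in: 300 "frames" whose content depends on a hidden
# 2-D feature position that the system should learn to track.
n, side = 300, 16
positions = rng.uniform(3, side - 3, size=(n, 2))       # ground-truth (x, y)
frames = np.zeros((n, side * side))
for i, (x, y) in enumerate(positions):
    img = np.zeros((side, side))
    img[int(y) - 2:int(y) + 2, int(x) - 2:int(x) + 2] = 1.0   # bright blob
    frames[i] = img.ravel()

# 1. PCA reduces each frame to a handful of coefficients.
pca = PCA(n_components=10).fit(frames[:200])
coeffs_train = pca.transform(frames[:200])
coeffs_test = pca.transform(frames[200:])

# 2. A small neural network learns the mapping
#    PCA coefficients -> feature position, from a labelled subset.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(coeffs_train, positions[:200])

# 3. Track the feature in unseen frames.
pred = net.predict(coeffs_test)
err = np.abs(pred - positions[200:]).mean()
```

The appeal for animators is in step 2: only the training subset needs hand labelling, and the learned mapping then produces a first rough cut over the remaining frames automatically.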


Computer Graphics International | 2004

Combining sampling and autoregression for motion synthesis

Dj Oziem; Neill W. Campbell; Colin J. Dalton; David P. Gibson; Barry T. Thomas

We present a novel approach to motion synthesis. It is shown that by splitting sequences into segments, new sequences can be created with a similar look and feel to the original. Copying segments of the original data generates a sequence which maintains detailed characteristics. By modelling each segment using an autoregressive process we can introduce new segments and therefore unseen motions. These statistical models allow a potentially infinite number of new segments to be generated. We show that this system can model complicated non-stationary sequences, which a single ARP is unable to do.
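The combination of copying and AR modelling can be sketched per segment. This is a minimal 1-D illustration with an invented non-stationary signal and a hand-rolled second-order AR fit; the paper works with higher-dimensional motion data and a more careful model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy non-stationary motion curve: two regimes that a single ARP
# fitted to the whole signal would conflate.
t = np.arange(400)
motion = np.where(t < 200, np.sin(0.2 * t), 0.3 * np.sin(0.05 * t) + 1.0)

seg_len = 50
segments = motion.reshape(-1, seg_len)          # split into fixed segments

def fit_ar2(seg):
    """Least-squares fit of x_t = a1*x_{t-1} + a2*x_{t-2} + noise."""
    X = np.column_stack([seg[1:-1], seg[:-2]])
    y = seg[2:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = np.std(y - X @ coef)
    return coef, sigma

def sample_ar2(coef, sigma, start, n):
    out = list(start)
    for _ in range(n - 2):
        out.append(out[-1] * coef[0] + out[-2] * coef[1] + rng.normal(0, sigma))
    return np.array(out)

# Build a new sequence segment by segment: copying keeps fine detail,
# sampling the segment's ARP introduces unseen motion.
new_seq = []
for seg in segments:
    if rng.random() < 0.5:
        new_seq.append(seg)                     # copy: detailed characteristics
    else:
        coef, sigma = fit_ar2(seg)
        new_seq.append(sample_ar2(coef, sigma, seg[:2], seg_len))
new_seq = np.concatenate(new_seq)
```

Because each segment carries its own model, the synthesised sequence can switch regimes in a way no single stationary ARP over the full signal could reproduce.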


International Conference on Pattern Recognition | 2000

Visual extraction of motion-based information from image sequences

David P. Gibson; Neill W. Campbell; Colin J. Dalton; Barry T. Thomas

We describe a system which is designed to assist in extracting high-level information from sets or sequences of images. We show that the method of principal components analysis followed by a neural network learning phase is capable of feature extraction or motion tracking, even through occlusion. Given a minimum amount of user direction for the learning phase, a wide range of features can be automatically extracted. Features discussed in this paper include information associated with human head motions and a bird's wings during take-off. We have quantified the results, for instance showing that with only 25 out of 424 frames of hand-labelled information, a system to track a person's nose can be trained almost as accurately as a human attempting the same task. We demonstrate a system that is powerful, flexible and, above all, easy for non-specialists to use.


Computer Vision/Computer Graphics Collaboration Techniques | 2011

Facial movement based recognition

Alexander Davies; Carl Henrik Ek; Colin J. Dalton; Neill W. Campbell

The modelling and understanding of the facial dynamics of individuals is crucial to achieving higher levels of realistic facial animation. We address the recognition of individuals through modelling the facial motions of several subjects. Modelling facial motion comes with numerous challenges, including accurate and robust tracking of facial movement, high-dimensional data processing and non-linear spatio-temporal structural motion. We present a novel framework which addresses these problems through the use of video-specific Active Appearance Models (AAM) and Gaussian Process Latent Variable Models (GP-LVM). Our experiments qualitatively and quantitatively demonstrate the framework's ability to successfully differentiate individuals by temporally modelling appearance-invariant facial motion, supporting the proposition that a facial activity model may assist in the areas of motion retargeting, motion synthesis and experimental psychology.


International Conference on Computer Graphics and Interactive Techniques | 2004

Statistical synthesis of facial expressions for the portrayal of emotion

Lisa Gralewski; Neill W. Campbell; Barry T. Thomas; Colin J. Dalton; David P. Gibson


Symposium on Computer Animation | 2005

Capture and synthesis of insect motion

David P. Gibson; Dj Oziem; Colin J. Dalton; Neill W. Campbell


International Conference on Computer Graphics and Interactive Techniques | 2003

Varying rendering fidelity by exploiting human change blindness

Kirsten Cater; Alan Chalmers; Colin J. Dalton


British Machine Vision Conference | 2002

Practical Generation of Video Textures using the Auto-Regressive Process

Neill W. Campbell; Colin J. Dalton; David P. Gibson; Barry T. Thomas

Collaboration


Dive into Colin J. Dalton's collaboration.

Top Co-Authors

Dj Oziem

University of Bristol


Carl Henrik Ek

Royal Institute of Technology
