
Publications


Featured research published by Iain A. Matthews.


Speech Communication | 2004

Near-videorealistic synthetic talking faces: implementation and evaluation

Barry-John Theobald; J. Andrew Bangham; Iain A. Matthews; Gavin C. Cawley

The application of two-dimensional (2D) shape and appearance models to the problem of creating realistic synthetic talking faces is presented. A sample-based approach is adopted, where the face of a talker articulating a series of phonetically balanced training sentences is mapped to a trajectory in a low-dimensional model-space that has been learnt from the training data. Segments extracted from this trajectory corresponding to the synthesis units (e.g. triphones) are temporally normalised, blended, concatenated and smoothed to form a new trajectory, which is mapped back to the image domain to provide a natural, realistic sequence corresponding to the desired (arbitrary) utterance. The system has undergone early subjective evaluation of the naturalness of this synthesis approach. We describe tests of the suitability of the parameter smoothing method used to remove discontinuities introduced at the concatenation boundaries during synthesis, and tests of how well long-term coarticulation effects are reproduced by the adopted unit-selection scheme. The system has also been extended to animate the face of a 3D virtual character (avatar), and this extension is also described.
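The mapping between face images and a low-dimensional model space described above is, in spirit, a statistical appearance model. As a rough illustration only (the paper's actual model combines shape and appearance; the function names here are invented), a PCA-based encode/decode over vectorized frames might look like:

```python
import numpy as np

def fit_model(frames, k):
    """Fit a k-dimensional linear appearance model to (n_frames, n_pixels) data."""
    mean = frames.mean(axis=0)
    X = frames - mean
    # rows of Vt are the principal appearance modes
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return mean, Vt[:k]

def encode(frame, mean, basis):
    """Project one vectorized frame into the low-dimensional model space."""
    return basis @ (frame - mean)

def decode(params, mean, basis):
    """Map a point in model space back to the image domain."""
    return mean + basis.T @ params
```

A training video then becomes a trajectory of `encode`d parameter vectors, and synthesis works entirely on those trajectories before `decode` renders frames.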


European Conference on Computer Vision | 1998

A Comparison of Active Shape Model and Scale Decomposition Based Features for Visual Speech Recognition

Iain A. Matthews; J. Andrew Bangham; Richard W. Harvey; Stephen J. Cox

Two quite different strategies for characterising mouth shapes for visual speech recognition (lipreading) are compared. The first strategy extracts the parameters required to fit an active shape model (ASM) to the outline of the lips. The second uses a feature derived from a one-dimensional multiscale spatial analysis (MSA) of the mouth region using a new processor derived from mathematical morphology and median filtering. With multispeaker trials, using image data only, the accuracy is 45% using MSA and 19% using ASM on a letters database. A digits database is simpler with accuracies of 77% and 77% respectively. These scores are significant since separate work has demonstrated that even quite low recognition accuracies in the vision channel can be combined with the audio system to give improved composite performance [16].


Computer Vision and Pattern Recognition | 1997

Lip reading from scale-space measurements

Richard W. Harvey; Iain A. Matthews; J. Andrew Bangham; Stephen J. Cox

Systems that attempt to recover the spoken word from image sequences usually require complicated models of the mouth and its motions. Here we describe a new approach based on a fast mathematical morphology transform called the sieve. We form statistics of scale measurements in one and two dimensions and these are used as a feature vector for standard Hidden Markov Models (HMMs).
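The sieve itself is a recursive morphological/median decomposition; as a loose stand-in for it (not the paper's algorithm), a flat-structuring-element opening granulometry illustrates the same idea of attributing signal "mass" to scales, which can then be collected into a feature vector for an HMM. Function names here are illustrative:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def opening(x, s):
    """Grey-level opening of a 1D signal with a flat window of length s."""
    if s <= 1:
        return np.asarray(x, float).copy()
    l, r = (s - 1) // 2, s // 2
    # erosion: sliding minimum over a window of length s
    er = sliding_window_view(np.pad(x, (l, r), mode='edge'), s).min(axis=-1)
    # dilation with the reflected window, so the opening stays anti-extensive
    return sliding_window_view(np.pad(er, (r, l), mode='edge'), s).max(axis=-1)

def scale_spectrum(x, max_scale):
    """Signal mass removed at each scale: a crude analogue of a granule spectrum."""
    sums = [opening(x, s).sum() for s in range(1, max_scale + 1)]
    return {s: sums[s - 2] - sums[s - 1] for s in range(2, max_scale + 1)}
```

A bright feature of width w in a scan-line of the mouth region contributes its mass to scale w + 1, the first opening that removes it; histograms of such scale measurements form the observation vectors.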


International Conference on Acoustics, Speech, and Signal Processing | 2003

Near-videorealistic synthetic visual speech using non-rigid appearance models

Barry-John Theobald; Gavin C. Cawley; Iain A. Matthews; J. Andrew Bangham

We present work towards videorealistic synthetic visual speech using non-rigid appearance models. These models are used to track a talking face enunciating a set of training sentences. The resultant parameter trajectories are used in a concatenative synthesis scheme, where samples of original data are extracted from a corpus and concatenated to form new unseen sequences. Here we explore the effect on the synthesiser output of blending several synthesis units considered similar to the desired unit. We present preliminary subjective and objective results used to judge the realism of the system.
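One way to picture the blending step: time-normalise several candidate unit trajectories to a common length and take their weighted average in model-parameter space. This is a hedged sketch under that reading, with invented function names, not the paper's exact scheme:

```python
import numpy as np

def resample(traj, length):
    """Linearly resample a (frames, dims) parameter trajectory to `length` frames."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, length)
    return np.column_stack([np.interp(t_new, t_old, traj[:, d])
                            for d in range(traj.shape[1])])

def blend_units(candidates, weights, length):
    """Weighted average of several time-normalised candidate units."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    stack = np.stack([resample(c, length) for c in candidates])
    return np.einsum('k,kfd->fd', w, stack)
```

Weights would plausibly reflect each candidate's similarity to the desired unit; the blended trajectory is then concatenated with its neighbours as in the standard scheme.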


International Conference on Acoustics, Speech, and Signal Processing | 2002

Towards video realistic synthetic visual speech

Barry-John Theobald; J. Andrew Bangham; Iain A. Matthews; Gavin C. Cawley

In this paper we present initial work towards a video-realistic visual speech synthesiser based on statistical models of shape and appearance. A synthesised image sequence corresponding to an utterance is formed by concatenating synthesis units (in this case phonemes) drawn from a pre-recorded corpus of training data. A smoothing spline is applied to the concatenated parameters to ensure smooth transitions between frames, and the resulting parameters are applied to the model; early results look promising.
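The smoothing step can be illustrated with a discrete analogue of a smoothing spline: a Whittaker-style smoother that trades fidelity to the concatenated parameters against a penalty on second differences. This is an illustrative substitute, not the paper's spline fit:

```python
import numpy as np

def whittaker_smooth(y, lam=10.0):
    """Solve min_f ||y - f||^2 + lam * ||D2 f||^2, where D2 is the
    second-difference operator; larger lam gives a smoother result."""
    y = np.asarray(y, float)
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2, n) second-difference matrix
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)
```

Applied column-wise to each model parameter across the concatenated frames, it rounds off the discontinuities at unit boundaries while leaving the overall trajectory intact.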


Intelligent Virtual Agents | 2017

Predicting Head Pose in Dyadic Conversation

David Greenwood; Stephen D. Laycock; Iain A. Matthews

Natural movement plays a significant role in realistic speech animation. Numerous studies have demonstrated the contribution visual cues make to the degree we, as human observers, find an animation acceptable. Rigid head motion is one visual mode that universally co-occurs with speech, and so it is a reasonable strategy to seek features from the speech mode to predict the head pose. Several previous authors have shown that prediction is possible, but experiments are typically confined to rigidly produced dialogue.


AVSP | 1997

COMBINING NOISE COMPENSATION WITH VISUAL INFORMATION IN SPEECH RECOGNITION

Stephen J. Cox; Iain A. Matthews; J. Andrew Bangham


AVSP | 1998

Lipreading Using Shape, Shading and Scale.

Iain A. Matthews; Timothy F. Cootes; Stephen J. Cox; Richard W. Harvey; J. Andrew Bangham


European Signal Processing Conference | 1998

Nonlinear scale decomposition based features for visual speech recognition

Iain A. Matthews; J. Andrew Bangham; Richard W. Harvey; Stephen J. Cox


AVSP | 2001

Visual speech synthesis using statistical models of shape and appearance

Barry-John Theobald; J. Andrew Bangham; Iain A. Matthews; Gavin C. Cawley

Collaboration


Dive into Iain A. Matthews's collaborations.

Top Co-Authors
Gavin C. Cawley, University of East Anglia
Stephen J. Cox, University of East Anglia
J. Andrew Bangham, University of East Anglia
Sarah Taylor, University of East Anglia
Barry-John Theobald, University of East Anglia
David Greenwood, University of East Anglia