Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Alistair Sutherland is active.

Publication


Featured research published by Alistair Sutherland.


Pattern Recognition Letters | 2009

Modelling and segmenting subunits for sign language recognition based on hand motion analysis

Junwei Han; George Awad; Alistair Sutherland

Modelling and segmenting subunits is one of the important topics in sign language study. Many scholars have proposed functional definitions of subunits from a linguistic point of view, while efficiently implementing them with computer vision techniques remains a challenge. On the other hand, a number of subunit segmentation approaches have been investigated for vision-based sign language recognition, but their subunits either lack linguistic support or are ill-defined. In this paper, we attempt to define and segment subunits using computer vision techniques in a way that can also be explained by sign language linguistics. A subunit is first defined as one continuous visual hand action in time and space, comprising a series of interrelated consecutive frames. Then, a simple but efficient solution is developed to detect subunit boundaries using hand motion discontinuity. Finally, temporal clustering by dynamic time warping is adopted to merge similar segments and refine the results. The presented work needs no prior knowledge of the types of signs or the number of subunits and is more robust to variation in signer behaviour. Furthermore, it correlates highly with the definition of syllables in sign language while sharing characteristics of syllables in spoken languages. A set of comprehensive experiments on real-world signing videos demonstrates the effectiveness of the proposed model.
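The segmentation pipeline described above (boundary detection from hand-motion discontinuity, followed by DTW-based temporal clustering) can be illustrated with a minimal sketch. The speed threshold, merge threshold and greedy merging strategy below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def dtw_distance(a, b):
    """Plain dynamic time warping distance between two trajectories of shape (n, d)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def segment_subunits(hand_positions, speed_thresh=1.5):
    """Cut a (n_frames, 2) hand trajectory at frames where the hand speed drops
    below a threshold, a crude stand-in for 'motion discontinuity'."""
    hand_positions = np.asarray(hand_positions, dtype=float)
    speed = np.linalg.norm(np.diff(hand_positions, axis=0), axis=1)
    cuts = [0] + [i for i in range(1, len(speed)) if speed[i] < speed_thresh] + [len(hand_positions)]
    return [hand_positions[s:e] for s, e in zip(cuts[:-1], cuts[1:]) if e - s > 1]

def merge_similar(segments, merge_thresh=10.0):
    """Greedily merge adjacent segments whose DTW distance is small (refinement step)."""
    merged = [segments[0]]
    for seg in segments[1:]:
        if dtw_distance(merged[-1], seg) < merge_thresh:
            merged[-1] = np.vstack([merged[-1], seg])
        else:
            merged.append(seg)
    return merged
```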


International Symposium on Visual Computing | 2006

Real time hand gesture recognition including hand segmentation and tracking

Thomas Coogan; George Awad; Junwei Han; Alistair Sutherland

In this paper we present a system that performs automatic gesture recognition. The system consists of two main components: (i) a unified technique for segmentation and tracking of face and hands, using a skin detection algorithm together with occlusion handling between skin objects to keep track of the status of the occluded parts; this is realised by combining three useful features, namely colour, motion and position; (ii) a static and dynamic gesture recognition system. Static gesture recognition is achieved using a robust hand shape classification, based on PCA subspaces, that is invariant to scale and to small translation and rotation transformations. Combining hand shape classification with position information and using DHMMs allows us to accomplish dynamic gesture recognition.
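To illustrate the PCA-subspace approach to static hand-shape classification, here is a hypothetical sketch: one eigenspace is fitted per hand-shape class and a query image is assigned to the class whose subspace reconstructs it with the smallest error. The class structure, image normalisation and number of components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

class PCASubspaceClassifier:
    """One PCA subspace per hand-shape class; classify by reconstruction error."""

    def __init__(self, n_components=10):
        self.n_components = n_components
        self.models = {}

    def fit(self, images_by_class):
        # images_by_class: {label: array of shape (n_samples, n_pixels)}
        for label, X in images_by_class.items():
            self.models[label] = PCA(n_components=self.n_components).fit(X)

    def predict(self, x):
        # x: flattened, scale-normalised hand image of shape (n_pixels,)
        errors = {}
        for label, pca in self.models.items():
            recon = pca.inverse_transform(pca.transform(x[None, :]))[0]
            errors[label] = np.linalg.norm(x - recon)
        return min(errors, key=errors.get)
```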


International Conference on Automatic Face and Gesture Recognition | 2006

Automatic Skin Segmentation for Gesture Recognition Combining Region and Support Vector Machine Active Learning

Junwei Han; George M. Awad; Alistair Sutherland; Hai Wu

Skin segmentation is the cornerstone of many applications such as gesture recognition, face detection, and objectionable image filtering. In this paper, we address the skin segmentation problem for gesture recognition. Initially, given a gesture video sequence, a generic skin model is applied to the first few frames to automatically collect training data. Then, an SVM classifier based on active learning is used to identify the skin pixels. Finally, the results are improved by incorporating region segmentation. The proposed algorithm is fully automatic and adapts to different signers. We have tested our approach on the ECHO database; compared with other existing algorithms, our method achieves better performance.
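A hypothetical sketch of the bootstrapping idea: a generic skin-colour rule labels pixels in the first few frames, and those labels train an SVM that is then applied to later frames. The colour thresholds, sample size and kernel settings are placeholders rather than the paper's values, and the region-segmentation refinement step is omitted.

```python
import numpy as np
from sklearn.svm import SVC

def generic_skin_mask(frame_rgb):
    """Very rough generic skin rule in RGB space (placeholder thresholds)."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (np.abs(r - g) > 15)

def train_skin_svm(first_frames, max_samples=5000):
    """Label pixels of the first frames with the generic model, then fit an SVM.
    Assumes the sampled pixels contain both skin and non-skin examples."""
    pixels = np.vstack([f.reshape(-1, 3) for f in first_frames]).astype(float) / 255.0
    labels = np.concatenate([generic_skin_mask(f).reshape(-1) for f in first_frames])
    idx = np.random.choice(len(pixels), size=min(max_samples, len(pixels)), replace=False)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(pixels[idx], labels[idx])
    return clf

def segment_frame(clf, frame_rgb):
    """Classify every pixel of a new frame as skin / non-skin."""
    X = frame_rgb.reshape(-1, 3).astype(float) / 255.0
    return clf.predict(X).reshape(frame_rgb.shape[:2])
```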


International Conference on Pattern Recognition | 1994

Real-time facial-feature tracking based on matching techniques and its applications

Hiroshi Sako; Mark Whitehouse; Anthony Smith; Alistair Sutherland

This paper describes a method of real-time facial-feature extraction based on matching techniques. The method is composed of facial-area and mouth-area extraction using colour-histogram matching, and eye-area extraction using template matching. By combining these methods, we can achieve real-time processing, user-independent recognition and tolerance to changes in the environment. The paper also touches on neural networks that can extract features for recognising the shape of facial parts. The methods were implemented in an experimental image-processing system, and we discuss cases in which the system is applied to a man-machine interface using facial gestures and to sign language translation.
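A minimal OpenCV sketch of the two matching techniques mentioned: colour-histogram back-projection to locate the face/mouth colour regions, and normalised cross-correlation template matching for the eyes. The histogram size and the commented file paths are placeholders.

```python
import cv2

def locate_by_colour_histogram(frame_bgr, model_hist):
    """Back-project a hue histogram of the facial area onto the frame;
    high response marks pixels whose colour matches the model."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.calcBackProject([hsv], [0], model_hist, [0, 180], 1)

def locate_eye_by_template(frame_gray, eye_template):
    """Normalised cross-correlation template match; returns the top-left corner of the best match."""
    result = cv2.matchTemplate(frame_gray, eye_template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc

# Usage sketch (placeholder paths):
# face_patch = cv2.cvtColor(cv2.imread("face_sample.png"), cv2.COLOR_BGR2HSV)
# hist = cv2.calcHist([face_patch], [0], None, [32], [0, 180])
# cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
```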


International Gesture Workshop | 2003

A Dynamic Model for Real-Time Tracking of Hands in Bimanual Movements

Atid Shamaie; Alistair Sutherland

The problem of hand tracking in the presence of occlusion is addressed. In bimanual movements the hands tend to be synchronised effortlessly. Different aspects of this synchronisation are the basis of our research to track the hands. The spatial synchronisation in bimanual movements is modelled by the position and the temporal synchronisation by the velocity and acceleration of each hand. Based on a dynamic model, we introduce algorithms for occlusion detection and hand tracking.
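A hypothetical sketch of how bimanual synchronisation might signal an occlusion: the two hands' velocities are monitored, and a simultaneous pause while the hands are very close is taken as a crude indicator that hand-hand occlusion has begun. The thresholds are illustrative only, not the authors' model.

```python
import numpy as np

def hand_velocities(positions):
    """Frame-to-frame velocity of one hand given its (n_frames, 2) positions."""
    return np.diff(np.asarray(positions, dtype=float), axis=0)

def detect_occlusion(left_positions, right_positions, pause_thresh=0.5, overlap_thresh=20.0):
    """Flag frames where both hands pause simultaneously while being very close,
    taken here as a crude indicator of hand-hand occlusion."""
    vl = np.linalg.norm(hand_velocities(left_positions), axis=1)
    vr = np.linalg.norm(hand_velocities(right_positions), axis=1)
    dist = np.linalg.norm(
        np.asarray(left_positions, dtype=float)[1:] - np.asarray(right_positions, dtype=float)[1:],
        axis=1,
    )
    return (vl < pause_thresh) & (vr < pause_thresh) & (dist < overlap_thresh)
```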


International Conference on Pattern Recognition | 2006

A Unified System for Segmentation and Tracking of Face and Hands in Sign Language Recognition

George Awad; Junwei Han; Alistair Sutherland

This paper presents a unified system for segmentation and tracking of face and hands in sign language recognition using a single camera. Unlike much related work that uses colour gloves, we detect skin by combining three useful features: colour, motion and position. Together, these features represent the skin-colour pixels that are more likely to be foreground pixels and lie within a predicted position range. We extend previous research on occlusion detection to handle occlusion between any of the skin objects using a Kalman-filter-based algorithm. The tracking improves the segmentation by reducing the search space, and the segmentation enhances the overall tracking process. The algorithm is tested on several video sequences from a standard database and achieves a very low error rate.
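The tracking component relies on a Kalman filter to predict where each skin object (face or hand) should be, which constrains the search space and helps resolve occlusions. Below is a generic constant-velocity Kalman predictor as a sketch of that kind of component; the noise settings are assumptions, not the paper's values.

```python
import numpy as np

class ConstantVelocityKalman:
    """2-D constant-velocity Kalman filter for predicting a skin object's centroid."""

    def __init__(self, process_noise=1e-2, measurement_noise=1.0):
        self.x = np.zeros(4)                  # state: [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.array([[1, 0, 1, 0],
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = process_noise * np.eye(4)
        self.R = measurement_noise * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                     # predicted centroid

    def update(self, measured_centroid):
        z = np.asarray(measured_centroid, dtype=float)
        y = z - self.H @ self.x               # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```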


Image and Vision Computing | 2005

Hand tracking in bimanual movements

Atid Shamaie; Alistair Sutherland

A general hand-tracking algorithm is presented for tracking hands in bimanual movements. The problem is approached from a neuroscience point of view. Using a dynamic model and several motor-control phenomena, the movement of the hands during a bimanual movement is recognised. By capturing the hands' velocities and recognising the bimanual synchronisation, the hands are tracked and reacquired across different types of movement. This includes tracking the hands when they are separated and reacquiring them at the end of hand-hand occlusion. Different applications are demonstrated, including active vision, where the camera view direction and position change.


International Conference on Image Processing | 2009

Novel boosting framework for subunit-based sign language recognition

George Awad; Junwei Han; Alistair Sutherland

Recently, a promising research direction has emerged in sign language recognition (SLR) aimed at breaking up signs into manageable subunits. This paper presents a novel sign language learning technique based on boosted subunits. Three main contributions distinguish the proposed work from traditional approaches: 1) A novel boosting framework is developed to recognise signs. The learning is based on subunits instead of whole signs, which is more scalable for the recognition task. 2) Feature selection is performed to learn a small set of discriminative combinations of subunits and sign language features. 3) A joint learning strategy is adopted to share subunits across sign classes, which leads to better-performing classifiers. Our experiments show that, compared to Dynamic Time Warping (DTW) applied to the whole sign, the proposed technique gives better results.
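A hypothetical sketch of boosting over subunit-level weak classifiers: each weak learner is a thresholded DTW distance to one subunit template, and AdaBoost-style reweighting selects a small, discriminative combination. The weak-learner form and the function below are assumptions, not the paper's exact formulation.

```python
import numpy as np

def boost_subunit_classifiers(score_matrix, labels, n_rounds=10):
    """Discrete AdaBoost over precomputed weak decisions.

    score_matrix: (n_samples, n_weak) array of +/-1 decisions, where column j is
                  a threshold on the DTW distance to subunit template j.
    labels:       (n_samples,) array of +/-1 class labels.
    Returns the chosen weak-classifier indices and their weights.
    """
    n_samples, n_weak = score_matrix.shape
    w = np.full(n_samples, 1.0 / n_samples)
    chosen, alphas = [], []
    for _ in range(n_rounds):
        errors = np.array([(w * (score_matrix[:, j] != labels)).sum() for j in range(n_weak)])
        j_best = int(np.argmin(errors))
        err = np.clip(errors[j_best], 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * labels * score_matrix[:, j_best])   # reweight misclassified samples
        w /= w.sum()
        chosen.append(j_best)
        alphas.append(alpha)
    return chosen, alphas
```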


Applied Imagery Pattern Recognition Workshop | 2001

Graph-based matching of occluded hand gestures

Atid Shamaie; Alistair Sutherland

Occlusion is an unavoidable issue in most areas of machine vision, and recognition of partially occluded hand gestures is an important problem. In this paper a new algorithm is proposed for the recognition of occluded and non-occluded hand gestures based on matching the graphs of gestures in an eigenspace.
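One way to picture the eigenspace-graph idea is sketched below: gesture frames are projected into a shared PCA eigenspace, the projected frames form the nodes of a trajectory graph linked in temporal order, and two gestures are compared with a simple node-to-node matching cost. This construction is an assumption for illustration, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA

def gesture_graph(frames, pca):
    """Project each (flattened) frame into the eigenspace; nodes are projections,
    edges link consecutive frames in time."""
    nodes = pca.transform(frames)                       # (n_frames, n_components)
    edges = [(i, i + 1) for i in range(len(nodes) - 1)]
    return nodes, edges

def graph_match_cost(nodes_a, nodes_b):
    """Greedy node-to-node matching cost between two gesture graphs
    (a crude proxy for full graph matching)."""
    cost = 0.0
    for node in nodes_a:
        cost += np.min(np.linalg.norm(nodes_b - node, axis=1))
    return cost / len(nodes_a)

# Usage sketch: fit one PCA over all training frames, build a graph per gesture,
# and label a query gesture by the training graph with the lowest matching cost.
```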


International Conference on Multimodal Interfaces | 2015

A Multimodal System for Public Speaking with Real Time Feedback

Fiona Dermody; Alistair Sutherland

We have developed a multimodal prototype for public speaking with real-time feedback using the Microsoft Kinect. Effective speaking involves the use of gesture, facial expression, posture and voice, as well as the spoken word; these modalities combine to give the appearance of self-confidence in the speaker. The initial prototype detects body pose, facial expressions and voice. Visual and text feedback is displayed to the user in real time via a video panel, an icon panel and a text feedback panel. The user can also set and view elapsed time during their speaking performance. Real-time feedback is given on gaze direction, body pose and gesture, vocal tonality, vocal dysfluencies and speaking rate.

Collaboration


Dive into Alistair Sutherland's collaboration.

Top Co-Authors

Junwei Han
Northwestern Polytechnical University

George Awad
National Institute of Standards and Technology

Dahai Yu
Dublin City University