Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Neil Cooke is active.

Publications


Featured research published by Neil Cooke.


International Conference on Acoustics, Speech, and Signal Processing | 2008

Gaze-contingent ASR for spontaneous, conversational speech: An evaluation

Neil Cooke; Martin J. Russell

There has been little work attempting to improve the recognition of spontaneous, conversational speech by adding information from a loosely coupled modality. This study investigated the idea by integrating information from gaze into an ASR system. A probabilistic framework for multimodal recognition was formalised and applied to the specific case of integrating gaze and speech. Gaze-contingent ASR systems were developed from a baseline ASR system by redistributing language model probability mass according to visual attention. The best-performing systems had Word Error Rates similar to the baseline ASR system and showed an increase in keyword-spotting accuracy. The key finding was that the observed performance improvements were due to increased recognition accuracy for words associated with the visual field but not with the current focus of visual attention.
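
As a rough illustration of the redistribution step described above, the sketch below boosts the unigram probabilities of words associated with the current visual focus and renormalises. This is a toy sketch, not the authors' system: the vocabulary, probabilities and boost factor are invented, and the published work operated over a full language model rather than a unigram table.

```python
# Toy sketch of gaze-contingent LM adaptation: scale up the probabilities of
# words tied to the current visual focus, then renormalise so the
# distribution still sums to one. All values here are illustrative.

def redistribute_lm_mass(lm, attended_words, boost=2.0):
    """Scale up gaze-associated words' probabilities and renormalise."""
    scaled = {w: p * (boost if w in attended_words else 1.0)
              for w, p in lm.items()}
    total = sum(scaled.values())
    return {w: p / total for w, p in scaled.items()}

baseline = {"the": 0.4, "cat": 0.2, "dog": 0.2, "sat": 0.2}
adapted = redistribute_lm_mass(baseline, attended_words={"cat"})
print(adapted)  # probability mass shifts towards "cat"
```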


International Conference on Human-Computer Interaction | 2013

A Novel Approach for Adaptive EEG Artefact Rejection and EOG Gaze Estimation

Mohammad Reza Haji Samadi; Neil Cooke

An adaptive system for Electroencephalography (EEG) artefact rejection and Electrooculogram (EOG) gaze estimation is proposed. The system feeds optical gaze information, and the accuracy of the EOG gaze classification, into an adaptive Independent Component Analysis (ICA) algorithm to improve EEG source separation. Finally, two evaluation methods based on EOG gaze estimation are suggested to assess the performance of the proposed system. The work will be of use to researchers considering BCI and eye-tracking paradigms in real-life applications.
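
The gaze-adaptive ICA above is specific to the authors, so the sketch below shows only the conventional ICA artefact-rejection pipeline it extends: unmix multi-channel EEG into independent components, zero out components flagged as ocular artefacts, and reconstruct a cleaned signal. The channel count, the flagged component index and the random data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
eeg = rng.standard_normal((5000, 8))      # synthetic EEG: (samples, channels)

ica = FastICA(n_components=8, random_state=0)
sources = ica.fit_transform(eeg)          # unmix into independent components

artefact_ics = [0]                        # hypothetical: IC flagged as ocular
sources[:, artefact_ics] = 0.0            # reject the artefact source
cleaned = ica.inverse_transform(sources)  # remix into cleaned EEG channels
```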


International Conference on Multimodal Interfaces | 2009

Cache-based language model adaptation using visual attention for ASR in meeting scenarios

Neil Cooke; Martin J. Russell

In a typical group meeting involving discussion and collaboration, people look at one another, at shared information resources such as presentation material, and at nothing in particular. In this work we investigate whether knowledge of what a person is looking at can improve the performance of Automatic Speech Recognition (ASR). A framework for cache Language Model (LM) adaptation is proposed, with the cache based on a person's Visual Attention (VA) sequence. The framework attempts to measure the appropriateness of adaptation from VA sequence characteristics. Evaluation on AMI Meeting Corpus data shows reduced LM perplexity. This work demonstrates the potential of cache-based LM adaptation using VA information for large-vocabulary ASR deployed in meeting scenarios.
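
A minimal sketch in the spirit of the framework above: a unigram cache built from words linked to recent visual-attention targets is interpolated with a background model, P(w) = lam * P_cache(w) + (1 - lam) * P_bg(w). The vocabulary, cache contents and mixing weight are invented for illustration, not taken from the paper.

```python
from collections import Counter

def cache_adapted_prob(word, background, va_cache, lam=0.3):
    """P(w) = lam * P_cache(w) + (1 - lam) * P_background(w)."""
    p_cache = Counter(va_cache)[word] / len(va_cache) if va_cache else 0.0
    return lam * p_cache + (1.0 - lam) * background.get(word, 0.0)

background = {"slide": 0.010, "agenda": 0.020, "budget": 0.005}
va_cache = ["budget", "budget", "slide"]   # words tied to recent gaze targets
print(cache_adapted_prob("budget", background, va_cache))  # boosted vs 0.005
```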


Virtual Reality | 2013

RORSIM: a warship collision avoidance 3D simulation designed to complement existing Junior Warfare Officer training

Neil Cooke; Robert Stone

Royal Navy Junior Warfare Officers (JWO) undergo a comprehensive training package to prepare them to be officers of the watch. One aspect of this training concerns their knowledge of the ‘Rules of the Road’ or ‘COLREGS’: the rules for the manoeuvring and signalling that approaching vessels use in order to avoid collision. The training and assessment exercises undertaken predominantly use non-interactive static materials. These do not exercise the required skill of reconciling information from maritime charts, radar displays and ‘out-of-the-window’ monitoring. Consequently, performance during assessment on the VR-based bridge simulator falls short. This paper describes the Rules of the Road SIMulator (RORSIM), a proof-of-concept interactive 3D (i3D) simulator developed to bridge the training gap between classroom teaching and VR bridge simulator assessment. RORSIM's differentiation and its key functionality in terms of visualisation, physics/interaction and game mechanics are influenced by the consideration of pedagogical learning models during requirements capture. This capture is formalised by a ‘Training Gap Use Case’, a graphical viewpoint using the Unified Modelling Language which can assist developers in requirements capture and in the development of i3D tools for existing training programmes. Key functionality, initial JWO feedback and a planned pilot study design are reported.


IEEE EMBS International Conference on Biomedical and Health Informatics | 2016

Automatic ERP classification in EEG recordings from task-related independent components

Zohreh Zakeri; Mohammad Reza Haji Samadi; Neil Cooke; Peter Jancovic

The Electroencephalography (EEG) signal contains information about a person's brain activity, including the Event-Related Potential (ERP), an evoked response to a task-related stimulus. EEG is contaminated by artefacts that degrade ERP classification performance. Independent Component Analysis (ICA) is normally employed to decompose EEG into independent components (ICs) associated with artefact and non-artefact sources. Sources identified as artefacts are removed and a cleaned EEG is reconstructed. This paper presents an alternative use of ICA on the EEG signal: extracting ERP features rather than reducing artefacts. Average ERP classification accuracy increases by 15%, to 83.9%, on clinical-grade EEG data from 9 participants, when compared with similar approaches using cleaned EEG. Additionally, the proposed method obtained better performance in comparison with the state-of-the-art xDAWN method.
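
A minimal sketch of the feature-extraction idea above, assuming synthetic data: fit ICA across all trials, project each epoch onto a component assumed to be task-related, and use its time course as the feature vector for a standard classifier. The shapes, the chosen component index and the SVM classifier are illustrative placeholders; the paper's component selection and classifier may differ.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
epochs = rng.standard_normal((100, 32, 128))  # (trials, channels, samples)
labels = rng.integers(0, 2, size=100)         # e.g. target vs non-target

# Fit ICA on all trials concatenated along time: (samples, channels).
ica = FastICA(n_components=32, random_state=0)
ica.fit(epochs.transpose(0, 2, 1).reshape(-1, 32))

# Use the time course of one component, assumed task-related (index 0 is a
# placeholder), as the per-trial ERP feature vector.
task_ic = 0
features = np.stack([ica.transform(e.T)[:, task_ic] for e in epochs])

clf = SVC().fit(features, labels)             # train the ERP classifier
```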


International Conference on Multimodal Interfaces | 2017

Multimodal affect recognition in an interactive gaming environment using eye tracking and speech signals

Ashwaq Alhargan; Neil Cooke; Tareq Binjammaz

This paper presents a multimodal affect recognition system for interactive virtual gaming environments using eye tracking and speech signals, captured in gameplay scenarios designed to elicit controlled affective states along the arousal and valence dimensions. A Support Vector Machine is employed as a classifier to detect these affective states from both modalities. The recognition results reveal that eye tracking is superior to speech for affect detection and that the two modalities are complementary in interactive gaming applications. This suggests that it is feasible to design an accurate multimodal recognition system that detects players' affective states from the eye-tracking and speech modalities in an interactive gaming environment. We emphasise the potential of integrating the proposed multimodal system into game interfaces to enhance interaction and provide an adaptive gaming experience.
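
A minimal sketch of feature-level fusion consistent with the setup above: concatenate eye-tracking and speech feature vectors and train a Support Vector Machine on arousal/valence classes. All features, dimensions and labels below are synthetic placeholders, not the authors' data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
gaze_feats = rng.standard_normal((200, 10))    # e.g. pupil/fixation statistics
speech_feats = rng.standard_normal((200, 20))  # e.g. prosodic features
labels = rng.integers(0, 4, size=200)          # arousal/valence quadrants

fused = np.hstack([gaze_feats, speech_feats])  # feature-level fusion
print(cross_val_score(SVC(), fused, labels, cv=5).mean())  # chance on noise
```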


Proceedings of the 2010 Workshop on Eye Gaze in Intelligent Human Machine Interaction | 2010

Mutual information as a variable to differentiate the roles of gaze in the multimodal interface

Neil Cooke; Ao Shen

In natural interaction, gaze assumes a variety of roles that a gaze-contingent interface may need to distinguish between. Information from other modalities has the potential to make this distinction easier. In this study, Mutual Information (MI) is proposed as a variable for distinguishing gaze roles using information from both gaze and speech. A pilot experiment is conducted in which different gaze behaviours are elicited from people using acoustic noise. Initial results demonstrate that MI distinguishes between gaze roles better than gaze characteristics alone. This work shows the potential of MI as a variable to help distinguish gaze roles in multimodal interfaces and highlights the need to account for acoustic noise when interpreting gaze in human-machine interactions.
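
A minimal sketch of the proposed measure, assuming both streams have been discretised into aligned time windows (gaze into region labels, speech into binary activity labels, both invented here): estimate the mutual information between the two label sequences.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(3)
gaze_regions = rng.integers(0, 4, size=500)       # fixated region per window

# Synthetic speech labels that follow the gaze 80% of the time.
follow = rng.random(500) < 0.8
speech_state = np.where(follow, (gaze_regions > 1).astype(int),
                        rng.integers(0, 2, size=500))

print(mutual_info_score(gaze_regions, speech_state))  # MI in nats
```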


International Symposium on Information Technology | 2008

An eye-tracking study of estimation accuracy: Examining cerebellar tumours from Magnetic Resonance Spectroscopy graphs

Kuryati Kipli; Theodoros N. Arvanitis; Neil Cooke; Lisa M. Harris

Using an eye tracker, this paper investigates how accurately people measure the peaks on Magnetic Resonance Spectroscopy (MRS) graphs that correspond to chemical quantities, and how this accuracy affects algorithm-based diagnosis. In the experiment, three participants each examined 20 MRS graphs to estimate the peaks of chemical quantities in the brain for indications of abnormalities associated with Cerebellar Tumours (CT). The status of each MRS result was then verified using a decision algorithm. During the experiment, the participants' peak measurements and eye movements were recorded. Results show that people make estimation errors, in the form of underestimates and/or overestimates. This preliminary investigation provides a proof of concept for the use of eye-tracking technology as the basis for expanded CT diagnosis. Nevertheless, it is tempting to speculate on the potential usefulness of these results if verified by larger samples.
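
On the algorithmic side of the task above, a minimal sketch: detect peaks on a synthetic MRS-like trace with a standard peak finder and compare a hypothetical participant estimate against the detected peak height to obtain an under- or over-estimation error. The trace and the human estimate are invented for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

x = np.linspace(0, 10, 1000)
trace = np.exp(-8 * (x - 3) ** 2) + 0.5 * np.exp(-8 * (x - 7) ** 2)
trace += np.random.default_rng(4).normal(0.0, 0.01, x.size)  # measurement noise

peaks, props = find_peaks(trace, height=0.1)   # algorithmic peak detection
human_estimate = 0.9                           # hypothetical participant reading
error = human_estimate - props["peak_heights"].max()
print(f"estimation error: {error:+.3f}")       # sign shows under/over-estimation
```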


Eye Tracking Research & Applications | 2004

Poster abstract: evaluation of hidden Markov models robustness in uncovering focus of visual attention from noisy eye-tracker data

Neil Cooke; Martin J. Russell; Antje S. Meyer

Eye position, captured via an eye tracker, can uncover the focus of visual attention by classifying eye movements into fixations, pursuit or saccades [Duchowski 2003], with the former two indicating foci of visual attention. Such classification requires all other variability in eye-tracking data, from sensor error to other eye movements (such as microsaccades, nystagmus and drifts), to be accounted for effectively. The hidden Markov model provides a useful way of uncovering the focus of visual attention from eye position when the user undertakes visually oriented tasks, allowing variability in eye-tracking data to be modelled as a random variable.
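
A minimal sketch of the modelling idea above, using the third-party `hmmlearn` package: fit a two-state Gaussian HMM to a synthetic eye-velocity trace so that low-velocity samples (fixation/pursuit-like) and high-velocity samples (saccade-like) are decoded as separate hidden states. The velocity statistics are invented, not taken from the poster.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(5)
# Synthetic velocity trace: slow samples (fixation-like) around a fast burst
# (saccade-like), in degrees per second.
vel = np.concatenate([rng.normal(1, 0.3, 300),
                      rng.normal(30, 5, 20),
                      rng.normal(1, 0.3, 300)]).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="diag",
                  n_iter=50, random_state=0)
hmm.fit(vel)
states = hmm.predict(vel)   # decoded labels: fixation-like vs saccade-like
```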


International Conference on User Modeling, Adaptation, and Personalization | 2007

Visual Attention in Open Learner Model Presentations: An Eye-Tracking Investigation

Susan Bull; Neil Cooke; Andrew Mabbott

Collaboration


Dive into Neil Cooke's collaborations.

Top Co-Authors

Robert Stone (University of Birmingham)
Andrew Howes (University of Birmingham)
Ao Shen (University of Birmingham)
Antje S. Meyer (University of Birmingham)
Natan Morar (University of Birmingham)
Pia Rotshtein (University of Birmingham)