William W. Abbott
Imperial College London
Publications
Featured research published by William W. Abbott.
International IEEE/EMBS Conference on Neural Engineering | 2015
Sofia Ira Ktena; William W. Abbott; A. Aldo Faisal
User safety must be ensured throughout the training and evaluation of brain-machine interfaces. In this study, a virtual reality software system was built to create a safe environment in which the performance of wheelchair control interfaces can be tested and compared. We use it to evaluate our eye-tracking input methodology, a promising solution for hands-free wheelchair navigation owing to the abundance of control commands it offers and its intuitive nature. Natural eye movements have long been considered to reflect cognitive processes and are highly correlated with user intentions. The sequence of gaze locations during navigation is therefore recorded and analysed to uncover patterns in saccadic movements. Moreover, this study compares different eye-based solutions that have previously been implemented and proposes a new, more natural approach. Preliminary results on N = 6 healthy subjects indicate that the proposed free-view solution leads to 18.4% faster completion of the task (440 sec) benchmarked against a naive free-view approach.
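The abstract does not state how the saccadic patterns were extracted; a common starting point is velocity-threshold (I-VT) classification of the recorded gaze sequence. The sketch below is a minimal illustration of that idea - the 30 deg/s threshold, 60 Hz sample rate, and function name are assumptions, not details from the paper.

```python
import numpy as np

def detect_saccades(gaze_xy, sample_rate_hz=60.0, velocity_threshold=30.0):
    """Label each gaze sample as saccadic (True) or fixational (False).

    gaze_xy: (N, 2) gaze positions in degrees of visual angle.
    velocity_threshold: angular speed (deg/s) above which a sample is
        classed as part of a saccade (I-VT classification).
    """
    speeds = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * sample_rate_hz
    speeds = np.concatenate([[0.0], speeds])   # pad to match input length
    return speeds > velocity_threshold

# Demo: a fixation at (0, 0) that jumps to (10, 0) halfway through.
t = np.linspace(0.0, 1.0, 60)
trace = np.column_stack([np.where(t < 0.5, 0.0, 10.0), np.zeros_like(t)])
print(detect_saccades(trace).sum(), "of", len(trace), "samples are saccadic")
```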
International IEEE/EMBS Conference on Neural Engineering | 2013
William W. Abbott; Alan Zucconi; A. Aldo Faisal
Current BMI technology requires significant development before patients with severe motor disabilities can regain vital degrees of freedom in everyday life. State-of-the-art systems are expensive, require long training times, and suffer from low patient uptake. We propose a non-invasive and ultra-low-cost alternative: action intention decoding from 3D gaze signals. Building on our previous work, we present here a large field study (N = 176 subjects) of how efficiently our approach allows subjects, from first use, to operate our BMI on the Pong BMI benchmark task. Within the first 30 seconds of first-time use, the majority of subjects were able to play the arcade game Pong against a computer. Subjects made on average 8.5±7.2 ball returns, compared with the chance level of 2.6±2.5 obtained without input (mean±SD), and almost 5% even managed to beat the computer, despite never having used their eye movements as a control input. This performance was achieved with members of the public at a science engagement event rather than under stringent lab conditions, with minimal system calibration (30 s) and negligible user control learning (a 5 s countdown before the ball was released). This demonstrates the intuitive nature of gaze control and thus the clinical applicability of our approach.
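The paper does not publish its decoder, but the simplest control law consistent with the description is to move the paddle toward the current vertical gaze estimate on each frame. A minimal sketch, with illustrative gains and field dimensions that are not from the paper:

```python
def update_paddle(paddle_y, gaze_y, max_step=15.0, field_height=480.0):
    """Move the paddle one frame-step toward the vertical gaze estimate.

    max_step acts as a speed limit and also smooths gaze jitter;
    positions are in pixels. All values here are illustrative only.
    """
    step = max(-max_step, min(max_step, gaze_y - paddle_y))
    return max(0.0, min(field_height, paddle_y + step))  # stay in the field

# One control-loop frame: gaze at y = 300, paddle at y = 100.
print(update_paddle(100.0, 300.0))  # -> 115.0, i.e. 15 px toward the gaze
```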
IEEE International Conference on Biomedical Robotics and Biomechatronics | 2016
Sabine Dziemian; William W. Abbott; A. Aldo Faisal
Eye tracking is a powerful means of assistive technology for people with movement disorders, paralysis, and amputation. We present a highly intuitive eye-tracking-controlled robot arm that operates in 3-dimensional space based on the user's gaze target point and enables tele-writing and drawing. Usability and intuitiveness were assessed in a tele-writing experiment with 8 subjects, who learned to operate the system within minutes of first use. The subjects were naive to the system and the task, and had to write three letters on a whiteboard with a whiteboard pen attached to the robot arm's endpoint. They were instructed to imagine they were writing with the pen and to look where the pen should go, writing the letters as quickly and as accurately as possible given a letter-size template. Subjects performed the task with facility and accuracy, and movements of the arm did not interfere with their ability to control their visual attention, enabling smooth writing. Over five consecutive trials there was a significant decrease in the total time taken and the total number of commands sent to move the robot arm from the first to the second trial, but no further improvement thereafter, suggesting that within writing six letters subjects had mastered control of the system. Our work demonstrates that eye tracking is a powerful means of controlling robot arms in closed loop and in real time, outperforming other invasive and non-invasive approaches to brain-machine interfaces in terms of calibration time (<2 minutes), training time (<10 minutes), and interface technology cost. We suggest that gaze-based decoding of action intention may well become one of the most efficient ways to interface with robotic actuators - i.e. brain-robot interfaces - and prove useful beyond paralysed and amputee users for the general teleoperation of robots and exoskeletons in human augmentation.
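The abstract counts "commands sent", which suggests that gaze samples are gated into discrete arm commands rather than streamed raw. A hypothetical sketch of such gating - the dead-band, gain, and function name are assumptions for illustration, not the authors' method:

```python
import numpy as np

def gaze_to_command(gaze_xyz, pen_xyz, dead_band_m=0.01, gain=0.5):
    """Turn a 3D gaze target into an incremental end-effector command.

    Returns None while the gaze target stays within dead_band_m of the
    pen tip, so fixational jitter does not generate spurious commands;
    otherwise a proportional step toward the gaze target is commanded.
    """
    error = np.asarray(gaze_xyz, dtype=float) - np.asarray(pen_xyz, dtype=float)
    if np.linalg.norm(error) < dead_band_m:
        return None
    return gain * error

print(gaze_to_command([0.285, 0.10, 0.05], [0.28, 0.10, 0.05]))  # None: on target
print(gaze_to_command([0.40, 0.10, 0.05], [0.28, 0.10, 0.05]))   # step toward gaze
```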
International Conference on Robotics and Automation | 2016
Pablo M. Tostado; William W. Abbott; A. Aldo Faisal
Eye movements are closely related to motor actions and can hence be used to infer motor intentions. For paralysed and impaired patients with severe motor deficiencies, eye movements are in some cases the only means of communicating and interacting with the environment. Despite this, eye-tracking technology still sees very limited use as a human-robot control interface, and its applicability is largely restricted to simple 2D tasks on screen-based interfaces that do not suffice for natural physical interaction with the environment. We propose that decoding gaze position in 3D space rather than in 2D yields a much richer "spatial cursor" signal that allows users to perform everyday tasks such as grasping and moving objects via gaze-based robotic teleoperation. Eye-tracker calibration in 3D is usually slow; we demonstrate here that by using a full 3D trajectory generated by a robotic arm for system calibration, rather than a simple grid of discrete points, gaze calibration in all three dimensions can be achieved quickly and with high accuracy. We perform the non-linear regression from eye image to 3D end point using Gaussian Process regressors, which allows us to handle uncertainty in the end-point estimates gracefully. Our telerobotic system uses a multi-joint robot arm with a gripper and is integrated with our in-house "GT3D" binocular eye tracker. This prototype system was evaluated in a test environment with 7 users, yielding gaze-estimation errors of less than 1 cm in the horizontal, vertical, and depth dimensions, and less than 2 cm in overall 3D Euclidean distance. Users reported intuitive, low-cognitive-load control of the system from their first trial and were immediately able to simply look at an object and command the robot gripper through a wink to "grasp this" object.
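A minimal sketch of the regression step, using scikit-learn's Gaussian Process implementation in place of whatever the authors used; the 4-dimensional eye-image features and the synthetic calibration data are stand-in assumptions for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Stand-in calibration set: 4 eye-image features per sample (e.g. pupil
# centres from two cameras - an assumption) against known 3D positions of
# the robot end-effector sampled along its calibration trajectory.
rng = np.random.default_rng(0)
eye_features = rng.uniform(-1, 1, size=(200, 4))
true_points = eye_features @ rng.normal(size=(4, 3))          # metres
noisy_points = true_points + 0.005 * rng.normal(size=true_points.shape)

# WhiteKernel models measurement noise; predict(..., return_std=True)
# exposes the per-point uncertainty that lets end-point estimates be
# handled "gracefully", as the abstract puts it.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(eye_features, noisy_points)            # 3 output dims, shared kernel

estimate, std = gp.predict(eye_features[:1], return_std=True)
print("3D gaze estimate (m):", estimate[0], "+/- std:", std[0])
```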
BMC Neuroscience | 2011
William W. Abbott; A. Aldo Faisal
Advances in brain-machine interfaces (BMIs) hold the hope of restoring vital degrees of independence to patients with high-level neurological disorders, improving their quality of life while reducing their dependence on others [1]. Unfortunately, these emerging rehabilitative methods come at considerable clinical and post-clinical operational cost, beyond the means of the majority of patients [1]. Here we consider an alternative: eye movements. Eye movements provide a feasible basis for a BMI, as they tend to be spared by neurological disorders such as muscular dystrophy, high-level spinal injuries, and multiple sclerosis, because the oculomotor system is innervated from the midbrain rather than via the spinal cord. Eye tracking and gaze-based interaction is a long-established field, but cost, accuracy, and poor system integration have kept these systems from wide use. We have developed an ultra-low-cost 3D gaze tracker, based on mass-market video game equipment, that matches the performance of commercial systems 500 times as expensive. We developed a calibration method for 3D eye gaze that requires only a standard computer monitor rather than 3D equipment and no information on eye geometry, and that allows free head movement after calibration. Our method enables us to track eye movements off the computer screen, e.g. to drive a wheelchair or the end point of a prosthetic arm. Unlike other BMIs, training is virtually unnecessary, as the control intention can be read from natural eye movements. By tracking both eyes, a significantly higher information rate can be obtained, both by making 2D gaze estimates more accurate and by adding another spatial dimension in which to issue commands. Our ultra-low-cost 3D eye-tracking system operates at below 50 USD material cost with 0.5-1 degree resolution at a 120 Hz sample rate; we achieved this by reverse-engineering mass-marketed video console components. This high-speed accuracy allows us to drive a mouse cursor in real time at a data rate of over 1100 bits/s, while capturing depth allows us, in theory, to drive a continuous control user interface at a multiple of that rate. These information rates are orders of magnitude higher than those of other BMI approaches, e.g. non-invasive cortical methods (maximum 1.63 bits/sec) such as electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and near-infrared spectroscopy (NIRS); invasive cortical methods (maximum 3.30 bits/sec) such as MEAs and ECoG; and non-cortical, non-invasive methods (maximum 6.13 bits/sec) such as EMG, speech recognition, and mechanical switches [1,2]. Our delay-free, real-time control is thus sufficient to allow paralysed patients to play live-action video games, as we recently demonstrated to the public [3]. Moreover, the novel approach of using the 3D point of gaze makes the user's interaction with their surroundings more intuitive, without detracting from the eye's sensory function; in a wheelchair context, for example, the user simply looks where they would like to drive. With this low-cost device and the rich 3D gaze information, we provide an extremely intuitive and non-intrusive BMI alternative that is economically accessible to patients in both the developed and the developing world.
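As a back-of-the-envelope check on the quoted 1100 bits/s figure (the workspace size and the effective number of independent estimates per second below are assumptions, not values from the abstract):

```python
import math

# Assume a 40 deg x 30 deg workspace resolved at the tracker's 0.5 deg
# accuracy: the number of distinguishable 2D gaze targets.
targets = (40 / 0.5) * (30 / 0.5)        # 4800 positions
bits_per_estimate = math.log2(targets)   # ~12.2 bits per independent estimate

# Not every 120 Hz sample is independent; ~90 effective estimates per
# second already reproduce the quoted figure.
print(round(bits_per_estimate * 90))     # ~1101 bits/s
```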
Archive | 2016
P. Rente Lourenço; William W. Abbott; A. Aldo Faisal
Electroencephalography (EEG) is a widely used brain-signal recording technique. The information conveyed in these recordings is a useful tool in the diagnosis of some diseases and disturbances, in basic science, and in the development of non-invasive brain-machine interfaces (BMIs). However, the electrical recording setup has two major downsides: a. a poor signal-to-noise ratio and b. vulnerability to external and internal noise sources. One of the main sources of artefacts is eye movements, due to the electric dipole between the cornea and the retina. We have previously proposed that monitoring eye movements provides a complementary signal for BMIs. Here we propose a novel technique to remove eye-related artefacts from EEG recordings. We coupled eye tracking with EEG, allowing us to independently measure, through the eye tracker, when ocular artefact events occur and thus to clean them up in a targeted, "supervised" manner instead of using a "blind" artefact-correction technique. Three standard artefact-correction methods were applied in an event-driven, supervised manner - 1. Independent Component Analysis (ICA), 2. Wiener filtering, and 3. wavelet decomposition - and compared with "blind", unsupervised ICA clean-up. These are standard artefact-correction approaches implemented in many toolboxes and experimental EEG systems and could easily be applied by their users in an event-driven manner. Qualitative inspection of the cleaned-up traces already shows that the simple, targeted, event-driven clean-up outperforms the traditional "blind" approaches. We conclude that this justifies the small extra effort of performing simultaneous eye tracking with any EEG recording, enabling simple but targeted automatic artefact removal that preserves more of the original signal.
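A minimal sketch of the supervised, event-driven idea, using scikit-learn's FastICA in place of the toolboxes the authors mention: components whose power concentrates in eye-tracker-flagged windows are treated as ocular and removed. The threshold, array shapes, and synthetic demo are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

def supervised_ica_cleanup(eeg, artefact_mask, ratio=3.0):
    """Event-driven ICA artefact removal (minimal sketch).

    eeg: (n_samples, n_channels) EEG array.
    artefact_mask: boolean (n_samples,) array flagging samples during
        eye-tracker-detected ocular events (the 'supervision' signal).
    Components whose power during flagged samples exceeds `ratio` times
    their power elsewhere are treated as ocular and zeroed out.
    """
    ica = FastICA(random_state=0)
    sources = ica.fit_transform(eeg)               # (n_samples, n_components)
    power_in = (sources[artefact_mask] ** 2).mean(axis=0)
    power_out = (sources[~artefact_mask] ** 2).mean(axis=0)
    ocular = power_in > ratio * power_out          # eye-event-driven components
    sources[:, ocular] = 0.0
    return ica.inverse_transform(sources)

# Synthetic demo: 8-channel noise plus one ocular source in flagged windows.
rng = np.random.default_rng(1)
eeg = rng.normal(size=(1000, 8))
mask = np.zeros(1000, dtype=bool)
mask[200:260] = mask[700:760] = True
eeg[mask] += 10.0 * rng.normal(size=(mask.sum(), 1)) @ rng.normal(size=(1, 8))
cleaned = supervised_ica_cleanup(eeg, mask)
print("flagged-window RMS before/after:",
      eeg[mask].std().round(2), cleaned[mask].std().round(2))
```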
International Congress on Neurotechnology, Electronics and Informatics | 2014
P. Rente Lourenço; William W. Abbott; A. Aldo Faisal
Electroencephalography (EEG) is a widely used brain-signal recording technique. The information conveyed in these recordings can be an extremely useful tool in the diagnosis of some diseases and disturbances, as well as in the development of non-invasive brain-machine interfaces (BMIs). However, the non-invasive electrical recording setup has two major downsides: a. a poor signal-to-noise ratio and b. vulnerability to external and internal noise sources. One of the main sources of artefacts is eye movements, due to the electric dipole between the cornea and the retina. We have previously proposed that monitoring eye movements provides a complementary signal for BMIs. Here we propose a novel technique to remove eye-related artefacts from EEG recordings. We couple eye tracking with EEG, allowing us to independently measure when ocular artefact events occur and thus to clean them up in a targeted manner instead of using a "blind" artefact-correction technique. Three standard artefact-correction methods were applied in an event-driven, supervised manner - 1. Independent Component Analysis (ICA), 2. Wiener filtering, and 3. wavelet decomposition - and compared with "blind", unsupervised ICA clean-up. These are standard artefact-correction approaches implemented in many toolboxes and experimental EEG systems and could easily be applied by their users in an event-driven manner. Qualitative inspection of the cleaned-up traces already shows that the simple, targeted, event-driven clean-up outperforms the traditional "blind" approaches. We conclude that this justifies the small extra effort of performing simultaneous eye tracking with any EEG recording, enabling simple but targeted automatic artefact removal that preserves more of the original signal.
Journal of Neural Engineering | 2012
William W. Abbott; Aldo Ahmed Faisal
International IEEE/EMBS Conference on Neural Engineering | 2013
Nigel Sim; Constantinos Gavriel; William W. Abbott; A. Aldo Faisal
Journal of Vision | 2015
William W. Abbott; Andreas A. C. Thomik; A. Aldo Faisal