Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jason Farquhar is active.

Publications


Featured research published by Jason Farquhar.


The Journal of Nuclear Medicine | 2008

MRI-Based Attenuation Correction for PET/MRI: A Novel Approach Combining Pattern Recognition and Atlas Registration

Matthias Hofmann; Florian Steinke; Verena Scheel; Guillaume Charpiat; Jason Farquhar; Philip Aschoff; Michael Brady; Bernhard Schölkopf; Bernd J. Pichler

For quantitative PET information, correction of tissue photon attenuation is mandatory. Generally in conventional PET, the attenuation map is obtained from a transmission scan, which uses a rotating radionuclide source, or from the CT scan in a combined PET/CT scanner. In the case of PET/MRI scanners currently under development, insufficient space for the rotating source exists; the attenuation map can be calculated from the MR image instead. This task is challenging because MR intensities correlate with proton densities and tissue-relaxation properties, rather than with attenuation-related mass density. Methods: We used a combination of local pattern recognition and atlas registration, which captures global variation of anatomy, to predict pseudo-CT images from a given MR image. These pseudo-CT images were then used for attenuation correction, as the process would be performed in a PET/CT scanner. Results: For human brain scans, we show on a database of 17 MR/CT image pairs that our method reliably enables estimation of a pseudo-CT image from the MR image alone. On additional datasets of MRI/PET/CT triplets of human brain scans, we compare MRI-based attenuation correction with CT-based correction. Our approach enables PET quantification with a mean error of 3.2% for predefined regions of interest, which we found to be clinically not significant. However, our method is not specific to brain imaging, and we show promising initial results on 1 whole-body animal dataset. Conclusion: This method allows reliable MRI-based attenuation correction for human brain scans. Further work is necessary to validate the method for whole-body imaging.
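The local pattern-recognition half of this approach can be illustrated with a toy patch-based regressor. This is a hypothetical sketch, not the authors' method (which combines local patch features with atlas-registered position in a trained regression model): it simply predicts a CT-like value for an MR patch by averaging the CT values of the most similar patches in a small synthetic database. All names and numbers below are illustrative.

```python
import numpy as np

def predict_pseudo_ct(mr_patches, ct_values, query_patch, k=3):
    """Toy k-nearest-neighbour regression: estimate a CT-like value for an
    MR patch from a database of co-registered MR-patch / CT-value pairs."""
    dists = np.linalg.norm(mr_patches - query_patch, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest patches
    return float(ct_values[nearest].mean())  # average their CT values

# Tiny synthetic database of 2-voxel "patches"; all values are illustrative
mr_db = np.array([[0.10, 0.10], [0.90, 0.80], [0.85, 0.90], [0.15, 0.05]])
ct_db = np.array([40.0, 1000.0, 980.0, 45.0])

print(predict_pseudo_ct(mr_db, ct_db, np.array([0.88, 0.85]), k=2))  # 990.0
```

The real method must also handle the global anatomy captured by atlas registration; a purely local lookup like this cannot disambiguate tissues with similar MR intensity but different attenuation.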


International Conference on Machine Learning | 2005

The 2005 PASCAL visual object classes challenge

Mark Everingham; Andrew Zisserman; Christopher K. I. Williams; Luc Van Gool; Moray Allan; Christopher M. Bishop; Olivier Chapelle; Navneet Dalal; Thomas Deselaers; Gyuri Dorkó; Stefan Duffner; Jan Eichhorn; Jason Farquhar; Mario Fritz; Christophe Garcia; Thomas L. Griffiths; Frédéric Jurie; Daniel Keysers; Markus Koskela; Jorma Laaksonen; Diane Larlus; Bastian Leibe; Hongying Meng; Hermann Ney; Bernt Schiele; Cordelia Schmid; Edgar Seemann; John Shawe-Taylor; Amos J. Storkey; Sandor Szedmak

The PASCAL Visual Object Classes Challenge ran from February to March 2005. The goal of the challenge was to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). Four object classes were selected: motorbikes, bicycles, cars and people. Twelve teams entered the challenge. In this chapter we provide details of the datasets, algorithms used by the teams, evaluation criteria, and results achieved.


Journal of Neural Engineering | 2009

The brain–computer interface cycle

Marcel A. J. van Gerven; Jason Farquhar; Rebecca Schaefer; Rutger Vlek; Jeroen Geuze; Antinus Nijholt; Nick Ramsay; Pim Haselager; Louis Vuurpijl; Stan C. A. M. Gielen; Peter Desain

Brain-computer interfaces (BCIs) have attracted much attention recently, triggered by new scientific progress in understanding brain function and by impressive applications. The aim of this review is to give an overview of the various steps in the BCI cycle, i.e., the loop from the measurement of brain activity, classification of data, feedback to the subject and the effect of feedback on brain activity. In this article we will review the critical steps of the BCI cycle, the present issues and state-of-the-art results. Moreover, we will develop a vision on how recently obtained results may contribute to new insights in neurocognition and, in particular, in the neural representation of perceived stimuli, intended actions and emotions. Now is the right time to explore what can be gained by embracing real-time, online BCI and by adding it to the set of experimental tools already available to the cognitive neuroscientist. We close by pointing out some unresolved issues and present our view on how BCI could become an important new tool for probing human cognition.


Journal of Neural Engineering | 2009

Overlap and refractory effects in a brain–computer interface speller based on the visual P300 event-related potential

Suzanne Martens; N.J. Hill; Jason Farquhar; Bernhard Schölkopf

We reveal the presence of refractory and overlap effects in the event-related potentials in visual P300 speller datasets, and we show their negative impact on the performance of the system. This finding has important implications for how to encode the letters that can be selected for communication. However, we show that such effects are dependent on stimulus parameters: an alternative stimulus type based on apparent motion suffers less from the refractory effects and leads to an improved letter prediction performance.


Journal of Neural Engineering | 2011

P300 audio-visual speller

A. Belitski; Jason Farquhar; Peter Desain

The Farwell and Donchin matrix speller is well known as one of the highest-performing brain-computer interfaces (BCIs) currently available. However, its use of visual stimulation limits its applicability to users with normal eyesight. Alternative BCI spelling systems which rely on non-visual stimulation, e.g. auditory or tactile, tend to perform much more poorly and/or can be very difficult to use. In this paper we present a novel extension of the matrix speller, based on flipping the letter matrix, which allows us to use the same interface for visual, auditory or simultaneous visual and auditory stimuli. In this way we aim to allow users to utilize the best available input modality for their situation, that is, to use visual + auditory stimulation for best performance and to move smoothly to purely auditory stimulation when necessary, e.g. when disease causes the user's eyesight to deteriorate. Our results on seven healthy subjects demonstrate the effectiveness of this approach, with our modified visual + auditory stimulation slightly out-performing the classic matrix speller. The purely auditory system performance was lower than for visual stimulation, but comparable to other auditory BCI systems.


Neuroinformatics | 2013

Interactions Between Pre-Processing and Classification Methods for Event-Related-Potential Classification

Jason Farquhar; N.J. Hill

Detecting event-related potentials (ERPs) from single trials is critical to the operation of many stimulus-driven brain-computer interface (BCI) systems. The low strength of the ERP signal compared to the noise (due to artifacts and BCI-irrelevant brain processes) makes this a challenging signal detection problem. Previous work has tended to focus on how best to detect a single ERP type (such as the visual oddball response). However, the underlying ERP detection problem is essentially the same regardless of stimulus modality (e.g. visual or tactile), ERP component (e.g. P300 oddball response, or the error-potential), measurement system or electrode layout. To investigate whether a single ERP detection method might work for a wider range of ERP BCIs, we compare detection performance over a large corpus of more than 50 ERP BCI datasets whilst systematically varying the electrode montage, spectral filter, spatial filter and classifier training methods. We identify an interesting interaction between spatial whitening and regularised classification, which made detection performance independent of the choice of spectral filter low-pass frequency. Our results show that a pipeline consisting of spectral filtering, spatial whitening, and regularised classification gives near-maximal performance in all cases. Importantly, this pipeline is simple to implement and completely automatic, with no expert feature selection or parameter tuning required. Thus, we recommend this combination as a “best-practice” method for ERP detection problems.
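The recommended pipeline can be sketched on synthetic data. The following is an illustrative reimplementation, not the authors' code: spectral filtering is omitted for brevity, and the synthetic "ERP" is simply a fixed spatio-temporal pattern added to Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_whitener(X):
    """Fit a spatial whitening matrix from trials shaped
    (n_trials, n_channels, n_samples): decorrelates the channels."""
    Z = X.transpose(0, 2, 1).reshape(-1, X.shape[1])  # all time points x channels
    cov = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(1.0 / np.sqrt(vals + 1e-12)) @ vecs.T

def ridge_fit(F, y, lam=1.0):
    """Regularised linear classifier (ridge regression on +/-1 labels)."""
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

# Synthetic "ERP": a fixed spatio-temporal pattern whose sign is the label
n_trials, n_channels, n_samples = 200, 8, 50
pattern = np.outer(rng.standard_normal(n_channels), np.hanning(n_samples))
y = np.where(np.arange(n_trials) % 2 == 0, 1.0, -1.0)
X = 0.5 * rng.standard_normal((n_trials, n_channels, n_samples))
X += y[:, None, None] * pattern

# Spectral filtering is skipped here; the paper's finding is that whitening
# plus regularisation makes the low-pass choice largely irrelevant.
W = spatial_whitener(X)
Xw = np.einsum('ij,tjs->tis', W, X)   # apply the whitening to every trial
w = ridge_fit(Xw.reshape(n_trials, -1), y, lam=10.0)
acc = np.mean(np.sign(Xw.reshape(n_trials, -1) @ w) == y)
print(f"training accuracy: {acc:.2f}")
```

A faithful evaluation would of course use held-out trials; this sketch only shows how the whitening and regularised-classification stages compose.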


NeuroImage | 2011

Name that tune: decoding music from the listening brain.

Rebecca Schaefer; Jason Farquhar; Yvonne Blokland; Makiko Sadakata; Peter Desain

In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds, both individually and cross-participants, using only time domain information (the event-related potential, ERP). The best individual results are 70% correct in a seven-class problem while using single trials, and when using multiple trials we achieve 100% correct after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a general representation of each musical fragment over participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimulus that may contribute to the differences between the EEG responses to the pieces of music are discussed.
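The benefit of averaging over repeated presentations, as in the 100%-correct-after-six-presentations result, can be illustrated with a toy nearest-template classifier. Everything below (templates, noise level, classifier) is a hypothetical stand-in for the study's actual stimuli and methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "ERP templates": one flattened brain response per musical fragment
n_classes, n_features = 7, 120
templates = rng.standard_normal((n_classes, n_features))

def classify(trials):
    """Nearest-template classifier applied to the average over trials."""
    avg = trials.mean(axis=0)
    return int(np.argmax(templates @ avg))

def accuracy(n_presentations, n_test=300, noise=6.0):
    """Fraction correct when each test item is the average of
    n_presentations noisy repetitions of one fragment's template."""
    correct = 0
    for _ in range(n_test):
        c = int(rng.integers(n_classes))
        trials = templates[c] + noise * rng.standard_normal(
            (n_presentations, n_features))
        correct += classify(trials) == c
    return correct / n_test

# Averaging more presentations of the same stimulus raises accuracy
print(accuracy(1), accuracy(6))
```

Averaging n presentations shrinks the noise standard deviation by a factor of sqrt(n), which is why multi-trial classification can approach ceiling while single trials remain hard.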


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2014

Combined EEG-fNIRS Decoding of Motor Attempt and Imagery for Brain Switch Control: An Offline Study in Patients With Tetraplegia

Yvonne Blokland; Loukianos Spyrou; Dick H. J. Thijssen; Thijs M.H. Eijsvogels; W.N.J.M. Colier; Marianne J. Floor-Westerdijk; Rutger Vlek; Jörgen Bruhn; Jason Farquhar

Combining electrophysiological and hemodynamic features is a novel approach for improving current performance of brain switches based on sensorimotor rhythms (SMR). This study was conducted with a dual purpose: to test the feasibility of using a combined electroencephalogram/functional near-infrared spectroscopy (EEG-fNIRS) SMR-based brain switch in patients with tetraplegia, and to examine the performance difference between motor imagery and motor attempt for this user group. A general improvement was found when using both EEG and fNIRS features for classification as compared to using the single-modality EEG classifier, with average classification rates of 79% for attempted movement and 70% for imagined movement. For the control group, rates of 87% and 79% were obtained, respectively, where the “attempted movement” condition was replaced with “actual movement.” A combined EEG-fNIRS system might be especially beneficial for users who lack sufficient control of current EEG-based brain switches. The average classification performance in the patient group for attempted movement was significantly higher than for imagined movement using the EEG-only as well as the combined classifier, arguing for the case of a paradigm shift in current brain switch research.
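The core idea of multimodal fusion, concatenating feature sets before classification, can be shown in a hedged toy form: two synthetic feature sets each carry weak, independent class information, and a simple least-squares linear classifier is scored on each set and on their combination. All data and the classifier here are illustrative, not the study's actual features or methods.

```python
import numpy as np

rng = np.random.default_rng(2)

def lsq_accuracy(F, y):
    """Fit a least-squares linear classifier (with bias term) and report
    accuracy on the same data, purely for illustration."""
    F1 = np.column_stack([F, np.ones(len(F))])
    w, *_ = np.linalg.lstsq(F1, y, rcond=None)
    return float(np.mean(np.sign(F1 @ w) == y))

# Synthetic trials: each modality carries weak, independent class information
n = 400
y = np.where(rng.random(n) < 0.5, 1.0, -1.0)
eeg = y[:, None] * 0.5 + rng.standard_normal((n, 4))    # "EEG" features
fnirs = y[:, None] * 0.5 + rng.standard_normal((n, 2))  # "fNIRS" features

print("EEG only:  ", lsq_accuracy(eeg, y))
print("fNIRS only:", lsq_accuracy(fnirs, y))
print("combined:  ", lsq_accuracy(np.hstack([eeg, fnirs]), y))
```

When the two modalities' errors are independent, concatenation lets the classifier pool their evidence, mirroring the improvement the study found when adding fNIRS features to the EEG classifier.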


Journal of Neural Engineering | 2013

A multi-signature brain-computer interface: use of transient and steady-state responses.

Marianne Severens; Jason Farquhar; Jacques Duysens; Peter Desain

OBJECTIVE The aim of this paper was to increase the information transfer in brain-computer interfaces (BCI). Therefore, a multi-signature BCI was developed and investigated. Stimuli were designed to simultaneously evoke transient somatosensory event-related potentials (ERPs) and steady-state somatosensory potentials (SSSEPs), as well as the ERPs and SSSEPs in isolation. APPROACH Twelve subjects participated in two sessions. In the first session, the single and combined stimulation conditions were compared on these somatosensory responses and on classification performance. In the second session, the on-line performance with the combined stimulation was evaluated while subjects received feedback. Furthermore, in both sessions, the performance based on ERP and SSSEP features was compared. MAIN RESULTS No difference was found in the ERPs and SSSEPs between stimulation conditions. The combination of ERP and SSSEP features did not perform better than ERP features alone. In both sessions, the classification performances based on ERP and combined features were higher than the classification based on SSSEP features. SIGNIFICANCE Although the multi-signature BCI did not increase performance, it also did not negatively impact it. Therefore, such stimuli could be used and the best-performing feature set could then be chosen individually.


Clinical Neurophysiology | 2011

Shared mechanisms in perception and imagery of auditory accents.

Rutger Vlek; Rebecca Schaefer; C.C.A.M. Gielen; Jason Farquhar; Peter Desain

OBJECTIVE An auditory rhythm can be perceived as a sequence of accented (loud) and non-accented (soft) beats, or it can be imagined. Subjective rhythmization refers to the induction of accenting patterns during the presentation of identical auditory pulses at an isochronous rate. It can be an automatic process, but it can also be voluntarily controlled. We investigated whether imagined accents can be decoded from brain signals on a single-trial basis, and whether there is information shared between perception and imagery in the contrast of accents and non-accents. METHODS Ten subjects perceived and imagined three different metric patterns (two-, three-, and four-beat) superimposed on a steady metronome while electroencephalography (EEG) measurements were made. Shared information between perception and imagery EEG was investigated by means of principal component analysis and by means of single-trial classification. RESULTS Classification of accented versus non-accented beats was possible with an average accuracy of 70% for perception and 61% for imagery data. Cross-condition classification yielded performance significantly above chance level for a classifier trained on perception and tested on imagery data (up to 66%), and vice versa (up to 60%). CONCLUSIONS The results show that detection of imagined accents is possible and reveal similarity in the brain signatures relevant to distinguishing accents from non-accents in perception and imagery. SIGNIFICANCE Our results support the idea of shared mechanisms in perception and imagery for auditory processing. This is relevant for a number of clinical settings, most notably by elucidating the basic mechanisms of rhythmic auditory cueing paradigms, e.g. as used in motor rehabilitation or therapy for Parkinson's disease. As a novel Brain-Computer Interface (BCI) paradigm, our results imply a reduction of the necessary BCI training in healthy subjects and in patients.
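Cross-condition classification, training on one condition and testing on another that shares the same underlying pattern, can be sketched on synthetic data. The ridge classifier, dimensions, and signal strengths below are assumptions for illustration only, not the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def ridge_fit(F, y, lam=1.0):
    """Regularised linear classifier on +/-1 labels."""
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

n, d = 300, 20
pattern = rng.standard_normal(d)                    # accent-vs-non-accent pattern
y_per = np.where(rng.random(n) < 0.5, 1.0, -1.0)    # perception labels
y_img = np.where(rng.random(n) < 0.5, 1.0, -1.0)    # imagery labels
# Both conditions share the same pattern; imagery is weaker and noisier
X_per = y_per[:, None] * pattern + 2.0 * rng.standard_normal((n, d))
X_img = 0.5 * y_img[:, None] * pattern + 2.5 * rng.standard_normal((n, d))

w = ridge_fit(X_per, y_per, lam=10.0)               # train on perception only
acc_cross = float(np.mean(np.sign(X_img @ w) == y_img))
print(f"cross-condition (perception -> imagery) accuracy: {acc_cross:.2f}")
```

Above-chance transfer in this toy arises precisely because both conditions share the same spatial pattern, which is the interpretation the study gives to its cross-condition results.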

Collaboration


Dive into Jason Farquhar's collaborations.

Top Co-Authors

Peter Desain
Radboud University Nijmegen

Yvonne Blokland
Radboud University Nijmegen

Rutger Vlek
Radboud University Nijmegen

Jörgen Bruhn
Radboud University Nijmegen

Pim Haselager
Radboud University Nijmegen

Jeroen Geuze
Radboud University Nijmegen