Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Edmund C. Lalor is active.

Publication


Featured research published by Edmund C. Lalor.


EURASIP Journal on Advances in Signal Processing | 2005

Steady-state VEP-based brain-computer interface control in an immersive 3D gaming environment

Edmund C. Lalor; Simon P. Kelly; C. Finucane; R. Burke; R. Smith; Richard B. Reilly; Gary McDarby

This paper presents the application of an effective EEG-based brain-computer interface design for binary control in a visually elaborate immersive 3D game. The BCI uses the steady-state visual evoked potential (SSVEP) generated in response to phase-reversing checkerboard patterns. Two power-spectrum estimation methods were employed for feature extraction in a series of offline classification tests. Both methods were also implemented during real-time game play. The performance of the BCI was found to be robust to distracting visual stimulation in the game and relatively consistent across six subjects, with 41 of 48 games successfully completed. For the best performing feature extraction method, the average real-time control accuracy across subjects was 89%. The feasibility of obtaining reliable control in such a visually rich environment using SSVEPs is thus demonstrated and the impact of this result is discussed.
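As a rough illustration of this kind of power-spectrum feature extraction, here is a minimal sketch in Python; the sampling rate, flicker frequencies, window length, and simulated data are assumptions for illustration, not the parameters or pipeline reported in the paper.

```python
# Minimal sketch of SSVEP binary classification via power-spectrum estimation.
# Sampling rate, target frequencies, and Welch settings are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 256                      # sampling rate in Hz (assumed)
TARGET_FREQS = (17.0, 20.0)   # checkerboard reversal frequencies (assumed)

def band_power(freqs, psd, f0, half_width=0.5):
    """Mean PSD in a narrow band around a stimulation frequency."""
    band = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return psd[band].mean()

def classify_epoch(eeg_epoch):
    """Binary SSVEP decision for one occipital-channel epoch (1-D array):
    return the index of the flicker frequency that dominates the spectrum."""
    freqs, psd = welch(eeg_epoch, fs=FS, nperseg=FS * 2)
    powers = [band_power(freqs, psd, f0) for f0 in TARGET_FREQS]
    return int(np.argmax(powers))

# Usage: a 4-s simulated epoch containing a 20 Hz SSVEP plus noise
t = np.arange(0, 4, 1 / FS)
epoch = np.sin(2 * np.pi * 20.0 * t) + 0.5 * np.random.randn(t.size)
print(classify_epoch(epoch))  # expected to print 1 (the 20 Hz target)
```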


Nature Neuroscience | 2012

Adolescent impulsivity phenotypes characterized by distinct brain networks

Robert Whelan; Patricia J. Conrod; Jean-Baptiste Poline; Anbarasu Lourdusamy; Tobias Banaschewski; Gareth J. Barker; Mark A. Bellgrove; Christian Büchel; Mark Byrne; Tarrant D.R. Cummins; Mira Fauth-Bühler; Herta Flor; Jürgen Gallinat; Andreas Heinz; Bernd Ittermann; Karl Mann; Jean-Luc Martinot; Edmund C. Lalor; Mark Lathrop; Eva Loth; Frauke Nees; Tomáš Paus; Marcella Rietschel; Michael N. Smolka; Rainer Spanagel; David N. Stephens; Maren Struve; Benjamin Thyreau; Sabine Vollstaedt-Klein; Trevor W. Robbins

The impulsive behavior that is often characteristic of adolescence may reflect underlying neurodevelopmental processes. Moreover, impulsivity is a multi-dimensional construct, and it is plausible that distinct brain networks contribute to its different cognitive, clinical and behavioral aspects. As these networks have not yet been described, we identified distinct cortical and subcortical networks underlying successful inhibitions and inhibition failures in a large sample (n = 1,896) of 14-year-old adolescents. Different networks were associated with drug use (n = 1,593) and attention-deficit hyperactivity disorder symptoms (n = 342). Hypofunctioning of a specific orbitofrontal cortical network was associated with likelihood of initiating drug use in early adolescence. Right inferior frontal activity was related to the speed of the inhibition process (n = 826) and use of illegal substances and associated with genetic variation in a norepinephrine transporter gene (n = 819). Our results indicate that both neural endophenotypes and genetic variation give rise to the various manifestations of impulsive behavior.


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2005

Visual spatial attention tracking using high-density SSVEP data for independent brain-computer communication

Simon P. Kelly; Edmund C. Lalor; Richard B. Reilly; John J. Foxe

The steady-state visual evoked potential (SSVEP) has been employed successfully in brain-computer interface (BCI) research, but its use in a design entirely independent of eye movement has until recently not been reported. This paper presents strong evidence suggesting that the SSVEP can be used as an electrophysiological correlate of visual spatial attention that may be harnessed on its own or in conjunction with other correlates to achieve control in an independent BCI. In this study, 64-channel electroencephalography data were recorded from subjects who covertly attended to one of two bilateral flicker stimuli with superimposed letter sequences. Offline classification of left/right spatial attention was attempted by extracting SSVEPs at optimal channels selected for each subject on the basis of the scalp distribution of SSVEP magnitudes. This yielded an average accuracy of approximately 71% across ten subjects (highest 86%), comparable across two separate cases in which flicker frequencies were set within and outside the alpha range, respectively. Further, combining SSVEP features with attention-dependent parieto-occipital alpha band modulations resulted in an average accuracy of 79% (highest 87%).
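The channel-selection step can be sketched as follows, assuming hypothetical flicker frequencies and using a single-bin DFT magnitude as a stand-in for the paper's SSVEP feature extraction: per-subject channels are ranked by SSVEP magnitude, and the strongest ones feed a left/right decision.

```python
# Illustrative sketch, not the paper's pipeline: select channels with the
# largest SSVEP magnitude, then classify covert attention from the dominant
# flicker frequency. Frequencies and sampling rate are assumptions.
import numpy as np

FS = 256                      # sampling rate in Hz (assumed)
F_LEFT, F_RIGHT = 10.0, 12.0  # flicker frequencies of the two hemifields (assumed)

def ssvep_magnitude(eeg, f0):
    """Per-channel SSVEP magnitude at f0 via a single-bin DFT.
    eeg: array of shape (n_channels, n_samples)."""
    n = eeg.shape[1]
    t = np.arange(n) / FS
    c = eeg @ np.cos(2 * np.pi * f0 * t)
    s = eeg @ np.sin(2 * np.pi * f0 * t)
    return np.sqrt(c ** 2 + s ** 2) / n

def select_channels(calibration_eeg, n_best=4):
    """Choose the channels with the strongest combined SSVEP response."""
    mag = ssvep_magnitude(calibration_eeg, F_LEFT) + ssvep_magnitude(calibration_eeg, F_RIGHT)
    return np.argsort(mag)[-n_best:]

def classify_attention(trial_eeg, channels):
    """Return 'left' or 'right' from the dominant SSVEP over the selected channels."""
    left = ssvep_magnitude(trial_eeg[channels], F_LEFT).mean()
    right = ssvep_magnitude(trial_eeg[channels], F_RIGHT).mean()
    return 'left' if left > right else 'right'
```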


IEEE Transactions on Biomedical Engineering | 2005

Visual spatial attention control in an independent brain-computer interface

Simon P. Kelly; Edmund C. Lalor; Ciarán Finucane; Gary McDarby; Richard B. Reilly

This paper presents a novel brain-computer interface (BCI) design employing visual evoked potential (VEP) modulations in a paradigm involving no dependency on peripheral muscles or nerves. The system utilizes electrophysiological correlates of visual spatial attention mechanisms, the self-regulation of which is naturally developed through continuous application in everyday life. An interface involving real-time biofeedback is described, demonstrating reduced training time in comparison to existing BCIs based on self-regulation paradigms. Subjects were cued to covertly attend to a sequence of letters superimposed on a flicker stimulus in one visual field while ignoring a similar stimulus of a different flicker frequency in the opposite visual field. Classification of left/right spatial attention is achieved by extracting steady-state visual evoked potentials (SSVEPs) elicited by the stimuli. Six out of eleven physically and neurologically healthy subjects demonstrate reliable control in binary decision-making, achieving at least 75% correct selections in at least one of only five sessions, each of approximately 12-min duration. The highest-performing subject achieved over 90% correct selections in each of four sessions. This independent BCI may provide a new method of real-time interaction for those with little or no peripheral control, with the added advantage of requiring only brief training.


Cerebral Cortex | 2015

Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG

James A. O'Sullivan; Alan J. Power; Nima Mesgarani; Siddharth Rajaram; John J. Foxe; Barbara G. Shinn-Cunningham; Malcolm Slaney; Shihab A. Shamma; Edmund C. Lalor

How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus it would be extremely useful for research in many populations if stimulus-reconstruction was effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain-computer interfaces.
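A minimal sketch of the stimulus-reconstruction idea, assuming a lagged ridge regression from multichannel EEG back to the speech envelope; the lag window, regularization, and solver below are illustrative choices, not the paper's exact implementation.

```python
# Sketch of attention decoding via stimulus reconstruction (illustrative
# lag window and ridge parameter; not the paper's exact settings).
import numpy as np

def lag_matrix(eeg, max_lag):
    """Stack time-lagged copies of the EEG: (n_samples, n_channels * (max_lag + 1))."""
    n_samples, n_channels = eeg.shape
    cols = []
    for lag in range(max_lag + 1):
        shifted = np.zeros_like(eeg)
        shifted[lag:] = eeg[:n_samples - lag]
        cols.append(shifted)
    return np.hstack(cols)

def train_decoder(eeg, envelope, max_lag=64, ridge=1e2):
    """Fit g minimizing ||X g - envelope||^2 + ridge * ||g||^2."""
    X = lag_matrix(eeg, max_lag)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)

def decode_attention(eeg, env_a, env_b, g, max_lag=64):
    """Correlate the reconstructed envelope with each speaker's envelope."""
    recon = lag_matrix(eeg, max_lag) @ g
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return 'speaker A' if r_a > r_b else 'speaker B'
```

In practice, a decoder of this form would be trained on trials where the attended speaker is known and then applied to held-out single trials.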


European Journal of Neuroscience | 2010

Neural responses to uninterrupted natural speech can be extracted with precise temporal resolution

Edmund C. Lalor; John J. Foxe

The human auditory system has evolved to efficiently process individual streams of speech. However, obtaining temporally detailed responses to distinct continuous natural speech streams has hitherto been impracticable using standard neurophysiological techniques. Here a method is described which provides for the estimation of a temporally precise electrophysiological response to uninterrupted natural speech. We have termed this response AESPA (Auditory Evoked Spread Spectrum Analysis) and it represents an estimate of the impulse response of the auditory system. It is obtained by assuming that the recorded electrophysiological function represents a convolution of the amplitude envelope of a continuous speech stream with the to‐be‐estimated impulse response. We present examples of these responses using both scalp and intracranially recorded human EEG, which were obtained while subjects listened to a binaurally presented recording of a male speaker reading naturally from a classic work of fiction. This method expands the arsenal of stimulation types that can now be effectively used to derive auditory evoked responses and allows for the use of considerably more ecologically valid stimulation parameters. Some implications for future research efforts are presented.
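Under the stated convolution assumption, the impulse response can be recovered by regressing the EEG onto time-lagged copies of the speech envelope. A minimal single-channel sketch, with an illustrative lag window and ridge term:

```python
# Sketch of impulse-response estimation under the linear assumption that the
# EEG is the speech envelope convolved with an unknown response.
# Window length and regularization are illustrative assumptions.
import numpy as np

def estimate_impulse_response(envelope, eeg, fs, tmin=0.0, tmax=0.3, ridge=1.0):
    """Least-squares estimate of w in eeg[t] ~ sum_k w[k] * envelope[t - k].

    envelope, eeg: 1-D arrays of equal length (single channel for simplicity).
    Returns the lags (in seconds) and the estimated response w.
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n = envelope.size
    # Build the lagged stimulus matrix (one column per lag).
    X = np.zeros((n, lags.size))
    for j, lag in enumerate(lags):
        X[lag:, j] = envelope[:n - lag]
    w = np.linalg.solve(X.T @ X + ridge * np.eye(lags.size), X.T @ eeg)
    return lags / fs, w
```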


European Journal of Neuroscience | 2012

At what time is the cocktail party? A late locus of selective attention to natural speech.

Alan J. Power; John J. Foxe; Emmajane Forde; Richard B. Reilly; Edmund C. Lalor

Distinguishing between speakers and focusing attention on one speaker in multi‐speaker environments is extremely important in everyday life. Exactly how the brain accomplishes this feat and, in particular, the precise temporal dynamics of this attentional deployment are as yet unknown. A long history of behavioral research using dichotic listening paradigms has debated whether selective attention to speech operates at an early stage of processing based on the physical characteristics of the stimulus or at a later stage during semantic processing. With its poor temporal resolution fMRI has contributed little to the debate, while EEG–ERP paradigms have been hampered by the need to average the EEG in response to discrete stimuli which are superimposed onto ongoing speech. This presents a number of problems, foremost among which is that early attention effects in the form of endogenously generated potentials can be so temporally broad as to mask later attention effects based on the higher level processing of the speech stream. Here we overcome this issue by utilizing the AESPA (auditory evoked spread spectrum analysis) method which allows us to extract temporally detailed responses to two concurrently presented speech streams in natural cocktail‐party‐like attentional conditions without the need for superimposed probes. We show attentional effects on exogenous stimulus processing in the 200–220 ms range in the left hemisphere. We discuss these effects within the context of research on auditory scene analysis and in terms of a flexible locus of attention that can be deployed at a particular processing stage depending on the task.


Journal of Neurophysiology | 2009

Resolving Precise Temporal Processing Properties of the Auditory System Using Continuous Stimuli

Edmund C. Lalor; Alan J. Power; Richard B. Reilly; John J. Foxe

In natural environments, complex and continuous auditory stimulation is virtually ubiquitous. The human auditory system has evolved to efficiently process an infinity of everyday sounds, which range from short, simple bursts of noise to signals with a much higher order of information such as speech. Investigation of temporal processing in this system using the event-related potential (ERP) technique has led to great advances in our knowledge. However, this method is restricted by the need to present simple, discrete, repeated stimuli to obtain a useful response. Alternatively, the continuous auditory steady-state response is used, although this method reduces the evoked response to its fundamental frequency component at the expense of useful information on the timing of response transmission through the auditory system. In this report, we describe a method for eliciting a novel ERP, known as the AESPA (auditory-evoked spread spectrum analysis), which circumvents these limitations. This method uses rapid amplitude modulation of audio carrier signals to estimate the impulse response of the auditory system. We show AESPA responses with high signal-to-noise ratios obtained using two types of carrier wave: a 1-kHz tone and broadband noise. To characterize these responses, they are compared with auditory-evoked potentials elicited using standard techniques. A number of similarities and differences between the responses are noted, and these are discussed in light of the differing stimulation and analysis methods used. Data are presented that demonstrate the generalizability of the AESPA method, and a number of applications are proposed.
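To make the stimulation scheme concrete, the sketch below builds AESPA-style stimuli by rapidly amplitude-modulating a 1-kHz tone and broadband noise with a stochastic envelope; the envelope update rate, duration, and sampling rate are assumptions for illustration.

```python
# Illustrative construction of AESPA-style stimuli: a carrier (1 kHz tone or
# broadband noise) amplitude-modulated by a stochastic envelope.
# Envelope rate, duration, and sampling rate are assumed values.
import numpy as np

FS = 44100    # audio sampling rate in Hz
DUR = 10.0    # stimulus duration in seconds
ENV_FS = 60   # envelope update rate in Hz (assumed)

t = np.arange(int(FS * DUR)) / FS

# Stochastic amplitude envelope, interpolated up to the audio rate.
env_coarse = np.random.uniform(0.0, 1.0, int(ENV_FS * DUR))
envelope = np.interp(t, np.arange(env_coarse.size) / ENV_FS, env_coarse)

tone_carrier = np.sin(2 * np.pi * 1000.0 * t)  # 1 kHz tone carrier
noise_carrier = np.random.randn(t.size)        # broadband noise carrier

tone_stimulus = envelope * tone_carrier
noise_stimulus = envelope * noise_carrier
# The same `envelope` would then be regressed against the recorded EEG
# (as in the impulse-response sketch above) to obtain the response estimate.
```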


NeuroImage | 2006

The VESPA: a method for the rapid estimation of a visual evoked potential.

Edmund C. Lalor; Barak A. Pearlmutter; Richard B. Reilly; Gary McDarby; John J. Foxe

Faster and less obtrusive means for measuring a visual evoked potential would be valuable in clinical testing and basic neuroscience research. This study presents a method for accomplishing this by smoothly modulating the luminance of a visual stimulus using a stochastic process. Despite its visually unobtrusive nature, the rich statistical structure of the stimulus enables rapid estimation of the visual system's impulse response. The profile of these responses, which we call VESPAs, correlates with standard VEPs, with r = 0.91, p < 10^-28 for the group average. The time taken to obtain a VESPA with a given signal-to-noise ratio compares favorably to that required to obtain a VEP with a similar level of certainty. Additionally, we show that VESPA responses to two independent stimuli can be obtained simultaneously, which could drastically reduce the time required to collect responses to multiple stimuli. The new method appears to provide a useful alternative to standard VEP methods, and to have potential application both in clinical practice and in the study of sensory and perceptual functions.
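One way to see why responses to two independent stimuli can be obtained simultaneously: because the stochastic luminance signals are statistically independent, a single joint regression can recover an impulse response for each. A minimal sketch, with an assumed lag count and ridge term (not the paper's code):

```python
# Joint estimation of two VESPA-style impulse responses from one EEG channel,
# exploiting the statistical independence of the two luminance signals.
import numpy as np

def lagged(x, n_lags):
    """Lagged design matrix (n_samples, n_lags) for one stimulus signal."""
    X = np.zeros((x.size, n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:x.size - k]
    return X

def estimate_two_vespas(lum_a, lum_b, eeg, n_lags=100, ridge=1.0):
    """Jointly estimate impulse responses to two independent luminance streams."""
    X = np.hstack([lagged(lum_a, n_lags), lagged(lum_b, n_lags)])
    w = np.linalg.solve(X.T @ X + ridge * np.eye(2 * n_lags), X.T @ eeg)
    return w[:n_lags], w[n_lags:]  # response to stimulus A, response to stimulus B
```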


Current Biology | 2015

Low-Frequency Cortical Entrainment to Speech Reflects Phoneme-Level Processing

Giovanni M. Di Liberto; James A. O’Sullivan; Edmund C. Lalor

The human ability to understand speech is underpinned by a hierarchical auditory system whose successive stages process increasingly complex attributes of the acoustic input. It has been suggested that to produce categorical speech perception, this system must elicit consistent neural responses to speech tokens (e.g., phonemes) despite variations in their acoustics. Here, using electroencephalography (EEG), we provide evidence for this categorical phoneme-level speech processing by showing that the relationship between continuous speech and neural activity is best described when that speech is represented using both low-level spectrotemporal information and categorical labeling of phonetic features. Furthermore, the mapping between phonemes and EEG becomes more discriminative for phonetic features at longer latencies, in line with what one might expect from a hierarchical system. Importantly, these effects are not seen for time-reversed speech. These findings may form the basis for future research on natural language processing in specific cohorts of interest and for broader insights into how brains transform acoustic input into meaning.
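The modeling comparison behind this result can be sketched as a lagged linear forward model from a stimulus representation to the EEG, scored by held-out prediction accuracy; the feature dimensions, lag count, and ridge value below are illustrative assumptions rather than the study's settings.

```python
# Sketch of comparing stimulus representations with a forward model:
# fit on training data, score by correlation between predicted and actual EEG.
import numpy as np

def lagged_design(features, n_lags):
    """features: (n_samples, n_features) -> (n_samples, n_features * n_lags)."""
    n, d = features.shape
    cols = []
    for k in range(n_lags):
        shifted = np.zeros_like(features)
        shifted[k:] = features[:n - k]
        cols.append(shifted)
    return np.hstack(cols)

def fit_and_score(train_feat, train_eeg, test_feat, test_eeg, n_lags=40, ridge=1e3):
    """Ridge-fit a forward model on training data; return test-set correlation
    (single EEG channel for simplicity)."""
    Xtr, Xte = lagged_design(train_feat, n_lags), lagged_design(test_feat, n_lags)
    w = np.linalg.solve(Xtr.T @ Xtr + ridge * np.eye(Xtr.shape[1]), Xtr.T @ train_eeg)
    pred = Xte @ w
    return np.corrcoef(pred, test_eeg)[0, 1]

# Comparing a spectrogram S, phonetic-feature labels F, and their combination
# np.hstack([S, F]): a higher score for the combined representation than for S
# alone is the kind of evidence the study reports for phoneme-level processing.
```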

Collaboration


Dive into Edmund C. Lalor's collaborations.

Top Co-Authors

John J. Foxe
University of Rochester

Simon P. Kelly
University College Dublin