Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hooman Nezamfar is active.

Publications


Featured research published by Hooman Nezamfar.


Journal of Neuroengineering and Rehabilitation | 2014

Quantitative change of EEG and respiration signals during mindfulness meditation

Asieh Ahani; Helané Wahbeh; Hooman Nezamfar; Meghan Miller; Deniz Erdogmus; Barry S. Oken

Background: This study investigates measures of mindfulness meditation (MM) as a mental practice, in which a resting but alert state of mind is maintained. A population of older people with high stress levels participated in this study, while electroencephalographic (EEG) and respiration signals were recorded during an MM intervention. The physiological signals during meditation and control conditions were analyzed with signal processing. Methods: EEG and respiration data were collected and analyzed from 34 novice meditators after a 6-week meditation intervention. Collected data were analyzed with spectral analysis, phase analysis, and classification to evaluate an objective marker for meditation. Results: Different frequency bands showed differences between meditation and control conditions. Furthermore, we established a classifier using EEG and respiration signals with a higher accuracy (85%) at discriminating between meditation and control conditions than a classifier using the EEG signal only (78%). Conclusion: A support vector machine (SVM) classifier with an EEG and respiration feature vector is a viable objective marker for meditation ability. This classifier should be able to quantify different levels of meditation depth and meditation experience in future studies.
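The combined-feature comparison described in this abstract can be illustrated with a short sketch. Everything below is synthetic: the feature definitions, sample sizes, and effect sizes are invented stand-ins, not the study's data or code.

```python
# Illustrative toy: compare an SVM trained on "EEG-only" features against
# one trained on "EEG + respiration" features, using synthetic data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200  # epochs per condition (invented)

# Hypothetical features: 4 EEG band powers + 2 respiration-derived measures.
meditation = np.hstack([rng.normal(1.0, 1.0, (n, 4)), rng.normal(0.8, 0.5, (n, 2))])
control    = np.hstack([rng.normal(0.0, 1.0, (n, 4)), rng.normal(0.0, 0.5, (n, 2))])
X = np.vstack([meditation, control])
y = np.r_[np.ones(n), np.zeros(n)]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

acc_eeg_only = cross_val_score(clf, X[:, :4], y, cv=5).mean()  # EEG features alone
acc_combined = cross_val_score(clf, X, y, cv=5).mean()         # EEG + respiration

print(f"EEG only: {acc_eeg_only:.2f}, EEG+respiration: {acc_combined:.2f}")
```

With the synthetic effect sizes chosen here, the respiration features carry extra class information, so the combined classifier scores higher, mirroring the 78% vs. 85% pattern reported above.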


International Conference of the IEEE Engineering in Medicine and Biology Society | 2011

Fusion with language models improves spelling accuracy for ERP-based brain computer interface spellers

Umut Orhan; Deniz Erdogmus; Brian Roark; Shalini Purwar; Kenneth E. Hild; Barry S. Oken; Hooman Nezamfar; Melanie Fried-Oken

Event-related potentials (ERPs) corresponding to a stimulus in electroencephalography (EEG) can be used to detect the intent of a person for brain computer interfaces (BCIs). This paradigm is widely utilized to build letter-by-letter text input systems using BCI. Nevertheless, a BCI typewriter depending only on EEG responses will generally not be sufficiently accurate for single-trial operation, and existing systems utilize many-trial schemes to achieve accuracy at the cost of speed. Hence, incorporation of a language-model-based prior or additional evidence is vital to improve accuracy and speed. In this paper, we study the effects of Bayesian fusion of an n-gram language model with a regularized discriminant analysis ERP detector for EEG-based BCIs. The letter classification accuracies are rigorously evaluated for varying language model orders as well as the number of ERP-inducing trials. The results demonstrate that the language models contribute significantly to letter classification accuracy. Specifically, we find that a BCI speller supported by a 4-gram language model may achieve the same performance using 3-trial ERP classification for the initial letters of words and single-trial ERP classification for the subsequent ones. Overall, fusion of evidence from EEG and language models yields a significant opportunity to increase the word rate of a BCI-based typing system.
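At its core, the Bayesian fusion step described above multiplies an ERP-detector likelihood over letters by a language-model prior and renormalizes. A minimal toy sketch, with all probabilities invented for illustration:

```python
# Toy Bayesian fusion of ERP evidence with a language-model prior.
# Not the authors' code; the alphabet and probabilities are made up.
import numpy as np

letters = np.array(list("abcd"))  # toy 4-letter alphabet

# p(EEG evidence | letter): suppose the ERP detector weakly favours 'b'.
likelihood = np.array([0.22, 0.30, 0.24, 0.24])

# p(letter | context) from a hypothetical n-gram model: context strongly
# predicts 'd'.
lm_prior = np.array([0.05, 0.15, 0.10, 0.70])

# Bayes' rule: posterior ∝ likelihood × prior, then normalize.
posterior = likelihood * lm_prior
posterior /= posterior.sum()

print("EEG alone picks:", letters[likelihood.argmax()])
print("Fused decision: ", letters[posterior.argmax()])
```

The point matches the abstract: weak single-trial ERP evidence can be overridden by a strong linguistic prior, which is what lets the fused system drop to single-trial classification for predictable letters.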


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2015

Language-Model Assisted Brain Computer Interface for Typing: A Comparison of Matrix and Rapid Serial Visual Presentation

Mohammad Moghadamfalahi; Umut Orhan; Murat Akcakaya; Hooman Nezamfar; Melanie Fried-Oken; Deniz Erdogmus

Noninvasive electroencephalography (EEG)-based brain-computer interfaces (BCIs) popularly utilize event-related potential (ERP) for intent detection. Specifically, for EEG-based BCI typing systems, different symbol presentation paradigms have been utilized to induce ERPs. In this manuscript, through an experimental study, we assess the speed, recorded signal quality, and system accuracy of a language-model-assisted BCI typing system using three different presentation paradigms: a 4 × 7 matrix paradigm of a 28-character alphabet with row-column presentation (RCP) and single-character presentation (SCP), and rapid serial visual presentation (RSVP) of the same. Our analyses show that signal quality and classification accuracy are comparable between the two visual stimulus presentation paradigms. In addition, we observe that while the matrix-based paradigm can be generally employed with lower inter-trial-interval (ITI) values, the best presentation paradigm and ITI value configuration is user dependent. This potentially warrants offering both presentation paradigms and variable ITI options to users of BCI typing systems.


International Journal of Imaging Systems and Technology | 2011

Decoding of multichannel EEG activity from the visual cortex in response to pseudorandom binary sequences of visual stimuli

Hooman Nezamfar; Umut Orhan; Shalini Purwar; Kenneth E. Hild; Barry S. Oken; Deniz Erdogmus

Electroencephalography (EEG) signals have been an attractive choice for building noninvasive brain computer interfaces (BCIs) for nearly three decades. Depending on the stimuli, there are different responses one can obtain from EEG signals. One of them is the P300 response, a visually evoked response that has been widely studied. The steady state visually evoked potential (SSVEP) is the response to an oscillating stimulus of fixed frequency, which is detectable from the visual cortex. There also exists some work on using an m-sequence with different lags as the control sequence of the flickering stimuli. In this study, we used several m-sequences instead of one, with the intent of increasing the number of possible command options in a BCI setting. We also tested two different classifiers to decide between the m-sequences and studied the performance of multichannel classifiers versus single-channel classifiers. The study is done over two different flickering frequencies, 15 and 30 Hz, to investigate the effect of flickering frequency on the accuracy of the classification methods. Our study shows that the EEG channels are correlated; although all the channels contain some useful information, combining them with a multichannel classifier based on the assumption of conditional independence does not improve the classification accuracy. In addition, we were able to obtain reasonably good results using the 30 Hz flickering frequency compared with the 15 Hz flickering frequency, which allows shorter training and decision-making times.
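For readers unfamiliar with m-sequences, the decoding idea above can be sketched in a few lines: generate maximal-length sequences with a linear-feedback shift register (length 31, as in the paper), then identify which sequence drove a noisy response by template correlation. The tap choices, noise level, and single-channel setup below are illustrative assumptions, not the paper's pipeline:

```python
# Sketch of the core idea: LFSR-generated m-sequences plus a simple
# correlation classifier on a simulated noisy single-channel response.
import numpy as np

def m_sequence(taps, n_bits=5):
    """Length-(2^n - 1) m-sequence from a Fibonacci LFSR with given taps."""
    state = [1] * n_bits
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out) * 2.0 - 1.0  # map {0,1} -> {-1,+1}

# Two of the "several m-sequences": same register length, different
# (standard primitive) feedback taps for n = 5.
seq_a = m_sequence(taps=(5, 3))
seq_b = m_sequence(taps=(5, 4, 3, 2))

rng = np.random.default_rng(1)
response = seq_a + rng.normal(0, 0.8, seq_a.size)  # noisy "EEG" response to seq_a

# Correlate the response against each template; pick the best match.
scores = {"A": response @ seq_a, "B": response @ seq_b}
decision = max(scores, key=scores.get)
print("decided:", decision)
```

Distinct m-sequences are nearly uncorrelated with each other while correlating strongly with themselves, which is what makes each sequence usable as a separate command option.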


International Conference on Acoustics, Speech, and Signal Processing | 2011

On visually evoked potentials in EEG induced by multiple pseudorandom binary sequences for brain computer interface design

Hooman Nezamfar; Umut Orhan; Deniz Erdogmus; Kenneth E. Hild; Shalini Purwar; Barry S. Oken; Melanie Fried-Oken

Visually evoked potentials have attracted great attention in the last two decades for the purpose of brain computer interface design. The visually evoked P300 response is a major signal of interest that has been widely studied. Steady state visual evoked potentials, which occur in response to periodically flickering visual stimuli, have been primarily investigated as an alternative. There also exists some work on the use of an m-sequence and its shifted versions to induce responses that originate primarily in the visual cortex but are not periodic. In this paper, we study the use of multiple m-sequences for intent discrimination in the brain interface, as opposed to a single m-sequence whose shifted versions are to be discriminated from each other. Specifically, we used four different m-sequences of length 31. Our main goal is to study whether the bit presentation rate of the m-sequences has an impact on classification accuracy and speed. In this initial study, in which we compared two basic classifier schemes using EEG data acquired with 15 Hz and 30 Hz bit presentation rates, our results are mixed: on one subject, we obtained promising results indicating that the bit presentation rate could be increased without a decrease in classification accuracy, leading to a faster decision rate in the brain interface; on our second subject, this conclusion was not supported. Further detailed experimental studies, as well as signal processing methodology design, especially for information fusion across EEG channels, will be conducted to investigate this question further.


International IEEE/EMBS Conference on Neural Engineering | 2013

Change in physiological signals during mindfulness meditation

Asieh Ahani; Helané Wahbeh; Meghan Miller; Hooman Nezamfar; Deniz Erdogmus; Barry S. Oken

Mindfulness meditation (MM) is an inward mental practice, in which a resting but alert state of mind is maintained. An MM intervention was performed for a population of older people with high stress levels. This study assessed signal processing methodologies of electroencephalographic (EEG) and respiration signals during meditation and control conditions to aid in quantification of the meditative state. EEG and respiration data were collected and analyzed from 34 novice meditators after a 6-week meditation intervention. Collected data were analyzed with spectral analysis and support vector machine classification to evaluate an objective marker for meditation. We observed differences between meditation and control conditions in the alpha, beta, and theta frequency bands. Furthermore, we established a classifier using EEG and respiration signals with a higher accuracy at discriminating between meditation and control conditions than one using the EEG signal only. An EEG- and respiration-based classifier is a viable objective marker for meditation ability. Future studies should quantify different levels of meditation depth and meditation experience using this classifier. Development of an objective physiological meditation marker will allow the mind-body medicine field to advance by strengthening the rigor of its methods.


IEEE Signal Processing Letters | 2015

A Bayesian Framework for Intent Detection and Stimulation Selection in SSVEP BCIs

Matt Higger; Murat Akcakaya; Hooman Nezamfar; Gerald LaMountain; Umut Orhan; Deniz Erdogmus

Currently, many Brain Computer Interface (BCI) classifiers output point estimates of user intent, which makes it difficult to incorporate contextual prior information or assign a principled confidence measure to a decision. We propose a Bayesian framework to extend current Steady State Visually Evoked Potential (SSVEP) classifiers to maximum a posteriori (MAP) classifiers by using a Kernel Density Estimate (KDE) to learn the distribution of features conditioned on the stimulation class. To demonstrate our framework, we extend Canonical Correlation Analysis (CCA) and Power Spectral Density (PSD) style methods. Traditionally, in either example, the class is estimated as the class associated with the maximum feature. Our framework increases performance by relaxing the assumption that a stimulation class's sample maximizes its class-associated feature. Further, by leveraging the KDE, we present a method that estimates the performance of a classifier under different stimulation frequency sets. Using this, we optimize the selection of stimulation frequencies from those present in a training set.
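A toy version of the MAP step described above, using a kernel density estimate per stimulation class. The feature layout, class means, and query point are invented for illustration and are not the authors' implementation:

```python
# Toy MAP classification with per-class KDEs, contrasted with the
# traditional "pick the class whose feature component is largest" rule.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Hypothetical 2-D feature (e.g., per-frequency correlation scores)
# for two stimulation classes, with invented means.
train = {
    "f1": rng.normal([0.5, 0.6], 0.3, (300, 2)),
    "f2": rng.normal([0.1, 0.9], 0.3, (300, 2)),
}
kdes = {c: gaussian_kde(x.T) for c, x in train.items()}
priors = {"f1": 0.5, "f2": 0.5}  # could encode context information

def map_classify(feature):
    # posterior ∝ p(feature | class) × p(class)
    post = {c: kdes[c](feature)[0] * priors[c] for c in kdes}
    return max(post, key=post.get)

# Query where component 1 exceeds component 0, so a max-feature rule
# would pick f2, yet the sample sits near the f1 training cloud.
query = np.array([0.45, 0.6])
print(map_classify(query))
```

This is exactly the relaxation the abstract describes: the density model can classify a sample correctly even when that sample does not maximize its own class's feature component.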


IEEE Journal of Selected Topics in Signal Processing | 2016

FlashType

Hooman Nezamfar; Seyed Sadegh Mohseni Salehi; Mohammad Moghadamfalahi; Deniz Erdogmus

Brain computer interfaces (BCIs) offer individuals with disabilities an alternative channel of communication and control, hence they have been receiving increasing interest. BCIs can also be useful for healthy individuals in situations limiting their movement or where other computer interaction modalities need to be supplemented. Event-related and steady state visually evoked potentials (SSVEPs) are the top two brain signal types used in developing BCIs that allow the user to make a choice from a discrete set of options, including the selection of commands from a menu for a robot or computer to perform, as well as typing letters, symbols, or icons for communication. Popular BCI speller paradigms, such as the P300 Matrix Speller, RSVP Keyboard™, or SSVEP spellers in which the letters on the keyboard display flicker, are sensitive to font, size, and presentation speed. In addition, sensitivity to eye-gaze control plays a significant role in the usability of most of these keyboards. We present a code-VEP based BCI, utilized in a language-model-assisted keyboard application. Using a cursor-based selection method, stimuli and targets are separated. FlashType™ separates visual stimulation from alphabet presentation to achieve performance invariance under presentation variations. Therefore, FlashType™ can be used for all languages, including ones containing symbols and icons. FlashType™ contains a Static Keyboard, a row of Suggested Characters, and a row of Predicted Words. By default, FlashType™ uses only one EEG electrode and four stimuli. The system can operate using only one stimulus at a lower selection rate, which is useful for individuals with limited or no gaze control; this feature is to be explored in future work. Replacing letters with text or icons representing commands would allow controlling a computer or robot. In this study, FlashType™ was evaluated by three individuals performing 10 Mastery tasks. In-depth experimentation, such as assessing the system with potential end users writing long passages of text, will be done in future work.


International Workshop on Machine Learning for Signal Processing | 2016


Marzieh Haghighi; Mohammad Moghadamfalahi; Hooman Nezamfar; Murat Akcakaya; Deniz Erdogmus

Auditory-evoked noninvasive electroencephalography (EEG) based brain-computer interfaces (BCIs) could be useful for improved hearing aids in the future. This manuscript investigates the role of frequency and spatial features of audio signal in EEG activities in an auditory BCI system with the purpose of detecting the attended auditory source in a cocktail party setting. A cross correlation based feature between EEG and speech envelope is shown to be useful to discriminate attention in the case of two different speakers. Results indicate that, on average, for speaker and direction (of arrival) of audio signals classification, the presented approach yields 91% and 86% accuracy, respectively.


Brain-Computer Interfaces | 2014

A Context-Aware c-VEP-Based BCI Typing Interface Using EEG Signals

Asieh Ahani; Karl Wiegand; Umut Orhan; Murat Akcakaya; Mohammad Moghadamfalahi; Hooman Nezamfar; Rupal Patel; Deniz Erdogmus

One of the principal application areas for brain-computer interface (BCI) technology is augmentative and alternative communication (AAC), typically used by people with severe speech and physical impairments (SSPI). Existing word- and phrase-based AAC solutions that employ BCIs utilizing electroencephalography (EEG) are sometimes supplemented by icons. Icon-based BCI systems that use binary signaling methods, such as P300 detection, combine hierarchical layouts with some form of scanning. The rapid serial visual presentation (RSVP) IconMessenger combines P300 signal detection with the icon-based semantic message construction system of iconCHAT. Language models are incorporated in the inference engine, and modifications that facilitate the use of RSVP were made, such as icon semantic-role order selection and the tight fusion of language and EEG evidence. The results of a study conducted with 10 healthy participants suggest that the system has potential as an AAC system for real-time typing.

Collaboration


Dive into Hooman Nezamfar's collaborations.

Top Co-Authors

Murat Akcakaya

University of Pittsburgh


Matt Higger

Northeastern University
