Seyed Sadegh Mohseni Salehi
Northeastern University
Publications
Featured research published by Seyed Sadegh Mohseni Salehi.
IEEE Transactions on Medical Imaging | 2017
Seyed Sadegh Mohseni Salehi; Deniz Erdogmus; Ali Gholipour
Brain extraction, or whole-brain segmentation, is an important first step in many neuroimage analysis pipelines. The accuracy and robustness of brain extraction are therefore crucial for the accuracy of the entire brain analysis process. State-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions; and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information, along with the original image patches, to learn the local shape and connectedness of the brain and extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to recently reported results in the literature on two publicly available benchmark data sets, LPBA40 and OASIS, on which we obtained Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm.
Furthermore, we evaluated the performance of our algorithm on the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), which performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.
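The auto-context loop described in the abstract can be sketched as follows: the posterior probability map from one pass is stacked with the image as an extra input channel for the next pass. This is a hypothetical illustration of the iteration only; `toy_predict` is a stand-in for the trained CNN, not the paper's actual network.

```python
import numpy as np

def auto_context_segment(image, predict, n_iters=3):
    """Iteratively refine a brain mask: the posterior map from the
    previous pass is stacked with the image as an extra input channel
    for the next pass (the auto-context idea)."""
    posterior = np.full_like(image, 0.5)          # uninformative prior
    for _ in range(n_iters):
        context_input = np.stack([image, posterior], axis=0)
        posterior = predict(context_input)        # updated posterior map
    return posterior > 0.5                        # final binary brain mask

def toy_predict(x):
    """Toy stand-in for a trained CNN: mixes an intensity threshold
    with the context channel (purely illustrative)."""
    image, prior = x
    return 0.5 * (image > image.mean()).astype(float) + 0.5 * prior
```

In the real method the context channel lets the network learn local shape and connectedness; here the toy predictor merely demonstrates how the posterior is fed back in.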
NeuroImage | 2017
Bahram Marami; Seyed Sadegh Mohseni Salehi; Onur Afacan; Benoit Scherrer; Caitlin K. Rollins; Edward Yang; Judy A. Estroff; Simon K. Warfield; Ali Gholipour
Diffusion-weighted magnetic resonance imaging (DWI) is one of the most promising tools for the analysis of neural microstructure and the structural connectome of the human brain. The application of DWI to map early development of the human connectome in utero, however, is challenged by intermittent fetal and maternal motion that disrupts the spatial correspondence of data acquired during the relatively long DWI acquisitions; fetuses move continuously during DWI scans. Reliable and accurate analysis of the fetal brain structural connectome requires careful compensation of motion effects and robust reconstruction to avoid introducing bias based on the degree of fetal motion. In this paper, we introduce a novel robust algorithm to reconstruct in-vivo diffusion-tensor MRI (DTI) of the moving fetal brain and show its effect on structural connectivity analysis. The proposed algorithm involves multiple steps of image registration, incorporating a dynamic registration-based motion tracking algorithm to restore the spatial correspondence of DWI data at the slice level and reconstruct DTI of the fetal brain in the standard (atlas) coordinate space. A weighted linear least squares approach is adapted to remove the effect of intra-slice motion and reconstruct DTI from motion-corrected data. The proposed algorithm was tested on data obtained from 21 healthy fetuses scanned in utero at 22-38 weeks gestation. Significantly higher fractional anisotropy values in fiber-rich regions, and the analysis of whole-brain tractography and group structural connectivity, showed the efficacy of the proposed method compared to analyses based on the original data and previously proposed methods. The results of this study show that slice-level motion correction and robust reconstruction are necessary for reliable in-vivo structural connectivity analysis of the fetal brain.
Connectivity analysis based on graph-theoretic measures shows a high degree of modularity and clustering, and short average characteristic path lengths, indicative of the small-worldness property of the fetal brain network. These findings comply with previous findings in newborns and a recent study on fetuses. The proposed algorithm can provide valuable information from DWI of the fetal brain that is not available in the assessment of the original 2D slices, and may be used to more reliably study the developing fetal brain connectome.
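The weighted linear least-squares step mentioned in the abstract follows the standard log-linearized diffusion tensor model. The single-voxel sketch below is a generic illustration with the common signal-squared weighting; it is not the paper's full motion-robust, slice-level pipeline.

```python
import numpy as np

def wls_tensor_fit(signals, bvals, bvecs):
    """Weighted linear least-squares fit of a diffusion tensor for one
    voxel, using the log-linearized model ln S = ln S0 - b g^T D g and
    the common weighting w_i = S_i^2. Returns
    [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]."""
    g = np.asarray(bvecs, float)
    b = np.asarray(bvals, float)
    s = np.asarray(signals, float)
    # Design matrix: one row per diffusion-weighted measurement.
    X = np.column_stack([
        np.ones_like(b),
        -b * g[:, 0] ** 2, -b * g[:, 1] ** 2, -b * g[:, 2] ** 2,
        -2 * b * g[:, 0] * g[:, 1],
        -2 * b * g[:, 0] * g[:, 2],
        -2 * b * g[:, 1] * g[:, 2],
    ])
    y = np.log(s)
    W = np.diag(s ** 2)                  # down-weight noisy low-signal points
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

In the paper this kind of fit is applied after slice-level motion correction, so that only motion-consistent measurements contribute to the tensor.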
IEEE Journal of Selected Topics in Signal Processing | 2016
Hooman Nezamfar; Seyed Sadegh Mohseni Salehi; Mohammad Moghadamfalahi; Deniz Erdogmus
Brain-computer interfaces (BCIs) offer individuals with disabilities an alternative channel of communication and control, and hence have been receiving increasing interest. BCIs can also be useful for healthy individuals in situations that limit their movement or where other computer interaction modalities need to be supplemented. Event-related potentials and steady-state visually evoked potentials (SSVEPs) are the top two brain signal types used in developing BCIs that allow the user to make a choice from a discrete set of options, including the selection of commands from a menu for a robot or computer to perform, as well as typing letters, symbols, or icons for communication. Popular BCI speller paradigms, such as the P300 Matrix Speller, RSVP Keyboard™, or SSVEP spellers in which the letters on the keyboard display flicker, are sensitive to the font, size, and presentation speed. In addition, sensitivity to eye gaze control plays a significant role in the usability of most of these keyboards. We present a code-VEP based BCI utilized in a language-model-assisted keyboard application. Using a cursor-based selection method, stimuli and targets are separated: FlashType™ separates visual stimulation from alphabet presentation to achieve performance invariance under presentation variations. FlashType™ can therefore be used for all languages, including those containing symbols and icons. FlashType™ contains a Static Keyboard, a row of Suggested Characters, and a row of Predicted Words. By default, it uses only one EEG electrode and four stimuli. The system can also operate using only one stimulus at a lower selection rate, which is useful for individuals with limited or no gaze control; this feature is to be explored in future work. Replacing letters with text or icons representing commands would allow controlling a computer or robot. In this study, FlashType™ was evaluated by three individuals performing 10 Mastery tasks.
In-depth experimentation, such as assessing the system with potential end users writing long passages of text, will be done in future work.
arXiv: Computer Vision and Pattern Recognition | 2017
Seyed Sadegh Mohseni Salehi; Deniz Erdogmus; Ali Gholipour
Fully convolutional deep neural networks have shown excellent potential for fast and accurate image segmentation. One of the main challenges in training these networks is data imbalance, which is particularly problematic in medical imaging applications such as lesion segmentation, where the number of lesion voxels is often much lower than the number of non-lesion voxels. Training with unbalanced data can lead to predictions that are severely biased towards high precision but low recall (sensitivity), which is undesirable especially in medical applications, where false negatives are much less tolerable than false positives. Several methods have been proposed to deal with this problem, including balanced sampling, two-step training, sample re-weighting, and similarity loss functions. In this paper, we propose a generalized loss function based on the Tversky index to address the issue of data imbalance and achieve a much better trade-off between precision and recall in training 3D fully convolutional deep neural networks. Experimental results in multiple sclerosis lesion segmentation on magnetic resonance images show improved F2 score, Dice coefficient, and area under the precision-recall curve on test data. Based on these results, we suggest the Tversky loss function as a generalized framework to effectively train deep neural networks.
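The idea behind the Tversky index is that it generalizes the Dice coefficient by weighting false positives and false negatives separately, so the precision/recall trade-off becomes tunable. A minimal soft (differentiable-style) sketch is below; the default weights are illustrative assumptions, not values reported in the abstract.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Soft Tversky loss: 1 - TP / (TP + alpha*FP + beta*FN).
    alpha = beta = 0.5 recovers the Dice loss; beta > alpha penalizes
    false negatives more heavily, trading precision for recall."""
    pred = np.asarray(pred, float).ravel()      # predicted probabilities
    target = np.asarray(target, float).ravel()  # binary ground truth
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1.0 - target))
    fn = np.sum((1.0 - pred) * target)
    return 1.0 - tp / (tp + alpha * fp + beta * fn + eps)
```

In a training framework the same expression would be written with tensor ops so gradients flow through `pred`; numpy is used here only to show the arithmetic.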
ieee signal processing in medicine and biology symposium | 2015
Hooman Nezamfar; Seyed Sadegh Mohseni Salehi; Deniz Erdogmus
Steady-state visual evoked potentials (SSVEPs) are widely exploited in EEG-based BCI systems. Frequency-based and code-based flickering stimuli are the two major methods used to induce SSVEP responses. Considering the tiring effect of flashing icons in the long run, the less noticeable the flashes become, the more tolerable they will be. Based on ratings from users who experienced both code- and frequency-based stimulation, the code-based method is less tiring; hence, we used our SSVEP-based BCI system in the code-based mode for this study. Among the several aspects of stimuli affecting system performance and user experience, we considered color, bit presentation rate, and control bit sequence length as three significant factors. Our main goal is to achieve more pleasant stimuli while maintaining high performance. Although these goals seldom coincide, our findings showed that it is possible to find a near-optimal point of operation. In this study, a battery of calibration sessions was performed with three opponent color pairs (black/white, red/green, and blue/yellow), three bit presentation rates (30, 60, and 110 bps), and three control bit sequence lengths (31, 63, and 127 bits). Our findings suggest a performance increase using opponent color pairs as opposed to black and white, with the red-green pair exceeding the performance of the others. Consistently across all participants, one second of EEG evidence seems adequate to maximize classification accuracy. This translates to the {60 bps, 63 bits} and {110 bps, 127 bits} pairs of bit presentation rate and m-sequence length being the best performers. With a slight decrease in classification performance, half a second of EEG data could be considered when speed is of concern.
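The 31-, 63-, and 127-bit control sequences above are maximal-length (m-)sequences, conventionally generated with a linear-feedback shift register (LFSR). A minimal sketch follows; the tap positions are standard primitive polynomials over GF(2) and are assumptions, not necessarily the exact polynomials used in the study.

```python
def m_sequence(taps, n):
    """Generate one period (2**n - 1 bits) of an m-sequence with a
    Fibonacci LFSR. `taps` are 1-indexed feedback stages corresponding
    to a primitive polynomial; any nonzero seed works."""
    state = [1] * n
    seq = []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])            # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]           # XOR of the tapped stages
        state = [fb] + state[:-1]        # shift in the feedback bit
    return seq

# Primitive-polynomial taps for the three sequence lengths in the study:
# 31 bits (n=5), 63 bits (n=6), 127 bits (n=7).
TAPS = {5: (5, 2), 6: (6, 1), 7: (7, 1)}
```

A defining property of m-sequences, useful as a sanity check, is that each period contains exactly 2**(n-1) ones.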
Brain Sciences | 2018
Hooman Nezamfar; Seyed Sadegh Mohseni Salehi; Matt Higger; Deniz Erdogmus
Even with state-of-the-art techniques, there are individuals whose paralysis prevents them from communicating with others. Brain-computer interfaces (BCIs) aim to utilize brain waves to construct a voice for those whose needs remain unmet. In this paper, we compare the efficacy of a BCI input signal, code-VEP via electroencephalography, against eye gaze tracking, among the most popular modalities used. These results, on healthy individuals without paralysis, suggest that while eye tracking works well for some, it does not work well, or at all, for others; the latter group includes individuals with corrected vision or those who squint their eyes unintentionally while focusing on a task. It is also evident that the performance of the interface is more sensitive to head/body movements when eye tracking is used as the input modality, compared to using c-VEP. Sensitivity to head/body movement could be better in eye tracking systems that track the head or are mounted on the face and are designed specifically as assistive devices. The sample interface developed for this assessment has the same reaction time when driven with c-VEP or with eye tracking: approximately 0.5-1 second is needed to make a selection among the four options simultaneously presented. Factors such as system reaction time and robustness play a crucial role in participant preferences.
ieee signal processing in medicine and biology symposium | 2015
Yeganeh M. Marghi; Sumientra Rampersad; Seyed Sadegh Mohseni Salehi; Moritz Dannhauer; Dana H. Brooks; Misha Pavel; Deniz Erdogmus
Transcranial current stimulation (tCS) is a non-invasive brain stimulation technique that has shown promise for studying and improving brain function. It can be applied with low-amplitude direct (tDCS), alternating (tACS), or random noise (tRNS) currents. EEG, with its high temporal resolution, portability, and affordability, offers great advantages in investigating the effects of tCS on brain activity. However, concurrent EEG acquisition and tCS stimulation suffer from the drawback that the injected current induces significant artifacts in the simultaneously acquired EEG. Furthermore, stimulus-current-induced artifacts in the measured voltages have powers that are large compared to that of the EEG in the frequency range of interest for EEG analysis. While simple high-pass filtering of the EEG would eliminate artifacts from tDCS, it is not suitable when stimulating with frequencies in the range of significant EEG activity (1-40 Hz). This occurs both in low-frequency tACS/tRNS and in high-pass filtered tRNS, as even in the latter case substantial power remains at EEG frequencies. In such cases, attenuating tRNS artifacts in EEG requires a more comprehensive model, such as the one we present here.
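One generic way to attenuate such artifacts when the injected waveform is known is to regress it, plus a few delayed copies to absorb filtering and propagation effects, out of the EEG by least squares. This is a hypothetical sketch of that general approach, not the specific model proposed in the paper.

```python
import numpy as np

def regress_out_stimulus(eeg, stim, n_lags=5):
    """Attenuate stimulation artifacts by regressing the known injected
    current waveform (and n_lags - 1 delayed copies) out of an EEG
    channel via ordinary least squares."""
    # Lagged design matrix built from the stimulus waveform.
    X = np.column_stack([np.roll(stim, k) for k in range(n_lags)])
    X[:n_lags, :] = 0.0   # zero the samples wrapped around by np.roll
    coef, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return eeg - X @ coef  # residual: EEG with the artifact removed
```

In practice the fit would be done per channel and per segment, since artifact coupling varies with electrode position and impedance.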
international symposium on biomedical imaging | 2018
Seyed Sadegh Mohseni Salehi; Seyed Raein Hashemi; Clemente Velasco-Annis; Abdelhakim Ouaalam; Judy A. Estroff; Deniz Erdogmus; Simon K. Warfield; Ali Gholipour
arXiv: Computer Vision and Pattern Recognition | 2017
Seyed Sadegh Mohseni Salehi; Deniz Erdogmus; Ali Gholipour
international conference of the ieee engineering in medicine and biology society | 2017
Seyed Sadegh Mohseni Salehi; Mohammad Moghadamfalahi; Fernando Quivira; Alexander Piers; Hooman Nezamfar; Deniz Erdogmus