Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Siamak Yousefi is active.

Publication


Featured research published by Siamak Yousefi.


Investigative Ophthalmology & Visual Science | 2016

Optical Coherence Tomography Angiography Vessel Density in Healthy, Glaucoma Suspect, and Glaucoma Eyes

Adeleh Yarmohammadi; Linda M. Zangwill; Alberto Diniz-Filho; Min Hee Suh; Patricia Isabel C. Manalastas; Naeem Fatehee; Siamak Yousefi; Akram Belghith; Luke J. Saunders; Felipe A. Medeiros; David Huang; Robert N. Weinreb

Purpose The purpose of this study was to compare retinal nerve fiber layer (RNFL) thickness and optical coherence tomography angiography (OCT-A) retinal vasculature measurements in healthy, glaucoma suspect, and glaucoma patients. Methods Two hundred sixty-one eyes of 164 healthy, glaucoma suspect, and open-angle glaucoma (OAG) participants from the Diagnostic Innovations in Glaucoma Study with good quality OCT-A images were included. Retinal vasculature information was summarized as a vessel density map and as vessel density (%), which is the proportion of flowing vessel area over the total area evaluated. Two vessel density measurements extracted from the RNFL were analyzed: (1) circumpapillary vessel density (cpVD) measured in a 750-μm-wide elliptical annulus around the disc and (2) whole image vessel density (wiVD) measured over the entire image. Areas under the receiver operating characteristic curves (AUROC) were used to evaluate diagnostic accuracy. Results Age-adjusted mean vessel density was significantly lower in OAG eyes compared with glaucoma suspects and healthy eyes (cpVD: 55.1 ± 7%, 60.3 ± 5%, and 64.2 ± 3%, respectively; wiVD: 46.2 ± 6%, 51.3 ± 5%, and 56.6 ± 3%, respectively; both P < 0.001). For differentiating between glaucoma and healthy eyes, the age-adjusted AUROC was highest for wiVD (0.94), followed by RNFL thickness (0.92) and cpVD (0.83). The AUROCs for differentiating between healthy and glaucoma suspect eyes were highest for wiVD (0.70), followed by cpVD (0.65) and RNFL thickness (0.65). Conclusions Optical coherence tomography angiography vessel density had similar diagnostic accuracy to RNFL thickness measurements for differentiating between healthy and glaucoma eyes. These results suggest that OCT-A measurements reflect damage to tissues relevant to the pathophysiology of OAG.
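
The vessel density summary used above is conceptually simple: the percentage of the evaluated area occupied by flowing vessels. A minimal sketch, assuming a pre-binarized OCT-A flow mask and a region mask; the names and preprocessing are hypothetical and this is not the authors' pipeline:

```python
import numpy as np

def vessel_density(flow_mask: np.ndarray, region_mask: np.ndarray) -> float:
    """Percentage of pixels flagged as flowing vessel within a region.

    flow_mask   -- boolean array, True where OCT-A detected flow
    region_mask -- boolean array, True inside the evaluated region
                   (e.g., a 750-um annulus for cpVD, or all True for wiVD)
    """
    region_pixels = region_mask.sum()
    vessel_pixels = np.logical_and(flow_mask, region_mask).sum()
    return 100.0 * vessel_pixels / region_pixels
```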


IEEE Transactions on Biomedical Engineering | 2014

Glaucoma Progression Detection Using Structural Retinal Nerve Fiber Layer Measurements and Functional Visual Field Points

Siamak Yousefi; Michael H. Goldbaum; Madhusudhanan Balasubramanian; Tzyy-Ping Jung; Robert N. Weinreb; Felipe A. Medeiros; Linda M. Zangwill; Jeffrey M. Liebmann; Christopher A. Girkin; Christopher Bowd

Machine learning classifiers were employed to detect glaucomatous progression using longitudinal series of structural data extracted from retinal nerve fiber layer thickness measurements and functional visual field data recorded from standard automated perimetry tests. Using the collected data, a longitudinal feature vector was created for each patient's eye by computing the L1-norm difference vector between the data at baseline and at each follow-up visit. The longitudinal features from each patient's eye were then fed to the machine learning classifier to classify each eye as stable or progressed over time. This study was performed using several machine learning classifiers spanning different families, including Bayesian, Lazy, Meta, and Tree classifiers. Combinations of structural and functional features were selected and ranked to determine the relative effectiveness of each feature. Finally, the outcomes of the classifiers were assessed with several performance metrics, and the effectiveness of the structural and functional features was analyzed.
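
As one plausible reading of the "L1-norm difference vector" feature described above, here is a minimal sketch that turns a baseline visit and its follow-ups into one longitudinal feature per visit; the variable names are hypothetical and the authors' exact construction may differ:

```python
import numpy as np

def longitudinal_features(baseline: np.ndarray, followups: list[np.ndarray]) -> np.ndarray:
    """For each follow-up visit, compute the element-wise difference from
    baseline and summarize it by its L1 norm, yielding one feature per visit.

    baseline  -- measurement vector at the first visit (e.g., RNFL sectors
                 concatenated with visual field test points)
    followups -- measurement vectors at subsequent visits
    """
    return np.array([np.linalg.norm(visit - baseline, ord=1) for visit in followups])
```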


international conference on consumer electronics | 2011

A new auto-focus sharpness function for digital and smart-phone cameras

Siamak Yousefi; Mohammad T. Rahman; Nasser Kehtarnavaz; Mark Gamadia

Passive auto-focusing is a key feature in consumer-level digital and smart-phone cameras and is used to capture focused images without any user intervention. This paper introduces a new sharpness function for achieving passive auto-focusing, where image sharpness information is used to bring the image into focus. A comparison is made between the introduced sharpness function and commonly used sharpness functions in terms of accuracy and computation time. The results indicate that the introduced sharpness function provides comparable accuracy while demanding less computation time.
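
The paper's proposed sharpness function is not reproduced here, but a sketch of a commonly used gradient-energy focus measure, of the kind it is compared against, illustrates how passive auto-focus scores candidate lens positions (assumes a grayscale image as a NumPy array):

```python
import numpy as np

def gradient_sharpness(gray: np.ndarray) -> float:
    """Common gradient-energy focus measure: sum of squared finite-difference
    gradients. A passive auto-focus loop steps the lens through candidate
    positions and keeps the one that maximizes this value."""
    g = gray.astype(float)
    gx = np.diff(g, axis=1)  # horizontal intensity differences
    gy = np.diff(g, axis=0)  # vertical intensity differences
    return float((gx ** 2).sum() + (gy ** 2).sum())
```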


IEEE Transactions on Biomedical Engineering | 2014

Learning From Data: Recognizing Glaucomatous Defect Patterns and Detecting Progression From Visual Field Measurements

Siamak Yousefi; Michael H. Goldbaum; Madhusudhanan Balasubramanian; Felipe A. Medeiros; Linda M. Zangwill; Jeffrey M. Liebmann; Christopher A. Girkin; Robert N. Weinreb; Christopher Bowd

A hierarchical approach to learning from visual field data was adopted to identify glaucomatous visual field defect patterns and to detect glaucomatous progression. The analysis pipeline included three stages: clustering, glaucoma boundary limit detection, and glaucoma progression detection. First, cross-sectional visual field tests collected from each subject were clustered using a mixture of Gaussians, with model parameters estimated by expectation maximization. Each visual field cluster was then decomposed into several axes, and the glaucomatous visual field defect patterns along each axis were identified. To derive a definition of progression, the longitudinal visual fields of stable glaucoma eyes were projected onto the abnormal cluster axes and the slope was estimated by linear regression (LR) to determine the confidence limit of each axis. For glaucoma progression detection, the longitudinal visual fields of each eye were projected onto the abnormal cluster axes and the slope was estimated by LR. Progression was assigned if the progression rate was greater than the boundary limit of the stable eyes; otherwise, stability was assumed. The proposed method was compared to a recently developed progression detection method and to clinically available glaucoma progression detection software. The clinical accuracy of the proposed pipeline was as good as or better than the currently available methods.
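
A minimal sketch of the projection-and-slope step described above: project each longitudinal visual field onto a defect-pattern axis, fit the trend over time by linear regression, and compare the rate against the confidence limit derived from stable eyes. The names and the exact projection are assumptions, not the authors' code:

```python
import numpy as np

def progression_rate(visits: np.ndarray, times: np.ndarray, axis: np.ndarray) -> float:
    """Project each visual field (rows of `visits`) onto a defect-pattern
    axis and fit the slope of the projected score over time by least squares."""
    scores = visits @ axis                   # one scalar score per visit
    slope, _ = np.polyfit(times, scores, 1)  # linear trend of the scores
    return float(slope)

def is_progressing(slope: float, stable_limit: float) -> bool:
    """Flag progression when the fitted rate exceeds the confidence limit
    established from a reference set of stable glaucoma eyes."""
    return slope > stable_limit
```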


IEEE Transactions on Biomedical Engineering | 2012

Improved Labeling of Subcortical Brain Structures in Atlas-Based Segmentation of Magnetic Resonance Images

Siamak Yousefi; Nasser Kehtarnavaz; Ali Gholipour

Precise labeling of subcortical structures plays a key role in functional neurosurgical applications. In atlas-based segmentation, labels from an atlas image are propagated to a patient image, so the result depends heavily on the registration framework used to guide the label propagation. This paper focuses on atlas-based segmentation of subcortical brain structures and the effect of different registration methods on the generated subcortical labels. A single-step and three two-step registration methods from the literature, based on affine and deformable registration algorithms in the ANTS and FSL packages, are considered. Experiments are carried out with two atlas databases, IBSR and LPBA40. Six segmentation metrics, consisting of Dice overlap, relative volume error, false positive, false negative, surface distance, and spatial extent, are used for evaluation. Segmentation results are reported individually and as averages for nine subcortical brain structures, and the results are ranked based on two statistical tests. In general, among the four registration strategies investigated in this paper, a two-step registration consisting of an initial affine registration followed by a deformable registration applied to subcortical structures provides superior segmentation outcomes. This method can be used to provide improved labeling of subcortical brain structures in MRIs for different applications.
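
Of the six metrics listed, Dice overlap is the most widely used; a minimal, illustrative sketch of how it scores a propagated atlas label against a ground-truth mask:

```python
import numpy as np

def dice_overlap(label_a: np.ndarray, label_b: np.ndarray) -> float:
    """Dice coefficient between two binary label masks: 2|A∩B| / (|A| + |B|).
    Here, one mask would be the atlas label propagated by registration and
    the other the manual ground-truth segmentation of the same structure."""
    a, b = label_a.astype(bool), label_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```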


Image and Vision Computing | 2010

Symmetric deformable image registration via optimization of information theoretic measures

Ali Gholipour; Nasser Kehtarnavaz; Siamak Yousefi; Kaundinya S. Gopinath; Richard W. Briggs

The use of information theoretic measures (ITMs) has been steadily growing in image processing, bioinformatics, and pattern classification. Although ITMs have been extensively used in rigid and affine registration of multi-modal images, their computation and accuracy are critical issues in deformable image registration. Three important aspects of using ITMs in multi-modal deformable image registration are considered in this paper: computation, inverse consistency, and accuracy. A symmetric formulation of the deformable image registration problem, through the computation of derivatives and resampling on both source and target images, and sufficient criteria for inverse consistency are presented for the purpose of achieving more accurate registration. Techniques for estimating ITMs are examined, and analytical derivatives are derived for carrying out the optimization in a computationally efficient manner. ITMs based on Shannon's and Rényi's definitions are considered and compared. Evaluation results obtained via registration functions and via controlled deformable registration of multi-modal digital brain phantom and in vivo magnetic resonance brain images show the improved accuracy and efficiency of the developed formulation. The results also indicate that, despite recent studies favoring ITMs based on Rényi's definitions, these measures do not provide improvements in this type of deformable registration compared to ITMs based on Shannon's definitions.
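
As background for the ITMs discussed above, here is a minimal sketch of Shannon mutual information estimated from a joint intensity histogram, the classic similarity measure for multi-modal registration. The bin count and estimator details are assumptions; the paper's symmetric formulation and analytical derivatives are not reproduced here:

```python
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Shannon mutual information (in nats) between two images of the same
    size, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of image B
    nz = p_ab > 0                             # avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```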


PLOS ONE | 2014

Glaucomatous patterns in Frequency Doubling Technology (FDT) perimetry data identified by unsupervised machine learning classifiers.

Christopher Bowd; Robert N. Weinreb; Madhusudhanan Balasubramanian; Intae Lee; Gil-Jin Jang; Siamak Yousefi; Linda M. Zangwill; Felipe A. Medeiros; Christopher A. Girkin; Jeffrey M. Liebmann; Michael H. Goldbaum

Purpose The variational Bayesian independent component analysis-mixture model (VIM), an unsupervised machine-learning classifier, was used to automatically separate Matrix Frequency Doubling Technology (FDT) perimetry data into clusters of healthy and glaucomatous eyes, and to identify axes representing statistically independent patterns of defect in the glaucoma clusters. Methods FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal FDT results from the UCSD-based Diagnostic Innovations in Glaucoma Study (DIGS) and African Descent and Glaucoma Evaluation Study (ADAGES). For all eyes, VIM input was 52 threshold test points from the 24-2 test pattern, plus age. Results FDT mean deviation was −1.00 dB (S.D. = 2.80 dB) and −5.57 dB (S.D. = 5.09 dB) in FDT-normal eyes and FDT-abnormal eyes, respectively (p<0.001). VIM identified meaningful clusters of FDT data and positioned a set of statistically independent axes through the mean of each cluster. The optimal VIM model separated the FDT fields into 3 clusters. Cluster N contained primarily normal fields (1109/1190, specificity 93.1%), and clusters G1 and G2 combined contained primarily abnormal fields (651/786, sensitivity 82.8%). For clusters G1 and G2, the optimal numbers of axes were 2 and 5, respectively. Patterns automatically generated along axes within the glaucoma clusters were similar to those known to be indicative of glaucoma. Fields located farther from the normal mean on each glaucoma axis showed increasing field defect severity. Conclusions VIM successfully separated FDT fields from healthy and glaucoma eyes without a priori information about class membership, and identified familiar glaucomatous patterns of loss.


Journal of Visual Communication and Image Representation | 2013

Evaluating similarity measures for brain image registration

Qolamreza R. Razlighi; Nasser Kehtarnavaz; Siamak Yousefi

Evaluation of similarity measures for image registration is a challenging problem due to its complex interaction with the underlying optimization, regularization, image type and modality. We propose a single performance metric, named robustness, as part of a new evaluation method which quantifies the effectiveness of similarity measures for brain image registration while eliminating the effects of the other parts of the registration process. We show empirically that similarity measures with higher robustness are more effective in registering degraded images and are also more successful in performing intermodal image registration. Further, we introduce a new similarity measure, called normalized spatial mutual information, for 3D brain image registration whose robustness is shown to be much higher than the existing ones. Consequently, it tolerates greater image degradation and provides more consistent outcomes for intermodal brain image registration.


international conference on image processing | 2010

Facial expression recognition based on diffeomorphic matching

Siamak Yousefi; Minh Phuoc Nguyen; Nasser Kehtarnavaz; Yan Cao

This paper presents a new framework for facial expression recognition based on diffeomorphic matching. First, landmarks are selected manually or automatically. All of the landmarks from different images are registered to a reference landmark set using a rigid registration algorithm. The pair-wise geodesic distances between all sets of landmarks are then computed using diffeomorphic matching. Finally, a K-Nearest Neighbor (KNN) classifier is used to classify a query image based on the geodesic distances. Both the classification and classical MultiDimensional Scaling results show that geodesic distance is more effective than Euclidean distance at capturing face shape variation.
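
A minimal sketch of the final classification step, assuming the pairwise geodesic distances from diffeomorphic matching have already been computed; the helper name and simple majority vote are assumptions:

```python
import numpy as np

def knn_predict(dist_to_train: np.ndarray, train_labels: np.ndarray, k: int = 3) -> int:
    """Classify a query face from its precomputed distances (e.g., diffeomorphic
    geodesic distances) to the training landmark sets: take the majority
    expression label among the k nearest training examples."""
    nearest = np.argsort(dist_to_train)[:k]            # indices of k closest
    labels, counts = np.unique(train_labels[nearest], return_counts=True)
    return int(labels[np.argmax(counts)])              # majority vote
```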


Translational Vision Science & Technology | 2016

Unsupervised Gaussian Mixture-Model With Expectation Maximization for Detecting Glaucomatous Progression in Standard Automated Perimetry Visual Fields

Siamak Yousefi; Madhusudhanan Balasubramanian; Michael H. Goldbaum; Felipe A. Medeiros; Linda M. Zangwill; Robert N. Weinreb; Jeffrey M. Liebmann; Christopher A. Girkin; Christopher Bowd

Purpose To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-model (VIM) approaches for detecting glaucomatous progression along visual field (VF) defect patterns (GEM progression-of-patterns (GEM-POP) and VIM-POP), and to compare GEM-POP and VIM-POP with other methods. Methods GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD) and visual field index (VFI). Results Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. Conclusions GEM-POP was significantly more sensitive to PGON than PoPLR and LR of MD and VFI in our sample, while providing localized progression information. Translational Relevance Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning.
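
For orientation, a minimal sketch of the unsupervised GEM stage: fitting a Gaussian mixture by expectation maximization, here via scikit-learn's GaussianMixture. The synthetic data, component count, and settings are assumptions, not the authors' preprocessing or model selection:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative only: `visual_fields` stands in for an (n_eyes, 52) array of
# 24-2 visual field threshold values; real VF data would be used instead.
rng = np.random.default_rng(0)
visual_fields = rng.normal(loc=-2.0, scale=3.0, size=(200, 52))

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
cluster = gmm.fit_predict(visual_fields)  # EM fit, then hard cluster labels
print(np.bincount(cluster))               # number of eyes per cluster
```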

Collaboration


Dive into Siamak Yousefi's collaborations.

Top Co-Authors

Nasser Kehtarnavaz

University of Texas at Dallas


Akram Belghith

University of California


Christopher A. Girkin

University of Alabama at Birmingham


Jeffrey M. Liebmann

Columbia University Medical Center
