Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Saman Nouranian is active.

Publication


Featured research published by Saman Nouranian.


IEEE Transactions on Medical Imaging | 2015

A Multi-Atlas-Based Segmentation Framework for Prostate Brachytherapy

Saman Nouranian; Seyedeh Sara Mahdavi; Ingrid Spadinger; William J. Morris; Septimiu E. Salcudean; Purang Abolmaesumi

Low-dose-rate brachytherapy is a radiation treatment method for localized prostate cancer. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate in order to devise a plan to deliver sufficient radiation dose to the cancerous tissue. Brachytherapy planning involves delineation of contours in these images, which closely follow the prostate boundary, i.e., clinical target volume. This process is currently performed either manually or semi-automatically, which requires user interaction for landmark initialization. In this paper, we propose a multi-atlas fusion framework to automatically delineate the clinical target volume in ultrasound images. A dataset of a priori segmented ultrasound images, i.e., atlases, is registered to a target image. We introduce a pairwise atlas agreement factor that combines an image-similarity metric and similarity between a priori segmented contours. This factor is used in an atlas selection algorithm to prune the dataset before combining the atlas contours to produce a consensus segmentation. We evaluate the proposed segmentation approach on a set of 280 transrectal prostate volume studies. The proposed method produces segmentation results that are within the range of observer variability when compared to a semi-automatic segmentation technique that is routinely used in our cancer clinic.
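The selection-then-fusion idea can be sketched as follows. This is a minimal illustration, not the paper's formulation: the `agreement` score here is just a product of two made-up similarity values, and a simple majority vote stands in for the consensus step.

```python
import numpy as np

def agreement(image_sim, contour_sim):
    # Toy pairwise atlas agreement factor: the paper combines an
    # image-similarity metric with contour similarity; a plain product
    # of two [0, 1] scores stands in for that combination here.
    return image_sim * contour_sim

def select_and_fuse(atlas_masks, scores, keep=3):
    # Prune to the `keep` highest-agreement atlases, then fuse their
    # binary masks by majority vote into a consensus segmentation.
    order = np.argsort(scores)[::-1][:keep]
    stack = np.stack([atlas_masks[i] for i in order])
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

# Five toy 4x4 atlas masks with hand-picked similarity scores.
rng = np.random.default_rng(0)
masks = [(rng.random((4, 4)) > 0.4).astype(np.uint8) for _ in range(5)]
scores = np.array([agreement(s, c) for s, c in
                   zip([0.9, 0.8, 0.7, 0.2, 0.1],
                       [0.95, 0.85, 0.9, 0.3, 0.2])])
consensus = select_and_fuse(masks, scores, keep=3)
```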


IEEE Transactions on Medical Imaging | 2016

Learning-Based Multi-Label Segmentation of Transrectal Ultrasound Images for Prostate Brachytherapy

Saman Nouranian; Mahdi Ramezani; Ingrid Spadinger; William J. Morris; Septimiu E. Salcudean; Purang Abolmaesumi

Low-dose-rate prostate brachytherapy treatment takes place by implantation of small radioactive seeds in and sometimes adjacent to the prostate gland. A patient-specific target anatomy for seed placement is usually determined by contouring a set of collected transrectal ultrasound images prior to implantation. The standard of care in prostate brachytherapy is to delineate the clinical target anatomy, which closely follows the real prostate boundary. Subsequently, the boundary is dilated with respect to the clinical guidelines to determine a planning target volume. Manual contouring of these two anatomical targets is a tedious task with relatively high observer variability. In this work, we aim to reduce the segmentation variability and planning time by proposing an efficient learning-based multi-label segmentation algorithm. We incorporate a sparse representation approach in our methodology to learn a dictionary of sparse joint elements consisting of images and of clinical and planning target volume segmentations. The generated dictionary inherently captures the relationships among elements and thereby also encodes the institutional clinical guidelines. The proposed multi-label segmentation method is evaluated on a dataset of 590 brachytherapy treatment records using 5-fold cross-validation. We show clinically acceptable instantaneous segmentation results for both target volumes.
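A minimal sketch of the joint-dictionary idea: each atom stacks an image part with two label parts, so a code computed from a test image also reconstructs both target volumes. The random dictionary and the plain least-squares coding are stand-ins for the paper's learned dictionary and sparse coding.

```python
import numpy as np

# Hypothetical joint dictionary: each of the 8 atoms stacks an image
# patch with its CTV and PTV label patches (all made-up random data).
n_pix, n_atoms = 16, 8
rng = np.random.default_rng(1)
D_img = rng.random((n_pix, n_atoms))                 # image part
D_ctv = (rng.random((n_pix, n_atoms)) > 0.5) * 1.0   # CTV label part
D_ptv = (rng.random((n_pix, n_atoms)) > 0.3) * 1.0   # PTV label part

def segment(image_vec):
    # Code the test image against the image part of the dictionary
    # (least squares stands in for sparse coding), then apply the same
    # code to the label parts to predict both target volumes at once.
    code, *_ = np.linalg.lstsq(D_img, image_vec, rcond=None)
    ctv = (D_ctv @ code) > 0.5
    ptv = (D_ptv @ code) > 0.5
    return ctv, ptv

ctv, ptv = segment(rng.random(n_pix))
```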


IEEE Transactions on Biomedical Engineering | 2015

Ultrasound-Based Characterization of Prostate Cancer Using Joint Independent Component Analysis

Farhad Imani; Mahdi Ramezani; Saman Nouranian; Eli Gibson; Amir Khojaste; Mena Gaed; Madeleine Moussa; Jose A. Gomez; Cesare Romagnoli; Michael Leveridge; Silvia D. Chang; Aaron Fenster; D. Robert Siemens; Aaron D. Ward; Parvin Mousavi; Purang Abolmaesumi

Objective: This paper presents the results of a new approach for selection of RF time series features based on joint independent component analysis for in vivo characterization of prostate cancer. Methods: We project three sets of RF time series features extracted from the spectrum, fractal dimension, and the wavelet transform of the ultrasound RF data on a space spanned by five joint independent components. Then, we demonstrate that the obtained mixing coefficients from a group of patients can be used to train a classifier, which can be applied to characterize cancerous regions of a test patient. Results: In a leave-one-patient-out cross-validation, an area under the receiver operating characteristic curve of 0.93 and a classification accuracy of 84% are achieved. Conclusion: Ultrasound RF time series can be used to accurately characterize prostate cancer in vivo, without the need for an exhaustive search in the feature space. Significance: We use joint independent component analysis for systematic fusion of multiple sets of RF time series features, within a machine learning framework, to characterize PCa in an in vivo study.
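The leave-one-patient-out protocol and the AUC metric reported above can be sketched as below. The 1-D features and nearest-class-mean classifier are toy stand-ins for the jICA mixing coefficients and the trained classifier; only the evaluation skeleton reflects the paper.

```python
import numpy as np

def auc(scores, labels):
    # Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def lopo_auc(features, labels, patient_ids, classify):
    # Leave-one-patient-out: train on all other patients, score the
    # held-out patient's samples, and average the per-patient AUCs.
    per_patient = []
    for pid in np.unique(patient_ids):
        test = patient_ids == pid
        scores = classify(features[~test], labels[~test], features[test])
        per_patient.append(auc(scores, labels[test]))
    return float(np.mean(per_patient))

# Toy data: 4 patients x 10 cores; label 1 marks cancerous cores.
rng = np.random.default_rng(2)
patient_ids = np.repeat(np.arange(4), 10)
labels = np.tile(np.array([0] * 5 + [1] * 5), 4)
features = labels * 2.0 + rng.normal(0.0, 0.3, size=40)

def classify(train_x, train_y, test_x):
    # Nearest-class-mean score: higher means closer to the cancer class.
    return -(np.abs(test_x - train_x[train_y == 1].mean())
             - np.abs(test_x - train_x[train_y == 0].mean()))

mean_auc = lopo_auc(features, labels, patient_ids, classify)
```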


IEEE Transactions on Medical Imaging | 2015

Statistical Biomechanical Surface Registration: Application to MR-TRUS Fusion for Prostate Interventions

Siavash Khallaghi; C. Antonio Sánchez; Abtin Rasoulian; Saman Nouranian; Cesare Romagnoli; Hamidreza Abdi; Silvia D. Chang; Peter C. Black; Larry Goldenberg; William J. Morris; Ingrid Spadinger; Aaron Fenster; Aaron D. Ward; Sidney S. Fels; Purang Abolmaesumi

A common challenge when performing surface-based registration of images is ensuring that the surfaces accurately represent consistent anatomical boundaries. Image segmentation may be difficult in some regions due to either poor contrast, low slice resolution, or tissue ambiguities. To address this, we present a novel non-rigid surface registration method designed to register two partial surfaces, capable of ignoring regions where the anatomical boundary is unclear. Our probabilistic approach incorporates prior geometric information in the form of a statistical shape model (SSM), and physical knowledge in the form of a finite element model (FEM). We validate results in the context of prostate interventions by registering pre-operative magnetic resonance imaging (MRI) to 3D transrectal ultrasound (TRUS). We show that both the geometric and physical priors significantly decrease net target registration error (TRE), leading to TREs of 2.35 ± 0.81 mm and 2.81 ± 0.66 mm when applied to full and partial surfaces, respectively. We investigate robustness in response to errors in segmentation, varying levels of missing data, and adjusting the tunable parameters. Results demonstrate that the proposed surface registration method is an efficient, robust, and effective solution for fusing data from multiple modalities.
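Target registration error, the metric quoted above, is simply the mean Euclidean distance between corresponding landmarks after registration; a minimal sketch with made-up landmark pairs:

```python
import numpy as np

def target_registration_error(moved_pts, fixed_pts):
    # Mean Euclidean distance (mm) between registered landmark pairs.
    return float(np.linalg.norm(moved_pts - fixed_pts, axis=1).mean())

# Two toy landmark pairs, displaced by 3 mm and 4 mm respectively.
fixed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
moved = fixed + np.array([[0.0, 3.0, 0.0], [0.0, 0.0, 4.0]])
tre = target_registration_error(moved, fixed)   # (3 + 4) / 2 = 3.5 mm
```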


medical image computing and computer assisted intervention | 2013

An Automatic Multi-atlas Segmentation of the Prostate in Transrectal Ultrasound Images Using Pairwise Atlas Shape Similarity

Saman Nouranian; Seyedeh Sara Mahdavi; Ingrid Spadinger; William J. Morris; Septimiu E. Salcudean; Purang Abolmaesumi

Delineation of the prostate from transrectal ultrasound images is a necessary step in several computer-assisted clinical interventions, such as low-dose-rate brachytherapy. Current approaches to segmentation require user intervention and are therefore subject to user error. A fully automatic segmentation is desirable for improved consistency and speed. In this paper, we propose a multi-atlas fusion framework to automatically segment prostate transrectal ultrasound images. The framework initially registers a dataset of a priori segmented ultrasound images to a target image. Subsequently, it uses the pairwise similarity of registered prostate shapes, which is independent of the image-similarity metric optimized during the registration process, to prune the dataset prior to the fusion and consensus segmentation step. A leave-one-out cross-validation of the proposed framework on a dataset of 50 transrectal ultrasound volumes obtained from patients undergoing brachytherapy treatment shows that the proposed framework is clinically robust, accurate and reproducible.
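The shape-based pruning step can be sketched as follows, with Dice overlap used as a stand-in for the pairwise shape-similarity measure and a hypothetical `keep_frac` cutoff; both are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def dice(a, b):
    # Dice overlap between two binary masks.
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def prune_by_shape(shapes, keep_frac=0.6):
    # Rank each registered atlas shape by its mean pairwise Dice with
    # the others (independent of the image metric used for registration)
    # and keep only the top fraction for the fusion step.
    n = len(shapes)
    mean_dice = np.array([np.mean([dice(shapes[i], shapes[j])
                                   for j in range(n) if j != i])
                          for i in range(n)])
    keep = max(1, int(round(keep_frac * n)))
    return set(np.argsort(mean_dice)[::-1][:keep].tolist())

# Four near-identical toy shapes plus one obvious outlier (index 4).
base = np.zeros((6, 6), dtype=np.uint8)
base[1:5, 1:5] = 1
outlier = np.zeros((6, 6), dtype=np.uint8)
outlier[0, 5] = 1
shapes = [base, base.copy(), base.copy(), base.copy(), outlier]
kept = prune_by_shape(shapes)
```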


medical image computing and computer assisted intervention | 2017

Clinical Target-Volume Delineation in Prostate Brachytherapy Using Residual Neural Networks

Saman Nouranian; Seyedeh Sara Mahdavi; Ingrid Spadinger; William J. Morris; Septimiu E. Salcudean; Parvin Mousavi; Purang Abolmaesumi

Low dose-rate prostate brachytherapy is commonly used to treat early stage prostate cancer. This intervention involves implanting radioactive seeds inside a volume containing the prostate. Planning the intervention requires obtaining a series of ultrasound images from the prostate. This is followed by delineation of a clinical target volume, which mostly traces the prostate boundary in the ultrasound data, but can be modified based on institution-specific clinical guidelines. Here, we aim to automate the delineation of clinical target volume by using a new deep learning network based on residual neural nets and dilated convolution at deeper layers. In addition, we propose to include an exponential weight map in the optimization to improve local prediction. We train the network on 4,284 expert-labeled transrectal ultrasound images and test it on an independent set of 1,081 ultrasound images. With respect to the gold-standard delineation, we achieve a mean Dice similarity coefficient of 94%, a mean surface distance error of 1.05 mm and a mean Hausdorff distance error of 3.0 mm. The obtained results are statistically significantly better than two previous state-of-the-art techniques.
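Of the three reported metrics, the Hausdorff distance is the least self-explanatory: it is the worst-case nearest-neighbour distance between the two boundaries, in either direction. A minimal sketch on toy 2-D point sets:

```python
import numpy as np

def hausdorff(a_pts, b_pts):
    # Symmetric Hausdorff distance between two contour point sets:
    # for each point, find its nearest neighbour in the other set,
    # then take the worst such distance over both directions.
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [3.0, 0.0]])
hd = hausdorff(a, b)   # point (3,0) is 2.0 from its nearest a-point
```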


IEEE Transactions on Medical Imaging | 2017

Simultaneous Analysis of 2D Echo Views for Left Atrial Segmentation and Disease Detection

Gregory Allan; Saman Nouranian; Teresa Tsang; Alexander Seitel; Maryam S. Mirian; John Jue; Dale Hawley; Sarah Fleming; Kenneth Gin; Jody Swift; Robert Rohling; Purang Abolmaesumi

We propose a joint information approach for automatic analysis of 2D echocardiography (echo) data. The approach combines a priori images, their segmentations and patient diagnostic information within a unified framework to determine various clinical parameters, such as cardiac chamber volumes, and cardiac disease labels. The main idea behind the approach is to employ joint Independent Component Analysis of both echo image intensity information and corresponding segmentation labels to generate models that jointly describe the image and label space of echo patients on multiple apical views, instead of independently. These models are then both used for segmentation and volume estimation of cardiac chambers such as the left atrium and for detecting pathological abnormalities such as mitral regurgitation. We validate the approach on a large cohort of echoes obtained from 6,993 studies. We report performance of the proposed approach in estimation of the left-atrium volume and detection of mitral-regurgitation severity. A correlation coefficient of 0.87 was achieved for volume estimation of the left atrium when compared to the clinical report. Moreover, we classified patients that suffer from moderate or severe mitral regurgitation with an average accuracy of 82%.


IEEE Transactions on Medical Imaging | 2017

Correction to “Automatic Quality Assessment of Echocardiograms Using Convolutional Neural Networks: Feasibility on the Apical Four-Chamber View”

Amir H. Abdi; Christina Luong; Teresa Tsang; Gregory Allan; Saman Nouranian; John Jue; Dale Hawley; Sarah Fleming; Ken Gin; Jody Swift; Robert Rohling; Purang Abolmaesumi

Echocardiography (echo) is a skilled technical procedure that depends on the experience of the operator. The aim of this paper is to reduce user variability in data acquisition by automatically computing a score of echo quality for operator feedback. To do this, a deep convolutional neural network model, trained on a large set of samples, was developed for scoring apical four-chamber (A4C) echo. In this paper, 6,916 end-systolic echo images were manually studied by an expert cardiologist and were assigned a score between 0 (not acceptable) and 5 (excellent). The images were divided into two independent training-validation and test sets. The network architecture and its parameters were selected by stochastic particle swarm optimization on the training-validation data. The mean absolute error between the scores from the final trained model and the expert’s manual scores was 0.71 ± 0.58. The reported error was comparable to the measured intra-rater reliability. The learned features of the network were visually interpretable and could be mapped to the anatomy of the heart in the A4C echo, giving confidence in the training result. The computation time for the proposed network architecture, running on a graphics processing unit, was less than 10 ms per frame, sufficient for real-time deployment. The proposed approach has the potential to facilitate the widespread use of echo at the point-of-care and enable early and timely diagnosis and treatment. Finally, the approach did not use any specific assumptions about the A4C echo, so it could be generalizable to other standard echo views.


medical image computing and computer assisted intervention | 2015

A 2D-3D Registration Framework for Freehand TRUS-Guided Prostate Biopsy

Siavash Khallaghi; C. Antonio Sánchez; Saman Nouranian; Samira Sojoudi; Silvia D. Chang; Hamidreza Abdi; Lindsay Machan; Alison C. Harris; Peter A. Black; Martin Gleave; Larry Goldenberg; Sidney S. Fels; Purang Abolmaesumi

We present a 2D to 3D registration framework that compensates for prostate motion and deformations during freehand prostate biopsies. It has two major components: 1) a trajectory-based rigid registration to account for gross motions of the prostate; and 2) a non-rigid registration constrained by a finite element model (FEM) to adjust for residual motion and deformations. For the rigid alignment, we constrain the ultrasound probe tip in the live 2D imaging plane to the tracked trajectory from the pre-procedure 3D ultrasound volume. This ensures the rectal wall approximately coincides between the images. We then apply a FEM-based technique to deform the volume based on image intensities. We validate the proposed framework on 10 prostate biopsy patients, demonstrating a mean target registration error (TRE) of 4.63 mm and 3.15 mm for rigid and FEM-based components, respectively.
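For the rigid component, a generic point-based rigid alignment can be sketched with the standard Kabsch/Procrustes solution. This is only a stand-in: the paper's rigid stage is constrained by the tracked probe trajectory rather than by matched point pairs.

```python
import numpy as np

def kabsch(P, Q):
    # Least-squares rigid transform (R, t) mapping point set P onto Q,
    # i.e. minimizing sum ||R p_i + t - q_i||^2 (classic Kabsch method).
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# Recover a known 90-degree rotation about z plus a translation.
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)
```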


information processing in computer-assisted interventions | 2014

A System for Ultrasound-Guided Spinal Injections: A Feasibility Study

Abtin Rasoulian; Jill Osborn; Samira Sojoudi; Saman Nouranian; Victoria A. Lessoway; Robert Rohling; Purang Abolmaesumi

Facet joint injections of analgesic agents are widely used to treat patients with lower back pain, a growing problem in the adult population. The current standard-of-care for guiding the injection is fluoroscopy, but it has significant drawbacks, including a substantial dose of ionizing radiation. As an alternative, several ultrasound-guidance systems have recently been proposed, but they have not become the standard-of-care, mainly because of the difficulty of image interpretation for anesthesiologists unfamiliar with complex spinal sonography. A solution is to register a statistical spine model, learned from pre-operative images such as MRI or CT across a population, to the ultrasound images and display it as an overlay. To this end, we introduce an ultrasound-based navigation system whose workflow is divided into two steps. First, prior to the injection, tracked freehand ultrasound images are acquired from the facet joint and its surrounding vertebrae, and the statistical model is instantiated and registered to those images. Next, the real-time ultrasound images are augmented with the registered model to guide the injection. Feasibility experiments are performed on ultrasound data obtained from nine patients whose prior CT images serve as the gold standard for the statistical model. We present three scanning protocols for ultrasound acquisition and quantify the error of our model.
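The "statistical model is instantiated" step rests on a PCA-style statistical shape model: a mean shape plus principal modes of variation. A minimal sketch with toy 1-D shapes (an illustration of the general technique, not the paper's spine model):

```python
import numpy as np

def build_ssm(training_shapes):
    # PCA statistical shape model: mean shape plus orthonormal modes of
    # variation learned from a population of pre-aligned training shapes.
    X = np.stack([s.ravel() for s in training_shapes])
    mean = X.mean(axis=0)
    _, svals, modes = np.linalg.svd(X - mean, full_matrices=False)
    return mean, modes, svals / np.sqrt(len(training_shapes) - 1)

def instantiate(mean, modes, coeffs):
    # Generate a shape instance from mode weights; registration would
    # optimize these weights against the acquired ultrasound images.
    return mean + coeffs @ modes[: len(coeffs)]

# Toy population: 4-point "shapes" varying along a single direction.
rng = np.random.default_rng(3)
direction = np.array([1.0, -1.0, 1.0, -1.0])
shapes = [w * direction for w in rng.normal(0.0, 1.0, size=10)]
mean, modes, stddevs = build_ssm(shapes)
zero_instance = instantiate(mean, modes, np.zeros(1))  # mean shape
```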

Collaboration


Dive into Saman Nouranian's collaborations.

Top Co-Authors

Purang Abolmaesumi

University of British Columbia

Septimiu E. Salcudean

University of British Columbia

Mahdi Ramezani

University of British Columbia

Robert Rohling

University of British Columbia

Seyedeh Sara Mahdavi

University of British Columbia

Samira Sojoudi

University of British Columbia

Abtin Rasoulian

University of British Columbia

Dale Hawley

Vancouver General Hospital
