Amir H. Abdi
University of British Columbia
Publications
Featured research published by Amir H. Abdi.
Medical Image Computing and Computer Assisted Intervention | 2017
Amir H. Abdi; Christina Luong; Teresa Tsang; John Jue; Ken Gin; Darwin Yeung; Dale Hawley; Robert Rohling; Purang Abolmaesumi
Echocardiography (echo) is a clinical imaging technique that is highly dependent on operator experience. We aim to reduce operator variability in data acquisition by automatically computing an echo quality score for real-time feedback. We achieve this with a deep neural network model that uses convolutional layers to extract hierarchical features from the input echo cine and recurrent layers to leverage the sequential information in the cine loop. Using data from 509 separate patient studies, containing 2,450 echo cines across five standard echo imaging planes, we achieved a mean quality score accuracy of 85% compared to the gold-standard score assigned by experienced echosonographers. The proposed approach calculates the quality of a given 20-frame echo sequence within 10 ms, sufficient for real-time deployment.
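A minimal PyTorch sketch of the kind of architecture described: convolutional layers extract per-frame features from an echo cine and an LSTM aggregates them over time into a single quality score. All layer sizes, names, and the 128x128 input are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class EchoQualityNet(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=128):
        super().__init__()
        self.features = nn.Sequential(            # per-frame feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)      # scalar quality score

    def forward(self, cine):                      # cine: (batch, frames, 1, H, W)
        b, t = cine.shape[:2]
        f = self.features(cine.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(f)                   # last hidden state summarizes the cine
        return torch.sigmoid(self.head(h[-1]))    # score in [0, 1]

# Example: score one 20-frame cine of 128x128 frames.
model = EchoQualityNet()
print(model(torch.randn(1, 20, 1, 128, 128)).item())
```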
IEEE Transactions on Medical Imaging | 2017
Amir H. Abdi; Christina Luong; Teresa Tsang; Gregory Allan; Saman Nouranian; John Jue; Dale Hawley; Sarah Fleming; Ken Gin; Jody Swift; Robert Rohling; Purang Abolmaesumi
Echocardiography (echo) is a skilled technical procedure that depends on the experience of the operator. The aim of this paper is to reduce user variability in data acquisition by automatically computing a score of echo quality for operator feedback. To do this, a deep convolutional neural network model, trained on a large set of samples, was developed for scoring apical four-chamber (A4C) echo. In this paper, 6,916 end-systolic echo images were manually studied by an expert cardiologist and were assigned a score between 0 (not acceptable) and 5 (excellent). The images were divided into two independent training-validation and test sets. The network architecture and its parameters were selected via stochastic particle swarm optimization on the training-validation data. The mean absolute error between the scores from the final trained model and the expert's manual scores was 0.71 ± 0.58. The reported error was comparable to the measured intra-rater reliability. The learned features of the network were visually interpretable and could be mapped to the anatomy of the heart in the A4C echo, giving confidence in the training result. The computation time for the proposed network architecture, running on a graphics processing unit, was less than 10 ms per frame, sufficient for real-time deployment. The proposed approach has the potential to facilitate the widespread use of echo at the point-of-care and enable early and timely diagnosis and treatment. Finally, the approach did not use any specific assumptions about the A4C echo, so it could be generalizable to other standard echo views.
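As a rough illustration of the architecture search, here is a toy particle swarm optimization loop of the kind the paper employs, minimizing a stand-in validation_error function over two hypothetical hyperparameters; in the paper, the objective would be the model's error on the training-validation split.

```python
import numpy as np

def validation_error(x):          # placeholder objective (assumption)
    return (x[0] - 32) ** 2 / 100 + (x[1] + 3) ** 2

rng = np.random.default_rng(0)
n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
pos = rng.uniform([8, -6], [128, 0], size=(n, dim))   # e.g. filter count, log10(lr)
vel = np.zeros((n, dim))
pbest = pos.copy()                                    # per-particle best positions
pbest_err = np.array([validation_error(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()              # swarm-wide best

for _ in range(50):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    err = np.array([validation_error(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print("best hyperparameters:", gbest)   # converges toward ~[32, -3]
```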
Proceedings of SPIE | 2017
Amir H. Abdi; Christina Luong; Teresa Tsang; Gregory Allan; Saman Nouranian; John Jue; Dale Hawley; Sarah Fleming; Ken Gin; Jody Swift; Robert Rohling; Purang Abolmaesumi
Echocardiography (echo) is the most common test for diagnosis and management of patients with cardiac conditions. While most medical imaging modalities benefit from a relatively automated procedure, this is not the case for echo, and the quality of the final echo view depends on the competency and experience of the sonographer. It is not uncommon that the sonographer does not have adequate experience to adjust the transducer and acquire a high-quality echo, which may further affect the clinical diagnosis. In this work, we aim to aid the operator during image acquisition by automatically assessing the quality of the echo and generating the Automatic Echo Score (AES). This quality assessment method is based on a deep convolutional neural network, trained in an end-to-end fashion on a large dataset of apical four-chamber (A4C) echo images. For this project, an expert cardiologist reviewed 2,904 A4C images obtained from independent studies and assessed their quality on a 6-level grading scale. The scores assigned by the expert ranged from 0 to 5, and the distribution of scores among the six levels was almost uniform. The network was then trained on 80% of the data (2,345 samples). The average absolute error of the trained model in calculating the AES was 0.8 ± 0.72. The computation time of the GPU implementation of the neural network was estimated at 5 ms per frame, which is sufficient for real-time deployment.
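The reported metric is the mean absolute error (and its spread) between the model's Automatic Echo Score and the expert's 0-5 grade; a minimal sketch with made-up scores:

```python
import numpy as np

expert = np.array([0, 1, 2, 3, 4, 5, 3, 2])                 # cardiologist grades (example)
predicted = np.array([0.4, 1.2, 2.5, 2.6, 4.1, 4.3, 3.5, 1.8])  # hypothetical AES outputs

abs_err = np.abs(predicted - expert)
print(f"AES error: {abs_err.mean():.2f} ± {abs_err.std():.2f}")
```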
Journal of Biomechanics | 2017
Amir H. Abdi; A.G. Hannam; Ian Stavness; Sidney S. Fels
Existing jaw tracking methods can be limited in terms of accuracy or clinical applicability. This article introduces a sphere-based registration method to minimize the fiducial (reference landmark) localization error (FLE) in tracking and coregistration of physical and virtual dental models, to enable an effective clinical analysis of the patient's masticatory function. In this method, spheres (registration fiducials) are placed on corresponding polygonal concavities of the physical and virtual dental models, based on the geometrical principle that establishes a unique spatial position for a sphere inside an infinite trihedron. The experiments in this study used an optical system that tracked active markers attached to the upper and lower dental casts. The accuracy of the tracking workflow was confirmed in vitro by comparing virtually calculated interocclusal regions of close proximity against physical interocclusal impressions. The target registration error of the tracking, estimated with the leave-one-sphere-out method, equalled the sum of the sensor errors; i.e., the FLE was negligible. Moreover, based on a user study, the FLE of the proposed method was confirmed to be 5 and 10 times smaller than the FLE of conventional fiducial selection on the physical and virtual models, respectively. The proposed tracking method is non-invasive and appears to be sufficiently accurate. To conclude, the proposed registration and tracking principles can be extended to track any biomedical or non-biomedical geometry that contains polygonal concavities.
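A sketch of the leave-one-sphere-out error estimate described above: for each fiducial, a rigid (Kabsch/SVD) registration is computed from the remaining fiducials and the error is measured at the left-out one. The point coordinates are synthetic; this is not the paper's implementation.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

rng = np.random.default_rng(1)
virtual = rng.uniform(0, 50, (8, 3))                 # sphere centres on the virtual model
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
R_true *= np.sign(np.linalg.det(R_true))             # ensure a proper rotation
physical = virtual @ R_true.T + [5, -2, 7] + rng.normal(0, 0.05, (8, 3))

tre = []
for i in range(len(virtual)):
    keep = np.arange(len(virtual)) != i              # register without fiducial i
    R, t = rigid_fit(virtual[keep], physical[keep])
    tre.append(np.linalg.norm(R @ virtual[i] + t - physical[i]))
print(f"leave-one-out TRE: {np.mean(tre):.3f} (units of the input)")
```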
DLMIA/ML-CDS@MICCAI | 2017
Fatemeh Taheri Dezaki; Neeraj Dhungel; Amir H. Abdi; Christina Luong; Teresa Tsang; John Jue; Ken Gin; Dale Hawley; Robert Rohling; Purang Abolmaesumi
Characterisation of cardiac cycle phase in echocardiography data is a necessary preprocessing step for developing automated systems that measure various cardiac parameters. Accurate characterisation is challenging, due to differences in the appearance of the cardiac anatomy and the variability of heart rate across individuals. Here, we present a method for automatic recognition of cardiac cycle phase from echocardiograms using a new deep neural network architecture. Specifically, we propose to combine deep residual neural networks (ResNets), which extract hierarchical features from individual echocardiogram frames, with recurrent neural networks (RNNs), which model the temporal dependencies between sequential frames. We demonstrate that this new architecture outperforms a baseline architecture for the automatic characterisation of cardiac cycle phase on large datasets of echocardiograms containing different levels of pathological conditions.
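For illustration, a small PyTorch combination of a residual feature extractor with a GRU that labels every frame of a cine with a cardiac phase (here two hypothetical classes); the block and layer sizes are assumptions, and the paper's ResNet is far deeper.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))       # identity shortcut

class PhaseNet(nn.Module):
    def __init__(self, ch=16, hidden=64, n_phases=2):
        super().__init__()
        self.stem = nn.Conv2d(1, ch, 7, stride=4, padding=3)
        self.res = nn.Sequential(ResBlock(ch), ResBlock(ch),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.GRU(ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_phases)

    def forward(self, cine):                      # (batch, frames, 1, H, W)
        b, t = cine.shape[:2]
        f = self.res(self.stem(cine.flatten(0, 1))).view(b, t, -1)
        out, _ = self.rnn(f)                      # temporal context per frame
        return self.head(out)                     # per-frame phase logits

logits = PhaseNet()(torch.randn(2, 30, 1, 128, 128))
print(logits.shape)                               # torch.Size([2, 30, 2])
```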
computer assisted radiology and surgery | 2018
Amir H. Abdi; A.G. Hannam; Sidney S. Fels
Purpose: Magnetic resonance imaging (MRI) is widely used in the study of maxillofacial structures. While MRI is the modality of choice for soft tissues, it fails to capture hard tissues such as bone and teeth. Virtual dental models, acquired by optical 3D scanners, are becoming more accessible in dental practice and are starting to replace conventional dental impressions. The goal of this research is to fuse high-resolution 3D dental models with MRI to enhance the value of imaging for applications where detailed analysis of maxillofacial structures is needed, such as patient examination, surgical planning, and modeling.
Methods: A subject-specific dental attachment was digitally designed and 3D printed based on the subject's face width and dental anatomy. The attachment contained 19 semi-ellipsoidal concavities in predetermined positions where oil-based ellipsoidal fiducial markers were later placed. The MRI was acquired while the subject bit on the dental attachment. The spatial position of the center of mass of each fiducial in the resultant MR image was calculated by averaging its voxels' spatial coordinates. The rigid transformation to fuse the dental models to MRI was calculated from the least-squares mapping of corresponding fiducials and solved via singular-value decomposition.
Results: The target registration error (TRE) of the proposed fusion process, calculated in a leave-one-fiducial-out fashion, was estimated at 0.49 mm. The results suggest that 6–9 fiducials suffice to achieve a TRE equal to half the MRI voxel size.
Conclusion: Ellipsoidal oil-based fiducials produce distinguishable intensities in MRI and can be used as registration fiducials. The achieved accuracy of the proposed approach is sufficient to combine the merged 3D dental models with the MRI data for finer analysis of the maxillofacial structures where complete geometry models are needed.
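One step of the fusion pipeline lends itself to a short sketch: locating each oil-based fiducial as the intensity-weighted center of mass of its voxels. The volume, threshold, and voxel spacing below are illustrative assumptions; the paper then feeds these centers to the SVD-based least-squares rigid registration.

```python
import numpy as np

def fiducial_centroid(volume, mask, spacing):
    """Intensity-weighted centre of mass (mm) of the voxels flagged by `mask`."""
    idx = np.argwhere(mask).astype(float)                  # voxel index triples
    weights = volume[mask]                                 # same C order as argwhere
    centre_vox = (idx * weights[:, None]).sum(0) / weights.sum()
    return centre_vox * np.asarray(spacing)                # scale to millimetres

# Synthetic example: a bright ellipsoidal blob inside a dark volume.
vol = np.zeros((40, 40, 40))
z, y, x = np.ogrid[:40, :40, :40]
blob = ((z - 20) / 4) ** 2 + ((y - 18) / 3) ** 2 + ((x - 22) / 3) ** 2 <= 1
vol[blob] = 1000.0
print(fiducial_centroid(vol, vol > 500, spacing=(1.0, 1.0, 1.0)))  # ~[20, 18, 22]
```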
POCUS/BIVPCS/CuRIOUS/CPM@MICCAI | 2018
Nathan Van Woudenberg; Zhibin Liao; Amir H. Abdi; Hani Girgis; Christina Luong; Hooman Vaseli; Delaram Behnami; Haotian Zhang; Kenneth Gin; Robert Rohling; Teresa Tsang; Purang Abolmaesumi
Accurate diagnosis in cardiac ultrasound requires high quality images, containing different specific features and structures depending on which of the 14 standard cardiac views the operator is attempting to acquire. Inexperienced operators can have a great deal of difficulty recognizing these features and thus can fail to capture diagnostically relevant heart cines. This project aims to mitigate this challenge by providing operators with real-time feedback in the form of view classification and quality estimation. Our system uses a frame grabber to capture the raw video output of the ultrasound machine, which is then fed into an Android mobile device, running a customized mobile implementation of the TensorFlow inference engine. By multi-threading four TensorFlow instances together, we are able to run the system at 30 Hz with a latency of under 0.4 s.
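A schematic Python sketch of the described pipeline: grabbed frames are fanned out to four parallel inference workers so the system keeps up with a 30 Hz video rate even though a single model call is slower. The classify_view_and_quality function is a stand-in for the TensorFlow graph; everything here is an illustrative assumption, not the mobile implementation.

```python
import queue, threading, time

def classify_view_and_quality(frame):
    time.sleep(0.1)                     # pretend one inference takes ~100 ms
    return {"view": "A4C", "quality": 0.8}

frames, results = queue.Queue(maxsize=8), queue.Queue()

def worker():
    while True:
        i, frame = frames.get()
        if frame is None:               # sentinel: shut the worker down
            break
        results.put((i, classify_view_and_quality(frame)))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

start = time.time()
for i in range(30):                     # one second of 30 Hz video
    frames.put((i, f"frame-{i}"))
    time.sleep(1 / 30)
for _ in threads:
    frames.put((None, None))
for t in threads:
    t.join()
print(f"processed {results.qsize()} frames in {time.time() - start:.2f} s")
```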
Korean Journal of Orthodontics | 2018
Amir H. Abdi; Saeed Reza Motamedian; Ehsan Balaghi; Mahtab Nouri
Objective: The aim of this study was to compare the adaptation of a straight wire between brackets positioned at the mid-lingual surface and those placed gingivally, using three-dimensional simulation software.
Methods: This cross-sectional study was performed using OrthoAid, an in-house software package. The subjects were 36 adolescents with normal Class I occlusion. For each dental cast, two bracket positioning approaches, namely middle and gingival, were examined. In the middle group, the reference points were placed on the mid-lingual surface of each tooth, while in the gingival group, the reference points on the anterior teeth were positioned gingivally on the lingual surface. A fourth-degree polynomial was fitted, and the in-plane and off-plane root mean squares (RMSs) of the distances between the reference points and the fitted polynomial curve were calculated using the software. Statistical analysis was performed using the paired-samples t-test (α = 0.05).
Results: The mean in-plane RMS of the polynomial curve-to-bracket distance in the gingival group was significantly lower than that in the middle group (p < 0.001). The off-plane RMS was higher for the gingivally positioned brackets in the maxilla than in the middle group (p < 0.001). However, the off-plane RMS in the mandible was not statistically significantly different between the two groups (p = 0.274).
Conclusions: The results demonstrated that gingival placement of lingual brackets on the anterior teeth could decrease the distance between a tooth and the straight wire.
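The arch-fit measure can be illustrated with a short numpy sketch: a fourth-degree polynomial is fitted through the bracket reference points, and the in-plane root mean square of the point-to-curve distances is computed. The coordinates are made up; OrthoAid's actual computation is not shown.

```python
import numpy as np

points = np.array([                     # bracket reference points in the occlusal plane (mm)
    [-22, 4], [-15, 14], [-8, 20], [0, 22], [8, 20], [15, 14], [22, 4],
], dtype=float)

coeffs = np.polyfit(points[:, 0], points[:, 1], deg=4)   # fourth-degree fit
curve = np.poly1d(coeffs)

# Approximate each point's distance to the curve by dense sampling.
xs = np.linspace(points[:, 0].min(), points[:, 0].max(), 2000)
samples = np.stack([xs, curve(xs)], axis=1)
dists = [np.linalg.norm(samples - p, axis=1).min() for p in points]
rms = np.sqrt(np.mean(np.square(dists)))
print(f"in-plane RMS distance: {rms:.3f} mm")
```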
DLMIA/ML-CDS@MICCAI | 2018
Delaram Behnami; Christina Luong; Hooman Vaseli; Amir H. Abdi; Hany Girgis; Dale Hawley; Robert Rohling; Ken Gin; Purang Abolmaesumi; Teresa S.M. Tsang
Heart disease is the leading cause of death globally. A key predictor of heart failure, and the most commonly measured cardiac parameter, is left ventricular ejection fraction (LVEF). Despite available segmentation technologies, experienced cardiologists often rely on visual estimation of LVEF for a swift assessment. In this paper, we present a direct dual-channel LVEF estimation approach that mimics cardiologists' visual assessment for detecting patients at high risk of systolic heart failure. The proposed framework consists of various layers for extracting spatial and temporal features from echocardiography (echo) cines. A data set of 1,186 apical two-chamber (A2C) and four-chamber (A4C) echo cines was used in this study. LVEF labels were assigned based on risk of heart failure: high-risk for LVEF ≤ 40% and low-risk for 40% < LVEF ≤ 75%. We validated the proposed framework on 237 clinical exams and achieved a success rate of 83.1% for risk-based LVEF classification. Our experiments suggest that fusing the two apical views improves performance compared to single-view networks, especially A2C. The proposed solution is promising for segmentation-free detection of high-risk LVEF. Direct LVEF estimation eliminates ventricle segmentation and can hence be a useful tool for formal echo and point-of-care cardiac ultrasound.
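A minimal PyTorch sketch of dual-channel fusion: one encoder per apical view (A2C and A4C), with the pooled features concatenated before the high-risk/low-risk classifier. Encoder sizes and the 3D-convolution design are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def encoder(feat=32):
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),    # (time, H, W) jointly
        nn.Conv3d(8, feat, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    )

class DualViewLVEF(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.a2c, self.a4c = encoder(feat), encoder(feat)      # one channel per view
        self.head = nn.Linear(2 * feat, 2)                     # high-risk vs low-risk

    def forward(self, a2c_cine, a4c_cine):                     # (batch, 1, frames, H, W)
        return self.head(torch.cat([self.a2c(a2c_cine),
                                    self.a4c(a4c_cine)], dim=1))

a2c = torch.randn(1, 1, 16, 112, 112)
a4c = torch.randn(1, 1, 16, 112, 112)
print(DualViewLVEF()(a2c, a4c).softmax(dim=1))                 # class probabilities
```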
arXiv: Learning | 2018
Amir H. Abdi; Pramit Saha; Praneeth Srungarapu; Sidney S. Fels