Publication


Featured research published by Fernando Vega-Higuera.


IEEE Transactions on Medical Imaging | 2010

Patient-Specific Modeling and Quantification of the Aortic and Mitral Valves From 4-D Cardiac CT and TEE

Razvan Ioan Ionasec; Ingmar Voigt; Bogdan Georgescu; Yang Wang; Helene Houle; Fernando Vega-Higuera; Nassir Navab; Dorin Comaniciu

As decisions in cardiology increasingly rely on noninvasive methods, fast and precise image processing tools have become a crucial component of the analysis workflow. We propose an automatic system for patient-specific modeling and quantification of the left heart valves, which operates on cardiac computed tomography (CT) and transesophageal echocardiogram (TEE) data. Robust algorithms, based on recent advances in discriminative learning, are used to estimate patient-specific parameters from sequences of volumes covering an entire cardiac cycle. A novel physiological model of the aortic and mitral valves is introduced, which captures complex morphologic, dynamic, and pathologic variations. This holistic representation is hierarchically defined on three abstraction levels: a global location and rigid motion model, a nonrigid landmark motion model, and a comprehensive aortic-mitral model. First, the rough location and cardiac motion are computed by applying marginal space learning. The rapid and complex motion of the valves, represented by anatomical landmarks, is estimated using a novel trajectory spectrum learning algorithm. The obtained landmark model guides the fitting of the full physiological valve model, which is locally refined through learned boundary detectors. Measurements efficiently computed from the aortic-mitral representation support an effective morphological and functional clinical evaluation. Extensive experiments on a heterogeneous data set, comprising 1516 TEE volumes from 65 4-D TEE sequences and 690 cardiac CT volumes from 69 4-D CT sequences, demonstrated a speed of 4.8 seconds per volume and an average accuracy of 1.45 mm with respect to expert-defined ground truth. Additional clinical validation shows the quantification precision to be within the range of inter-user variability. To the best of our knowledge, this is the first time a patient-specific model of the aortic and mitral valves has been automatically estimated from volumetric sequences.
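
The marginal space learning (MSL) step mentioned above searches the pose space one subspace at a time (position, then orientation, then scale), with a trained classifier pruning candidates at each stage. The Python sketch below illustrates only that search pattern on a toy volume; position_score is a hypothetical stand-in for a trained detector, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 32, 32))  # stand-in for a CT/TEE volume

def position_score(vol, p):
    # Hypothetical stand-in for a trained position classifier.
    return -np.linalg.norm(np.asarray(p, dtype=float) - 16.0)

# MSL stage 1: scan a coarse grid of candidate positions, keep the best.
grid = np.array(np.meshgrid(*[np.arange(0, 32, 4)] * 3)).reshape(3, -1).T
scores = np.array([position_score(volume, p) for p in grid])
candidates = grid[np.argsort(-scores)[:50]]

# Stages 2 and 3 would augment the surviving candidates with orientation
# and scale hypotheses and re-score them with dedicated classifiers; the
# final pose then initializes the landmark and boundary estimation.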


Medical Image Computing and Computer-Assisted Intervention | 2011

Detection, grading and classification of coronary stenoses in computed tomography angiography

B. Michael Kelm; Sushil Mittal; Yefeng Zheng; Alexey Tsymbal; Dominik Bernhardt; Fernando Vega-Higuera; S. Kevin Zhou; Peter Meer; Dorin Comaniciu

Recently conducted clinical studies prove the utility of Coronary Computed Tomography Angiography (CCTA) as a viable alternative to invasive angiography for the detection of Coronary Artery Disease (CAD). This has led to the development of several algorithms for automatic detection and grading of coronary stenoses. However, most of these methods focus on detecting calcified plaques only. The few methods that can also detect and grade non-calcified plaques require substantial user involvement. In this paper, we propose a fast and fully automatic system that is capable of detecting, grading, and classifying coronary stenoses in CCTA caused by all types of plaques. We propose a four-step approach, including a learning-based centerline verification step and a lumen cross-section estimation step using random regression forests. We show state-of-the-art performance of our method in experiments conducted on a set of 229 CCTA volumes. With an average processing time of 1.8 seconds per case after centerline extraction, our method is significantly faster than competing approaches.
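
As a rough illustration of the lumen cross-section estimation step, the sketch below fits a random regression forest with scikit-learn on synthetic data. The feature layout and targets are assumptions made for illustration, not the paper's actual features.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical training rows: intensity statistics sampled around
# centerline points; the target is the reference lumen area (mm^2).
X = rng.normal(size=(500, 24))
y = 3.0 + X[:, :4].sum(axis=1) + 0.1 * rng.normal(size=500)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
area = forest.predict(X[:1])[0]  # estimated cross-section at one point

# A degree of stenosis can then be graded from the drop of the estimated
# area relative to a healthy reference segment along the same vessel.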


Proceedings of SPIE | 2011

Machine learning based vesselness measurement for coronary artery segmentation in cardiac CT volumes

Yefeng Zheng; Maciej Loziczonek; Bogdan Georgescu; S. Kevin Zhou; Fernando Vega-Higuera; Dorin Comaniciu

Automatic coronary centerline extraction and lumen segmentation facilitate the diagnosis of coronary artery disease (CAD), which is a leading cause of death in developed countries. Various coronary centerline extraction methods have been proposed, and most are based on shortest-path computation given one or two end points on the artery. The major variation among the shortest-path approaches lies in the vesselness measurement used for the path cost. An empirically designed measurement (e.g., the widely used Hessian vesselness) is by no means optimal in its use of image context information. In this paper, a machine-learning-based vesselness measurement is proposed by exploiting the rich domain-specific knowledge embedded in an expert-annotated dataset. For each voxel, we extract a set of geometric and image features. A probabilistic boosting tree (PBT) is then used to train a classifier, which assigns a high score to voxels inside the artery and a low score to those outside. The detection score can be treated as a vesselness measurement in the computation of the shortest path. Since the detection score measures the probability that a voxel lies inside the vessel lumen, it can also be used for coronary lumen segmentation. To speed up the computation, we perform classification only for voxels around the heart surface, which is achieved by automatically segmenting the whole heart from the 3D volume in a preprocessing step. An efficient voxel-wise classification strategy is used to further improve the speed. Experiments demonstrate that the proposed learning-based vesselness outperforms the conventional Hessian vesselness in both speed and accuracy. On average, it takes only approximately 2.3 seconds to process a large volume with a typical size of 512 x 512 x 200 voxels.
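
To make the role of the vesselness concrete: once a classifier assigns each voxel a probability p of lying inside the lumen, the centerline becomes a minimum-cost path under a cost such as -log(p). Below is a minimal 2-D sketch using scikit-image's route_through_array, with a synthetic probability map standing in for the classifier output (the cost definition is an illustrative choice, not the paper's exact formulation).

import numpy as np
from skimage.graph import route_through_array

# Synthetic stand-in for the per-voxel lumen probability from a trained
# probabilistic boosting tree: a bright horizontal tube.
p = np.full((64, 64), 0.05)
p[30:34, :] = 0.95

# Step cost = -log(p): high-probability voxels are cheap to traverse,
# so the minimum-cost path follows the vessel.
cost = -np.log(np.clip(p, 1e-6, 1.0))
path, total_cost = route_through_array(cost, start=(32, 0), end=(32, 63),
                                       fully_connected=True)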


Proceedings of SPIE | 2009

Multi-Scale Feature Extraction for Learning-Based Classification of Coronary Artery Stenosis

Matthias Tessmann; Fernando Vega-Higuera; Dominik Fritz; Michael Scheuering; Günther Greiner

Assessment of computed tomography coronary angiograms for diagnostic purposes is a mostly manual, time-consuming task demanding a high degree of clinical experience. To support diagnosis, a method for reliable automatic detection of stenotic lesions in computed tomography angiograms is presented. Lesions are detected by boosting-based classification: a strong classifier is trained on annotated data using the AdaBoost algorithm, and the resulting strong classification function is then used to detect different types of coronary lesions in previously unseen data. As pattern recognition algorithms require a description of the objects to be classified, a novel approach for feature extraction in computed tomography angiograms is introduced. By generating cylinder segments that approximate the vessel shape at multiple scales, feature values can be extracted that adequately describe the properties of stenotic lesions. As a result of the multi-scale approach, the algorithm is capable of dealing with the variability of stenotic lesion configurations. Evaluation of the algorithm was performed on a large database containing unseen segmented centerlines from cardiac computed tomography images. Results showed that the method was able to detect stenotic lesions with high sensitivity and specificity. Moreover, lesion-based evaluation revealed that the majority of stenoses can be reliably identified in terms of position, type, and extent.
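
A minimal sketch of the boosting-based classification step, using scikit-learn's AdaBoost (whose default base learner is a depth-1 decision stump) on synthetic data. The twelve feature columns stand in for the multi-scale cylinder features and are an assumption, not the paper's feature set.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
# Hypothetical rows: cylinder-fit statistics at several radii for one
# centerline segment each; labels mark annotated stenotic lesions.
X = rng.normal(size=(400, 12))
y = (X[:, 0] + X[:, 5] > 0.8).astype(int)

# AdaBoost combines many weak stumps into a strong classifier.
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X, y)
scores = clf.decision_function(X)  # high scores flag candidate lesions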


Medical Image Computing and Computer-Assisted Intervention | 2011

Efficient detection of native and bypass coronary ostia in cardiac CT volumes: anatomical vs. pathological structures

Yefeng Zheng; Huseyin Tek; Gareth Funka-Lea; S. Kevin Zhou; Fernando Vega-Higuera; Dorin Comaniciu

Cardiac computed tomography (CT) is the primary noninvasive imaging modality for diagnosing coronary artery disease. Though various methods have been proposed for coronary artery segmentation, most rely on at least one user click to provide a seed point for initialization. Automatic detection of the coronary ostia (where the coronaries originate from the aorta), including both the native coronary ostia and the graft ostia of bypass coronaries, can make the whole coronary exam workflow fully automatic, thereby increasing a physician's throughput. Anatomical structures (native coronary ostia) and pathological structures (graft ostia) often require significantly different detection methods. The native coronary ostia are well constrained by the surrounding structures and are therefore detected as a global object. Detecting the graft ostia is far more difficult due to the large variation in graft position. A new search strategy is proposed to efficiently guide the focus of analysis and, at the same time, reduce false positive detections. Since the bypass coronaries are grafted onto the ascending aorta surface, the ascending aorta is first segmented to constrain the search. The quantitative prior distribution of the graft ostia on the aorta surface is learned from a training set to further reduce the search space significantly. Efficient local image features are extracted around each candidate point on the aorta surface to train a detector. The proposed method is computationally efficient, taking about 0.40 seconds to detect both native and graft ostia in a volume of around 512 x 512 x 200 voxels.
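
The learned prior over the aorta surface can be pictured as a 2-D histogram over a surface parameterization, used to gate which candidate points reach the expensive detector. The sketch below assumes a (circumferential angle, normalized height) parameterization and an ad-hoc threshold; both are illustrative choices, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training data: graft-ostium locations on the ascending
# aorta, parameterized by angle (rad) and normalized height in [0, 1].
train_angle = rng.normal(1.2, 0.4, 200)
train_height = rng.normal(0.6, 0.1, 200)

prior, a_edges, h_edges = np.histogram2d(
    train_angle, train_height, bins=(36, 20),
    range=((-np.pi, np.pi), (0.0, 1.0)), density=True)

def in_search_region(angle, height, threshold=0.05):
    # Only surface points with sufficient prior mass are passed on to
    # the learned ostium detector, shrinking the search space.
    i = int(np.clip(np.searchsorted(a_edges, angle) - 1, 0, 35))
    j = int(np.clip(np.searchsorted(h_edges, height) - 1, 0, 19))
    return prior[i, j] > threshold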


Journal of Cardiovascular Computed Tomography | 2014

Automated quantification of epicardial adipose tissue (EAT) in coronary CT angiography: comparison with manual assessment and correlation with coronary artery disease

Casper Mihl; D. Loeffen; Mathijs O. Versteylen; Richard A.P. Takx; Patricia J. Nelemans; Estelle C. Nijssen; Fernando Vega-Higuera; Joachim E. Wildberger; Marco Das

BACKGROUND Epicardial adipose tissue (EAT) is emerging as a risk factor for coronary artery disease (CAD). OBJECTIVE The aim of this study was to determine the applicability and efficiency of automated EAT quantification. METHODS EAT volume was assessed both manually and automatically in 157 patients undergoing coronary CT angiography. Manual assessment consisted of a short-axis-based manual measurement, whereas automated assessment on both contrast-enhanced and non-contrast-enhanced data sets was achieved through novel prototype software. The duration of both quantification methods was recorded, and EAT volumes were compared with a paired-samples t test. Correlation of volumes was determined with the intraclass correlation coefficient; agreement was tested with Bland-Altman analysis. The association between EAT and CAD was estimated with logistic regression. RESULTS Automated quantification was significantly less time-consuming than manual quantification (17 ± 2 seconds vs 280 ± 78 seconds; P < .0001). Although manual EAT volume differed significantly from automated EAT volume (75 ± 33 cm³ vs 95 ± 45 cm³; P < .001), a good correlation between the two assessments was found (r = 0.76; P < .001). For all methods, EAT volume was positively associated with the presence of CAD. A stronger predictive value for the severity of CAD was achieved through automated quantification on both contrast-enhanced and non-contrast-enhanced data sets. CONCLUSION Automated EAT quantification is a quick method to estimate EAT and may serve as a predictor of CAD presence and severity.
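
The statistics in this abstract map onto standard SciPy calls. The sketch below reproduces the comparison pipeline on synthetic volumes; the numbers are stand-ins rather than study data, and the Pearson correlation here is a simplification of the intraclass correlation coefficient used in the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
manual = rng.normal(75, 33, 157)          # stand-in manual volumes (cm^3)
auto = manual + rng.normal(20, 15, 157)   # stand-in automated volumes

t, p_value = stats.ttest_rel(manual, auto)  # paired-samples t test
r, _ = stats.pearsonr(manual, auto)         # correlation of the methods

# Bland-Altman agreement: bias and 95% limits of agreement.
diff = auto - manual
bias = diff.mean()
limits = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))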


Proceedings of SPIE | 2009

Left Ventricle Endocardium Segmentation for Cardiac CT Volumes Using an Optimal Smooth Surface

Yefeng Zheng; Bogdan Georgescu; Fernando Vega-Higuera; Dorin Comaniciu

We recently proposed a robust heart chamber segmentation approach based on marginal space learning. In this paper, we focus on improving the left ventricle (LV) endocardium segmentation accuracy by searching for an optimal smooth mesh that tightly encloses the whole blood pool. The refinement procedure is formulated as an optimization problem: maximizing surface smoothness under a tightness constraint. The formulation is a convex quadratic programming problem and therefore has a unique global optimum that can be found efficiently. Our approach has been validated on the largest cardiac CT dataset (457 volumes from 186 patients) reported to date. Compared to our previous work, it reduces the mean point-to-mesh error from 1.13 mm to 0.84 mm (a 22% improvement). Additionally, the system has been extensively tested on a dataset with more than 2000 volumes without any major failure.
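
The optimization described here is a convex quadratic program. Below is a 1-D analogue in CVXPY, assuming vertex offsets t along the mesh normals and per-vertex clearances d to the blood pool; both variables and the roughness penalty are hypothetical stand-ins for the paper's actual formulation.

import numpy as np
import cvxpy as cp

n = 50
d = 2.0 * np.abs(np.sin(np.linspace(0.0, np.pi, n)))  # required clearances

t = cp.Variable(n)
# Maximize smoothness = minimize a first-difference roughness penalty,
# subject to the tightness constraint that the surface stays outside
# the blood pool. A convex QP, so the optimum is unique and found fast.
problem = cp.Problem(cp.Minimize(cp.sum_squares(t[1:] - t[:-1])), [t >= d])
problem.solve()
offsets = t.value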


Computer Vision and Pattern Recognition | 2012

Segmentation and removal of pulmonary arteries, veins and left atrial appendage for visualizing coronary and bypass arteries

Hua Zhong; Yefeng Zheng; Gareth Funka-Lea; Fernando Vega-Higuera

In this paper, we present an automatic heart segmentation system to aid the diagnosis of coronary artery disease (CAD). The goal is to visualize the heart from a cardiac CT image with the pulmonary veins, pulmonary arteries, and left atrial appendage removed, so that doctors can clearly see the major coronary artery trees, the aorta, and bypass arteries where present. The system combines a model-based detection framework with data-driven post-refinements to create a voxel-based heart mask for visualization. The marginal space learning [6] algorithm is used to detect mesh or landmark models of the different heart anatomies in the CT image. Guided by these detected models, local data-driven refinements are applied to produce precise boundaries of the heart mask. The system is fully automatic and can process a 3D cardiac CT volume within 5 seconds.
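
The end product is a voxel-based mask: structures detected as meshes are rasterized and subtracted from the heart mask. A toy illustration with synthetic boolean masks (the shapes are placeholders, not detected anatomy):

import numpy as np

heart = np.zeros((64, 64, 64), dtype=bool)
heart[10:54, 10:54, 10:54] = True
pulmonary = np.zeros_like(heart)        # pulmonary arteries and veins
pulmonary[40:54, 40:54, 10:54] = True
appendage = np.zeros_like(heart)        # left atrial appendage
appendage[10:20, 10:20, 30:40] = True

# Final visualization mask: the heart with occluding structures removed,
# so the coronary trees, aorta, and any bypass grafts stay visible.
vis_mask = heart & ~(pulmonary | appendage)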


International Conference on Machine Learning | 2010

Fast and automatic heart isolation in 3D CT volumes: optimal shape initialization

Yefeng Zheng; Fernando Vega-Higuera; Shaohua Kevin Zhou; Dorin Comaniciu

Heart isolation (separating the heart from nearby tissues, e.g., lung, liver, and rib cage) is a prerequisite for clearly visualizing the coronary arteries in 3D. Such 3D visualization provides an intuitive view for physicians to diagnose suspicious coronary segments. Heart isolation is also necessary in radiotherapy planning to mask out the heart for the treatment of lung or liver tumors. In this paper, we propose an efficient and robust method for heart isolation in computed tomography (CT) volumes. Marginal space learning (MSL) is used to efficiently estimate the position, orientation, and scale of the heart. An optimal mean shape (one that optimally represents the whole shape population) is then aligned with the detected pose, followed by boundary refinement using a learning-based boundary detector. Post-processing is further applied to exclude the rib cage from the heart mask. A large-scale experiment on 589 volumes (including both contrasted and non-contrasted scans) from 288 patients demonstrates the robustness of the approach, which achieves a mean point-to-mesh error of 1.91 mm. Running at a speed of 1.5 s/volume, it is at least 10 times faster than previous methods.
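
The optimal-mean-shape idea can be approximated by generalized Procrustes averaging: repeatedly align all training shapes to the current mean and re-average. A simplified sketch with SciPy; the paper's construction of the optimal mean shape differs in its details.

import numpy as np
from scipy.spatial import procrustes

def mean_shape(shapes, n_iter=10):
    # shapes: list of (N, 3) vertex arrays in point correspondence.
    mean = shapes[0]
    for _ in range(n_iter):
        # procrustes returns the second argument optimally aligned (and
        # standardized) to the first; average the aligned shapes.
        aligned = [procrustes(mean, s)[1] for s in shapes]
        mean = np.mean(aligned, axis=0)
    return mean

# At detection time, the mean shape is placed at the pose (translation,
# rotation, scale) estimated by marginal space learning, then refined
# with the learned boundary detector.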


International Conference on Machine Learning | 2010

Fast automatic detection of calcified coronary lesions in 3D cardiac CT images

Sushil Mittal; Yefeng Zheng; Bogdan Georgescu; Fernando Vega-Higuera; Shaohua Kevin Zhou; Peter Meer; Dorin Comaniciu

Even with recent advances in multidetector computed tomography (MDCT) imaging techniques, the detection of calcified coronary lesions remains a highly tedious task, further complicated by noise, blooming, and motion artifacts. We propose a novel learning-based, fully automatic algorithm for the detection of calcified lesions in contrast-enhanced CT data. We compare and evaluate the performance of two supervised learning methods. Both methods use rotation-invariant features that are extracted along the centerline of the coronary artery. Our approach is quite robust to the centerline estimates and works well in practice. We achieve average detection times of 0.67 and 0.82 seconds per volume with the two methods.
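
Rotation invariance along the centerline can be obtained by sampling intensities on rings orthogonal to the vessel axis and keeping only order statistics, which do not depend on where the ring sampling starts. The sketch below follows that idea; intensity_at is a hypothetical interpolating sampler, and the exact features differ from the paper's.

import numpy as np

def ring_features(intensity_at, point, tangent, radii=(1.0, 2.0, 3.0), n=16):
    # Build an orthonormal basis (u, v) spanning the plane orthogonal
    # to the centerline tangent.
    t = tangent / np.linalg.norm(tangent)
    u = np.cross(t, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(t, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(t, u)

    feats = []
    for r in radii:
        angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
        ring = [intensity_at(point + r * (np.cos(a) * u + np.sin(a) * v))
                for a in angles]
        # Mean, spread, and extremes of the ring are invariant to
        # rotations of the sampling pattern about the vessel axis.
        feats += [np.mean(ring), np.std(ring), np.min(ring), np.max(ring)]
    return np.asarray(feats)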
