Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adrian Barbu is active.

Publication


Featured research published by Adrian Barbu.


IEEE Transactions on Medical Imaging | 2008

Four-Chamber Heart Modeling and Automatic Segmentation for 3-D Cardiac CT Volumes Using Marginal Space Learning and Steerable Features

Yefeng Zheng; Adrian Barbu; Bogdan Georgescu; Michael Scheuering; Dorin Comaniciu

We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
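
The marginal space learning (MSL) search described above can be illustrated with a small, hypothetical sketch: instead of scoring every hypothesis in the full 9-D similarity space (three position, three orientation, three scale parameters), detectors trained on marginal spaces of increasing dimension score candidates, and only the top-scoring hypotheses survive to the next stage. The detector interfaces, parameter grids, and candidate counts below are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of the marginal space learning (MSL) search strategy:
# candidates are evaluated in marginal spaces of increasing dimension and only
# the best hypotheses are propagated, avoiding an exhaustive 9-D scan.
def msl_search(volume, position_detector, orientation_detector, scale_detector,
               position_grid, orientation_grid, scale_grid,
               n_keep=(100, 50, 25)):
    # Stage 1: score position-only hypotheses on a coarse voxel grid.
    candidates = [(p,) for p in position_grid]
    candidates = top_k(candidates, lambda c: position_detector(volume, *c), n_keep[0])

    # Stage 2: augment the surviving positions with candidate orientations.
    candidates = [(p, r) for (p,) in candidates for r in orientation_grid]
    candidates = top_k(candidates, lambda c: orientation_detector(volume, *c), n_keep[1])

    # Stage 3: augment with candidate scales to obtain full 9-D hypotheses.
    candidates = [(p, r, s) for (p, r) in candidates for s in scale_grid]
    candidates = top_k(candidates, lambda c: scale_detector(volume, *c), n_keep[2])

    return candidates  # top similarity-transform hypotheses for the chamber


def top_k(items, score, k):
    return sorted(items, key=score, reverse=True)[:k]
```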


International Conference on Computer Vision | 2007

Fast Automatic Heart Chamber Segmentation from 3D CT Data Using Marginal Space Learning and Steerable Features

Yefeng Zheng; Adrian Barbu; Bogdan Georgescu; Michael Scheuering; Dorin Comaniciu

Multi-chamber heart segmentation is a prerequisite for global quantification of the cardiac function. The complexity of cardiac anatomy, poor contrast, noise, and motion artifacts make this segmentation problem a challenging task. In this paper, we present an efficient, robust, and fully automatic segmentation method for 3D cardiac computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models and exploits a large database of annotated CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-dimensional similarity search problem for localizing the heart chambers. MSL reduces the number of testing hypotheses by about six orders of magnitude. We also propose to use steerable image features, which incorporate the orientation and scale information into the distribution of sampling points, thus avoiding time-consuming volume rotation operations. After determining the similarity transformation of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments on multi-chamber heart segmentation demonstrate the efficiency and robustness of the proposed approach, which compares favorably to the state of the art. This is the first study reporting stable results on a large cardiac CT dataset with 323 volumes. In addition, we achieve a speed of less than eight seconds for the automatic segmentation of all four chambers.
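
The steerable features mentioned above can be sketched under a simple assumption: features are read out at a fixed pattern of sample points that is transformed into the coordinate frame of each pose hypothesis, so no rotation or resampling of the volume itself is needed. The function names and the nearest-voxel intensity read-out are illustrative only.

```python
import numpy as np

# Hypothetical illustration of "steerable" sampling: a fixed pattern of sample
# points is mapped by the hypothesis pose (translation t, rotation R, scale s)
# so features are measured in the hypothesis's own frame, instead of rotating
# the whole volume for every orientation hypothesis.

def steerable_samples(pattern, t, R, s):
    """pattern: (N, 3) canonical sample points; t: (3,); R: (3, 3); s: (3,)."""
    return (R @ (pattern * s).T).T + t  # world coordinates of the sample points

def sample_features(volume, points):
    # Nearest-voxel intensity read-out; a real system would also measure
    # gradients and other local quantities at each point.
    idx = np.clip(np.round(points).astype(int), 0, np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]]
```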


Proceedings of SPIE | 2009

Hierarchical parsing and semantic navigation of full body CT data

Sascha Seifert; Adrian Barbu; S. Kevin Zhou; David Liu; Johannes Feulner; Martin Huber; Michael Suehling; Alexander Cavallaro; Dorin Comaniciu

Whole body CT scanning is a common diagnostic technique for discovering early signs of metastasis or for differential diagnosis. Automatic parsing and segmentation of multiple organs, together with semantic navigation inside the body, can help the clinician obtain an accurate diagnosis efficiently. However, dealing with the large amount of data in a full body scan is challenging, and techniques are needed for the fast detection and segmentation of organs (e.g., heart, liver, kidneys, bladder, prostate, and spleen) and body landmarks (e.g., bronchial bifurcation, coccyx tip, sternum, lung tips). The problem becomes even more challenging if partial body scans are used, where not all organs are present. We propose a new approach to this problem, in which a network of 1D and 3D landmarks is trained to quickly parse the 3D CT data and estimate which organs and landmarks are present, as well as their most probable locations and boundaries. Using this approach, the segmentation of seven organs and the detection of 19 body landmarks can be obtained in about 20 seconds with state-of-the-art accuracy; the method has been validated on 80 full or partial body CT scans.
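
A hedged sketch of hierarchical parsing in this spirit: inexpensive landmark detectors run first, the detected landmarks indicate which organs are likely present in a (possibly partial) scan, and they constrain the search region handed to each organ detector. All interfaces, names, and thresholds below are placeholders for illustration, not the authors' system.

```python
import numpy as np

# Hypothetical sketch of hierarchical parsing of a (possibly partial) body scan:
# landmark detectors run first, organs whose anchoring landmarks are not found
# are skipped, and detected landmarks constrain the organ search regions.

def search_region_around(points, margin=50.0):
    points = np.asarray(points)
    return points.min(axis=0) - margin, points.max(axis=0) + margin

def parse_body_scan(volume, landmark_detectors, organ_detectors, organ_landmarks,
                    min_confidence=0.5):
    landmarks = {}
    for name, detect in landmark_detectors.items():
        position, confidence = detect(volume)
        if confidence >= min_confidence:
            landmarks[name] = position

    organs = {}
    for organ, detect in organ_detectors.items():
        anchors = [landmarks[l] for l in organ_landmarks[organ] if l in landmarks]
        if not anchors:
            continue  # anchoring landmarks absent: organ likely outside the scan
        organs[organ] = detect(volume, search_region_around(anchors))
    return landmarks, organs
```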


Computer Vision and Pattern Recognition | 2007

Hierarchical Learning of Curves: Application to Guidewire Localization in Fluoroscopy

Adrian Barbu; Vassilis Athitsos; Bogdan Georgescu; Stefan Boehm; Peter Durlak; Dorin Comaniciu

In this paper we present a method for learning a curve model for detection and segmentation by closely integrating a hierarchical curve representation using generative and discriminative models with a hierarchical inference algorithm. We apply this method to the problem of automatic localization of the guidewire in fluoroscopic sequences. In fluoroscopic sequences, the guidewire appears as a hardly visible, non-rigid one-dimensional curve. Our paper has three main contributions. Firstly, we present a novel method to learn the complex shape and appearance of a free-form curve using a hierarchical model of curves of increasing degrees of complexity and a database of manual annotations. Secondly, we present a novel computational paradigm in the context of Marginal Space Learning, in which the algorithm is closely integrated with the hierarchical representation to obtain fast parameter inference. Thirdly, to our knowledge this is the first full system which robustly localizes the whole guidewire and has extensive validation on hundreds of frames. We present very good quantitative and qualitative results on real fluoroscopic video sequences, obtained in just one second per frame.
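
Very loosely, the hierarchical curve inference can be pictured as a coarse-to-fine refinement loop: control points at one resolution are locally adjusted to maximize a learned appearance score and then upsampled to seed the next, finer level. The scoring function, offset set, and number of levels below are hypothetical; this shows the general pattern, not the paper's algorithm.

```python
# Hypothetical coarse-to-fine curve refinement: at each level, every control
# point moves to its best nearby offset under a learned score, then midpoints
# are inserted so the next level can model finer shape detail.

def refine_curve(image, control_points, score, offsets, levels=3):
    """control_points: list of np.ndarray 2D points; offsets: list of 2D shifts."""
    for _ in range(levels):
        control_points = [
            max((p + d for d in offsets), key=lambda q: score(image, q))
            for p in control_points
        ]
        control_points = upsample(control_points)
    return control_points

def upsample(points):
    out = []
    for a, b in zip(points, points[1:]):
        out += [a, (a + b) / 2]
    return out + [points[-1]]
```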


IEEE Transactions on Image Processing | 2009

Training an Active Random Field for Real-Time Image Denoising

Adrian Barbu

Many computer vision problems can be formulated in a Bayesian framework based on Markov random fields (MRF) or conditional random fields (CRF). Generally, the MRF/CRF model is learned independently of the inference algorithm that is used to obtain the final result. In this paper, we observe considerable gains in speed and accuracy by training the MRF/CRF model together with a fast and suboptimal inference algorithm. An active random field (ARF) is defined as a combination of an MRF/CRF-based model and a fast inference algorithm for that model. This combination is trained through the optimization of a loss function on a training set consisting of pairs of input images and desired outputs. We apply the ARF concept to image denoising, using the Fields of Experts MRF together with a 1-4 iteration gradient descent algorithm for inference. Experimental validation on unseen data shows that the ARF approach obtains improved benchmark performance as well as a 1000-3000 times speedup compared to the Fields of Experts MRF. Using the ARF approach, image denoising can be performed in real time, at 8 fps on a single CPU for a 256×256 image sequence, with close to state-of-the-art accuracy.
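
The active random field idea, training the model parameters through the same small-budget inference that will be used at test time, can be sketched as follows. The energy gradient, loss, and finite-difference training loop are stand-ins chosen for brevity; they are not the Fields of Experts machinery or the paper's optimizer.

```python
import numpy as np

# Hypothetical sketch of the active random field (ARF) idea: the MRF parameters
# theta are trained *through* a fixed, few-step gradient-descent inference, by
# minimizing the loss of the inference output against the desired clean image.

def inference(noisy, theta, energy_grad, steps=4, step_size=0.1):
    # Suboptimal inference: only a handful of energy gradient-descent steps,
    # exactly the budget that will be used at test time.
    x = noisy.copy()
    for _ in range(steps):
        x -= step_size * energy_grad(x, noisy, theta)
    return x

def train_arf(pairs, theta, energy_grad, loss, epochs=10, lr=1e-3, eps=1e-4):
    # Coordinate-wise finite-difference training of theta w.r.t. the loss of the
    # inference output (a stand-in for the paper's actual optimization).
    for _ in range(epochs):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d.flat[i] = eps
            grad.flat[i] = sum(
                loss(inference(noisy, theta + d, energy_grad), clean)
                - loss(inference(noisy, theta - d, energy_grad), clean)
                for noisy, clean in pairs
            ) / (2 * eps)
        theta -= lr * grad
    return theta
```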


IEEE Transactions on Medical Imaging | 2012

Automatic Detection and Segmentation of Lymph Nodes From CT Data

Adrian Barbu; Michael Suehling; Xun Xu; David Liu; Shaohua Kevin Zhou; Dorin Comaniciu

Lymph nodes are assessed routinely in clinical practice and their size is followed throughout radiation or chemotherapy to monitor the effectiveness of cancer treatment. This paper presents a robust learning-based method for automatic detection and segmentation of solid lymph nodes from CT data, with the following contributions. First, it presents a learning-based approach to solid lymph node detection that relies on marginal space learning to achieve a great speedup with virtually no loss in accuracy. Second, it presents a computationally efficient segmentation method for solid lymph nodes (LN). Third, it introduces two new sets of features that are effective for LN detection, one that self-aligns to high gradients and another set obtained from the segmentation result. The method is evaluated for axillary LN detection on 131 volumes containing 371 LN, yielding an 83.0% detection rate with 1.0 false positive per volume. It is further evaluated for pelvic and abdominal LN detection on 54 volumes containing 569 LN, yielding an 80.0% detection rate with 3.2 false positives per volume. The running time is 5-20 s per volume for the axillary areas and 15-40 s for the pelvic and abdominal areas. An added benefit of the method is the capability to detect and segment conglomerated lymph nodes.
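
One way to picture a feature set that self-aligns to high gradients is sketched below: each feature records how far one must travel from the candidate center along a fixed direction before the gradient magnitude exceeds a threshold, i.e., where a boundary is likely. The threshold, ray directions, and nearest-voxel lookup are illustrative assumptions, not the paper's exact features.

```python
import numpy as np

# Hypothetical "self-aligning" features: rays are cast from a candidate
# lymph-node center in fixed directions, and each feature is the distance at
# which the gradient magnitude first exceeds a threshold.

def self_aligning_features(gradient_magnitude, center, directions,
                           max_radius=30, threshold=100.0):
    features = []
    for d in directions:
        r_hit = max_radius  # default if no strong gradient is found on this ray
        for r in range(1, max_radius):
            p = np.round(center + r * d).astype(int)
            if np.any(p < 0) or np.any(p >= np.array(gradient_magnitude.shape)):
                break
            if gradient_magnitude[tuple(p)] > threshold:
                r_hit = r
                break
        features.append(r_hit)
    return np.array(features)
```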


Annals of Statistics | 2010

SPADES and mixture models

Florentina Bunea; Alexandre B. Tsybakov; Marten H. Wegkamp; Adrian Barbu

This paper studies sparse density estimation via l1 penalization (SPADES). We focus on estimation in high-dimensional mixture models and nonparametric adaptive density estimation. We show, respectively, that SPADES can recover, with high probability, the unknown components of a mixture of probability densities and that it yields minimax adaptive density estimates. These results are based on a general sparsity oracle inequality that the SPADES estimates satisfy.

MSC2000 subject classification: Primary 62G08; Secondary 62C20, 62G05, 62G20.
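
As a rough, hedged sketch of the l1-penalized criterion behind SPADES (the dictionary, weights, and normalization are written as assumptions rather than the paper's exact statement): for candidate densities f_1, ..., f_M and observations X_1, ..., X_n, the estimate is a sparse linear combination obtained by weighted l1-penalized empirical risk minimization.

```latex
% Sketch of the SPADES criterion (the weights \omega_j are tuning constants).
\[
  \hat{\lambda} \in \arg\min_{\lambda \in \mathbb{R}^M}
    \;\|f_\lambda\|_2^2
    \;-\; \frac{2}{n}\sum_{i=1}^{n} f_\lambda(X_i)
    \;+\; 2\sum_{j=1}^{M} \omega_j\,|\lambda_j|,
  \qquad
  f_\lambda = \sum_{j=1}^{M} \lambda_j f_j .
\]
```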


Medical Image Computing and Computer-Assisted Intervention | 2010

Automatic detection and segmentation of axillary lymph nodes

Adrian Barbu; Michael Suehling; Xun Xu; David Liu; S. Kevin Zhou; Dorin Comaniciu

Lymph node detection and measurement is a difficult and important part of cancer treatment. In this paper we present a robust and effective learning-based method for the automatic detection of solid lymph nodes from Computed Tomography data. The contributions of the paper are the following. First, it presents a learning-based approach to lymph node detection based on Marginal Space Learning. Second, it presents an efficient MRF-based segmentation method for solid lymph nodes. Third, it presents two new sets of features, one set self-aligning to the local gradients and another set based on the segmentation result. An extensive evaluation on 101 volumes containing 362 lymph nodes shows that this method obtains an 82.3% detection rate at 1 false positive per volume, with an average running time of 5-20 seconds per volume.
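
A toy sketch of MRF-based binary segmentation in the spirit described above: each voxel's label trades a data term (how node-like the voxel appears) against a smoothness penalty with its neighbours, optimized here by a few sweeps of iterated conditional modes (ICM). The costs, the 6-neighbourhood, and the use of ICM are simplifying assumptions, not the paper's solver.

```python
import numpy as np

# Toy MRF segmentation: per-voxel data cost plus a Potts smoothness term,
# minimized approximately by iterated conditional modes (ICM).

def mrf_segment(data_cost, beta=1.0, sweeps=5):
    """data_cost: array of shape (2, Z, Y, X) with costs for labels 0 and 1."""
    labels = np.argmin(data_cost, axis=0)  # initialize from the data term alone
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    shape = labels.shape
    for _ in range(sweeps):
        for z in range(shape[0]):
            for y in range(shape[1]):
                for x in range(shape[2]):
                    costs = []
                    for lab in (0, 1):
                        smooth = sum(
                            beta
                            for dz, dy, dx in offsets
                            if 0 <= z + dz < shape[0]
                            and 0 <= y + dy < shape[1]
                            and 0 <= x + dx < shape[2]
                            and labels[z + dz, y + dy, x + dx] != lab
                        )
                        costs.append(data_cost[lab, z, y, x] + smooth)
                    labels[z, y, x] = int(np.argmin(costs))
    return labels
```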


Computer Vision and Pattern Recognition | 2011

Learning-based hypothesis fusion for robust catheter tracking in 2D X-ray fluoroscopy

Wen Wu; Terrence Chen; Peng Wang; Shaohua Kevin Zhou; Dorin Comaniciu; Adrian Barbu; Norbert Strobel

Catheter tracking has become more and more important in recent interventional applications. It provides real-time navigation for the physicians and can be used to control a motion-compensated fluoro overlay reference image for other means of guidance, e.g., involving a 3D anatomical model. Tracking the coronary sinus (CS) catheter is effective for compensating respiratory and cardiac motion for 3D overlay navigation, to assist in positioning the ablation catheter in atrial fibrillation (AFib) treatments. During interventions, the CS catheter undergoes rapid motion and non-rigid deformation due to the beating heart and respiration. In this paper, we model the CS catheter as a set of electrodes. Specially designed hypotheses generated by a number of learning-based detectors are fused. Robust hypothesis matching through a Bayesian framework is then used to select the best hypothesis for each frame. As a result, our tracking method achieves very high robustness against challenging scenarios such as low SNR, occlusion, foreshortening, non-rigid deformation, and the catheter moving in and out of the ROI. Quantitative evaluation has been conducted on a database of 13221 frames from 1073 sequences. Our approach obtains a 0.50 mm median error and a 0.76 mm mean error; 97.8% of the evaluated data have errors of less than 2.00 mm. The speed of our tracking algorithm reaches 5 frames per second on most data sets. Our approach is not limited to catheters inside the CS but can be extended to track other types of catheters, such as ablation catheters or circumferential mapping catheters.
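
The per-frame hypothesis fusion can be sketched as choosing, among the candidate catheter configurations produced by the detectors, the one that maximizes a posterior combining a detection score with a smooth-motion prior relative to the previous frame's result. The Gaussian prior, the scoring interface, and the fixed electrode count implied below are assumptions made for illustration, not the paper's exact model.

```python
import numpy as np

# Hypothetical per-frame hypothesis fusion: pick the candidate configuration
# (set of electrode positions) maximizing detection likelihood times a
# smooth-motion prior relative to the previously selected hypothesis.

def track(frames, generate_hypotheses, detection_score, sigma=5.0):
    previous = None
    trajectory = []
    for frame in frames:
        best, best_logpost = None, -np.inf
        for hyp in generate_hypotheses(frame):        # fused detector outputs
            log_likelihood = np.log(detection_score(frame, hyp) + 1e-12)
            if previous is None:
                log_prior = 0.0
            else:                                     # penalize large jumps
                displacement = np.linalg.norm(np.asarray(hyp) - np.asarray(previous))
                log_prior = -0.5 * (displacement / sigma) ** 2
            if log_likelihood + log_prior > best_logpost:
                best, best_logpost = hyp, log_likelihood + log_prior
        trajectory.append(best)
        previous = best
    return trajectory
```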


Computer Vision and Pattern Recognition | 2008

Accurate polyp segmentation for 3D CT colonography using multi-staged probabilistic binary learning and compositional model

Le Lu; Adrian Barbu; Matthias Wolf; Jianming Liang; Marcos Salganicoff; Dorin Comaniciu

Accurate and automatic colonic polyp segmentation and measurement in Computed Tomography (CT) is of significant importance for 3D polyp detection, classification, and, more generally, computer-aided diagnosis of colon cancers. In this paper, we propose a three-staged probabilistic binary classification approach for automatically segmenting polyp voxels from their surrounding tissues in CT. Our system integrates low- and mid-level information for discriminative learning under local polar coordinates aligned on the 3D colon surface around the detected polyp. More importantly, our supervised learning system has flexible modeling capacity, which offers a principled means of encoding semantic, clinical expert annotations of colonic polyp tissue identification and segmentation. The generalization to unseen data is bounded by boosting [12, 11] and stacked generalization [14]. Extensive experimental results on polyp segmentation performance evaluation and robustness testing with disturbances (using both training data and unseen data) are provided to validate the presented approach. The reliability of polyp segmentation and measurement has been largely increased to 98.2% (i.e., errors ≤ 3 mm), compared with about 75%-80% for other state-of-the-art work [4, 15].
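
The multi-staged, stacked flavour of the voxel classification can be illustrated with the sketch below, which assumes scikit-learn-style classifiers with fit and predict_proba methods: each stage re-classifies voxels using the original features augmented with the previous stage's probability estimate. It is a schematic of stacking, not the paper's exact pipeline.

```python
import numpy as np

# Hypothetical multi-stage (stacked) probabilistic voxel classification: later
# stages see the original features plus the previous stage's probability map.

def train_stages(features, labels, make_classifier, n_stages=3):
    """features: (n_voxels, d); labels: (n_voxels,) in {0, 1}."""
    stages, augmented = [], features
    for _ in range(n_stages):
        clf = make_classifier()              # any probabilistic binary classifier
        clf.fit(augmented, labels)
        probs = clf.predict_proba(augmented)[:, 1]
        stages.append(clf)
        # Stack: feed the current probability estimate to the next stage.
        augmented = np.column_stack([features, probs])
    return stages

def predict_stages(features, stages):
    augmented = features
    for clf in stages:
        probs = clf.predict_proba(augmented)[:, 1]
        augmented = np.column_stack([features, probs])
    return probs > 0.5  # final binary polyp/non-polyp decision per voxel
```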

Collaboration


Dive into Adrian Barbu's collaborations.

Top Co-Authors

Song-Chun Zhu (University of California)
Le Lu (National Institutes of Health)
Nathan Lay (Florida State University)
Liangjing Ding (Florida State University)