Publication

Featured research published by Neil Birkbeck.


International Conference on Computer Vision | 2007

3D Variational Brain Tumor Segmentation using a High Dimensional Feature Set

Dana Cobzas; Neil Birkbeck; Mark W. Schmidt; Martin Jagersand; Albert Murtha

Tumor segmentation from MRI data is an important but time-consuming task performed manually by medical experts. Automating this process is challenging due to the high diversity in appearance of tumor tissue among different patients and, in many cases, the similarity between tumor and normal tissue. Another challenge is how to make use of prior information about the appearance of the normal brain. In this paper we propose a variational brain tumor segmentation algorithm that extends current texture segmentation approaches by using a high-dimensional feature set calculated from MRI data and registered atlases. Using manually segmented data, we learn a statistical model for tumor and normal tissue. We show that using a conditional model to discriminate between normal and abnormal regions significantly improves segmentation results compared to traditional generative models. Validation is performed by testing the method on several cancer patients' MRI scans.
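
The conditional (discriminative) model is the key ingredient here. As a rough illustration of the idea, and not the authors' implementation, the sketch below builds a per-voxel feature set from several MRI channels plus a registered atlas prior, trains a logistic-regression stand-in for the paper's discriminative model, and produces the tumor-probability map that a variational surface evolution would use as its data term. All arrays and names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for co-registered MRI channels and an atlas prior.
rng = np.random.default_rng(0)
t1, t2, flair = (rng.normal(size=(32, 32, 32)) for _ in range(3))
atlas_prior = gaussian_filter(rng.random((32, 32, 32)), sigma=4)
labels = (atlas_prior > atlas_prior.mean()).astype(int)  # stand-in manual mask

def voxel_features(t1, t2, flair, prior):
    # Per-voxel feature set: raw intensities, smoothed intensities (a crude
    # stand-in for the paper's texture features), and the registered atlas prior.
    chans = [t1, t2, flair, prior]
    chans += [gaussian_filter(c, sigma=2.0) for c in (t1, t2, flair)]
    return np.stack([c.ravel() for c in chans], axis=1)

# Conditional model P(tumor | features), learned from manually segmented data;
# its probability map serves as the data term of the variational evolution.
clf = LogisticRegression(max_iter=1000)
clf.fit(voxel_features(t1, t2, flair, atlas_prior), labels.ravel())
p_tumor = clf.predict_proba(voxel_features(t1, t2, flair, atlas_prior))[:, 1]
p_tumor = p_tumor.reshape(t1.shape)
```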


Medical Image Computing and Computer-Assisted Intervention | 2011

Automatic multi-organ segmentation using learning-based segmentation and level set optimization

Timo Kohlberger; Michal Sofka; Jingdan Zhang; Neil Birkbeck; Jens Wetzl; Jens N. Kaftan; Jerome Declerck; S. Kevin Zhou

We present a novel generic segmentation system for fully automatic multi-organ segmentation from CT medical images. It combines the advantages of learning-based approaches operating on a point cloud-based shape representation, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys, we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. The level set segmentation, which is initialized by the learning-based segmentations, contributes a 20%-40% increase in accuracy.
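
To give a feel for the two-stage design, here is a minimal sketch assuming a coarse binary mask from the learning-based stage is already available; scikit-image's morphological Chan-Vese serves as a simple stand-in for the paper's PDE-optimized level set (the real system also prevents inter-organ overlaps, which is omitted here).

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def refine_with_level_set(volume, coarse_mask, n_iter=50):
    # coarse_mask: binary initialization from the learned shape model; the
    # level-set stage adds the fine-grained accuracy the detector alone lacks.
    return morphological_chan_vese(volume, n_iter,
                                   init_level_set=coarse_mask.astype(np.int8),
                                   smoothing=2)

# Toy demonstration on a synthetic "organ" with a deliberately loose init.
rng = np.random.default_rng(1)
vol = np.zeros((48, 48, 48))
vol[12:36, 12:36, 12:36] = 1.0
vol += rng.normal(scale=0.3, size=vol.shape)
init = np.zeros(vol.shape, dtype=np.int8)
init[16:32, 16:32, 16:32] = 1
refined = refine_with_level_set(vol, init)
```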


Information Processing in Medical Imaging | 2013

Rapid multi-organ segmentation using context integration and discriminative models

Nathan Lay; Neil Birkbeck; Jingdan Zhang; S. Kevin Zhou

We propose a novel framework for rapid and accurate segmentation of a cohort of organs. First, it integrates local and global image context through a product rule to simultaneously detect multiple landmarks on the target organs. The global posterior integrates evidence over all volume patches, while the local image context is modeled with a local discriminative classifier. Through non-parametric modeling of the global posterior, it exploits sparsity in the global context for efficient detection. The complete surface of the target organs is then inferred by robust alignment of a shape model to the resulting landmarks and finally deformed using discriminative boundary detectors. Using our approach, we demonstrate efficient detection and accurate segmentation of liver, kidneys, heart, and lungs in challenging low-resolution MR data in less than one second, and of prostate, bladder, rectum, and femoral heads in CT scans, in roughly one to three seconds and in both cases with accuracy fairly close to inter-user variability.
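
The product rule itself is simple to state. Below is a schematic sketch, with hypothetical names rather than the paper's API, of combining a dense global posterior with a local discriminative score while exploiting sparsity: only the strongest global candidates are rescored locally.

```python
import numpy as np

def detect_landmark(p_global, local_score, top_k=200):
    """p_global: dense global posterior over voxel positions.
    local_score: callable mapping a (z, y, x) index to a local classifier
    probability. Only the top-k global candidates are evaluated locally,
    then the two contexts are combined by the product rule."""
    flat = p_global.ravel()
    cand = np.argpartition(flat, -top_k)[-top_k:]  # sparse global support
    posteriors = [flat[i] * local_score(np.unravel_index(i, p_global.shape))
                  for i in cand]
    best = cand[int(np.argmax(posteriors))]
    return np.unravel_index(best, p_global.shape)

# Toy usage: a random global prior and a dummy local classifier peaked at the
# volume center.
rng = np.random.default_rng(2)
prior = rng.random((32, 32, 32))
prior /= prior.sum()
local = lambda idx: float(np.exp(-np.sum((np.array(idx) - 16.0) ** 2) / 50.0))
print(detect_landmark(prior, local))
```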


Medical Image Computing and Computer-Assisted Intervention | 2011

Multi-stage learning for robust lung segmentation in challenging CT volumes

Michal Sofka; Jens Wetzl; Neil Birkbeck; Jingdan Zhang; Timo Kohlberger; Jens N. Kaftan; Jerome Declerck; S. Kevin Zhou

Simple algorithms for segmenting healthy lung parenchyma in CT are unable to deal with the high-density tissue common in pulmonary diseases. To overcome this problem, we propose a multi-stage learning-based approach that combines anatomical information to predict an initialization of a statistical shape model of the lungs. The initialization stage first detects the carina of the trachea and uses it to detect a set of automatically selected stable landmarks on regions near the lungs (e.g., ribs, spine). These landmarks are used to align the shape model, which is then refined through boundary detection to obtain a fine-grained segmentation. Robustness is obtained through hierarchical use of discriminative classifiers trained on a range of manually annotated data from diseased and healthy lungs. We demonstrate fast detection (35 s per volume on average) and segmentation with 2 mm accuracy on challenging data.
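
The landmark-to-shape-model alignment step can be illustrated with a standard similarity Procrustes (Umeyama) fit. This is an assumed stand-in, not the paper's code; the paper's robust alignment additionally handles outlier landmarks.

```python
import numpy as np

def align_shape_to_landmarks(model_pts, detected_pts):
    """Least-squares similarity transform (Umeyama/Procrustes) taking the
    shape model's landmarks onto the detected ones; apply as s * R @ x + t."""
    mu_m, mu_d = model_pts.mean(axis=0), detected_pts.mean(axis=0)
    A, B = model_pts - mu_m, detected_pts - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (A ** 2).sum()
    t = mu_d - s * R @ mu_m
    return s, R, t

# Toy check: a known scale and translation are recovered.
rng = np.random.default_rng(3)
m = rng.normal(size=(12, 3))
d = 1.5 * m + np.array([4.0, -2.0, 1.0])
s, R, t = align_shape_to_landmarks(m, d)  # s ~ 1.5, R ~ I, t ~ (4, -2, 1)
```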


European Conference on Computer Vision | 2006

Variational shape and reflectance estimation under changing light and viewpoints

Neil Birkbeck; Dana Cobzas; Peter F. Sturm; Martin Jagersand

Fitting parameterized 3D shape and general reflectance models to 2D image data is challenging due to the high dimensionality of the problem. The proposed method combines the capabilities of classical and photometric stereo, allowing for accurate reconstruction of both textured and non-textured surfaces. In particular, we present a variational method implemented as a PDE-driven surface evolution interleaved with reflectance estimation. The surface is represented on an adaptive mesh allowing topological change. To provide the input data, we have designed a capture setup that simultaneously acquires both viewpoint and light variation while minimizing self-shadowing. Our capture method is feasible for real-world application as it requires a moderate amount of input data and processing time. In experiments, models of people and everyday objects were captured from a few dozen images taken with a consumer digital camera. The capture process recovers a photo-consistent model of spatially varying Lambertian and specular reflectance and a highly accurate geometry.
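
The reflectance half of the interleaved optimization reduces, per surface point, to a small fitting problem. A minimal sketch, assuming fixed geometry (normals) and a Blinn-Phong specular model with a fixed shininess exponent, so that the diffuse and specular coefficients can be fit by linear least squares; this is an illustration of the idea, not the paper's estimator.

```python
import numpy as np

def fit_reflectance(I_obs, n, lights, views, shininess=20.0):
    """I_obs: (k,) intensities observed at one surface point; n: unit normal;
    lights/views: (k, 3) unit directions. Returns diffuse and specular
    coefficients (kd, ks) by linear least squares."""
    ndotl = np.clip(lights @ n, 0.0, None)         # Lambertian term
    h = lights + views
    h /= np.linalg.norm(h, axis=1, keepdims=True)  # half vectors
    spec = np.clip(h @ n, 0.0, None) ** shininess  # specular term
    A = np.stack([ndotl, spec], axis=1)
    (kd, ks), *_ = np.linalg.lstsq(A, I_obs, rcond=None)
    return kd, ks

# Toy usage: a purely Lambertian signal yields kd ~ 0.7 and ks ~ 0.
rng = np.random.default_rng(4)
L = rng.normal(size=(30, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)
V = np.tile([0.0, 0.0, 1.0], (30, 1))
n = np.array([0.0, 0.0, 1.0])
print(fit_reflectance(0.7 * np.clip(L @ n, 0.0, None), n, L, V))
```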


Medical Image Computing and Computer-Assisted Intervention | 2012

Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory

Chao Lu; Yefeng Zheng; Neil Birkbeck; Jingdan Zhang; Timo Kohlberger; Christian Tietjen; Thomas Boettger; James S. Duncan; S. Kevin Zhou

In this paper, we present a novel method that incorporates information theory into a learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder, and rectum). We target 3D CT volumes generated using different scanning protocols (e.g., contrast and non-contrast, with and without implants in the prostate, various resolutions and positions) and coming from largely diverse sources (e.g., patients with disease in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning techniques based on steerable features are applied for robust boundary detection, enabling the handling of highly heterogeneous texture patterns. Third, a novel information-theoretic scheme is incorporated into the boundary inference process: the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning, and image-guided radiotherapy for cancers in the pelvic region.
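
The Jensen-Shannon ingredient is easy to state on its own. A small illustrative sketch (function names are hypothetical; the paper embeds this divergence in its boundary inference rather than using it as a standalone score): the divergence between intensity histograms inside and outside the evolving mesh measures how well the boundary separates the two regions.

```python
import numpy as np

def jensen_shannon(p, q, eps=1e-12):
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def region_separation(volume, mask, bins=64):
    # Intensity histograms inside vs. outside the current segmentation; a
    # larger divergence means the boundary separates the regions better.
    lo, hi = float(volume.min()), float(volume.max())
    h_in, _ = np.histogram(volume[mask], bins=bins, range=(lo, hi))
    h_out, _ = np.histogram(volume[~mask], bins=bins, range=(lo, hi))
    return jensen_shannon(h_in.astype(float), h_out.astype(float))

# Toy usage on a volume with two distinct intensity regions.
rng = np.random.default_rng(5)
vol = rng.normal(size=(16, 16, 16))
vol[:8] += 2.0
mask = np.zeros(vol.shape, dtype=bool)
mask[:8] = True
print(region_separation(vol, mask))  # high separation
```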


Workshop on Applications of Computer Vision | 2009

An interactive graph cut method for brain tumor segmentation

Neil Birkbeck; Dana Cobzas; Martin Jagersand; Albert Murtha; Tibor Kesztyues

Tumor segmentation from MRI data is an important but time consuming task performed manually by medical experts. Automating this process is challenging due to the high diversity in appearance of tumor tissue among different patients and, in many cases, similarity between tumor and normal tissue. We propose a semi-automatic interactive brain tumor segmentation system that incorporates 2D interactive and 3D automatic tools with the ability to adjust operator control. The provided methods are based on an energy that incorporates region statistics computed on available MRI modalities and the usual regularization term. The energy is efficiently minimized on-line using graph cut. Experiments with radiation oncologists testing the semi-automatic tool vs. a manual tool show that the proposed system improves both segmentation time and repeatability.
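
As a hedged sketch of the graph-cut step, assume a per-voxel tumor probability map has already been computed from the MRI region statistics; the energy with unary negative log-likelihood terms and a Potts regularizer can then be minimized exactly with the PyMaxflow library. The interactive 2D/3D tooling and multi-modality statistics of the paper are beyond this fragment.

```python
import numpy as np
import maxflow  # pip install PyMaxflow

def graph_cut_segment(p_tumor, lam=0.5, eps=1e-6):
    """Exactly minimize unary -log likelihood terms plus a lam-weighted Potts
    smoothness term on the voxel grid via min-cut."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(p_tumor.shape)
    g.add_grid_edges(nodes, weights=lam)  # regularization between neighbors
    # t-link capacities (PyMaxflow pays the source capacity for sink-side
    # nodes); the sink side is interpreted as "tumor" here.
    g.add_grid_tedges(nodes,
                      -np.log(p_tumor + eps),        # paid if labeled tumor
                      -np.log(1.0 - p_tumor + eps))  # paid if labeled normal
    g.maxflow()
    return g.get_grid_segments(nodes)  # boolean mask, True = tumor

# Toy usage on a noisy probability map with a bright cube.
rng = np.random.default_rng(6)
p = np.clip(rng.normal(0.2, 0.15, size=(24, 24, 24)), 0.01, 0.99)
p[8:16, 8:16, 8:16] = 0.9
seg = graph_cut_segment(p)
```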


International Symposium on Biomedical Imaging | 2011

Integrated Detection Network (IDN) for pose and boundary estimation in medical images

Michal Sofka; Kristof Ralovich; Neil Birkbeck; Jingdan Zhang; S. Kevin Zhou

The expanding role of complex object detection algorithms introduces a need for flexible architectures that simplify interfacing with machine learning techniques and offer easy-to-use training and detection procedures. To address this need, we propose the Integrated Detection Network (IDN), a conceptual design for rapid prototyping of object and boundary detection systems. The IDN uses the strong spatial priors present in the medical imaging domain and a large annotated database of images to train robust detectors. The best detection hypotheses are propagated throughout the detection network using sequential sampling techniques. The effectiveness of the IDN is demonstrated on two learning-based algorithms: (1) automatic detection of fetal brain structures in ultrasound volumes, and (2) liver boundary detection in MRI volumes. Modifying the detection pipeline is simple and allows for immediate adaptation to variations of the desired algorithms. Both systems achieved low detection error (3.09 and 4.20 mm for the two brain structures and 2.53 mm for the liver boundary).
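
The sequential propagation of hypotheses can be sketched generically (all names here are illustrative, not the IDN's interfaces): each detector stage rescores the surviving candidates and prunes to the best few before the next, typically more expensive, stage runs.

```python
def propagate_hypotheses(candidates, stages, keep=50):
    """candidates: list of (state, score) pairs; stages: detector functions
    mapping a state to an incremental score. Each stage rescores the surviving
    hypotheses and prunes to the best `keep` before the next stage runs."""
    for stage in stages:
        candidates = [(s, sc * stage(s)) for s, sc in candidates]
        candidates.sort(key=lambda c: c[1], reverse=True)
        candidates = candidates[:keep]
    return candidates

# Toy usage: two stages that prefer states near 3.0 and 3.5 respectively.
hyps = [(float(x), 1.0) for x in range(10)]
stages = [lambda s: 1.0 / (1.0 + abs(s - 3.0)),
          lambda s: 1.0 / (1.0 + abs(s - 3.5))]
print(propagate_hypotheses(hyps, stages, keep=3))
```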


Canadian Conference on Computer and Robot Vision | 2008

Realtime Visualization of Monocular Data for 3D Reconstruction

Adam Rachmielowski; Neil Birkbeck; Martin Jagersand; Dana Cobzas

Methods for reconstructing photorealistic 3D graphics models from images or video are appealing applications of computer vision. Such methods rely on good input image data, but the lack of user feedback during image acquisition often leads to incomplete or poorly sampled reconstruction results. We describe a video-based system that constructs and visualizes a coarse graphics model in real time and automatically saves a set of images appropriate for later offline dense reconstruction. Visualization of the model during image acquisition allows the operator to interactively verify that an adequate set of input images has been collected for the modeling task, while automatic image selection keeps storage requirements to a minimum. Our implementation uses real-time monocular SLAM to compute and continuously extend a 3D model, augmented with keyframe selection for storage, surface modeling, and online rendering of the current structure textured from a selection of keyframes. This rendering gives an immediate and intuitive view of both the geometry and whether suitable viewpoints for texture images have already been captured.
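
One ingredient, automatic keyframe selection, is easy to illustrate. A hedged sketch, assuming camera poses are available as rotation/translation pairs and using illustrative thresholds: a frame is stored only if it adds sufficient baseline or viewing-angle novelty over every existing keyframe.

```python
import numpy as np

def should_save_keyframe(pose, keyframes, min_baseline=0.10, min_angle_deg=10.0):
    """pose: (R, t) of the current camera; keyframes: list of stored (R, t).
    Store the frame only if it is sufficiently novel w.r.t. every keyframe."""
    R, t = pose
    for R_k, t_k in keyframes:
        baseline = np.linalg.norm(t - t_k)
        cos_a = 0.5 * (np.trace(R @ R_k.T) - 1.0)  # relative rotation angle
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if baseline < min_baseline and angle < min_angle_deg:
            return False  # too similar to an existing keyframe
    return True
```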


International Conference on Robotics and Automation | 2010

Performance evaluation of monocular predictive display

Adam Rachmielowski; Neil Birkbeck; Martin Jagersand

In teleoperation systems, operator performance is negatively affected by time-delayed visual feedback. Predictive display (PD) compensates for delays by providing synthesized visual feedback. While most existing PD methods rely on a priori models (e.g., from laser range finding or stereo vision), recent work on monocular SLAM and SFM makes it possible to acquire PD models in single-camera applications. In this work, we evaluate operator performance with PD visual feedback based on a coarse 3D model. We report the experimental results of 12 human teleoperators each performing 96 visual alignment tasks with a 300 ms delay. Four operating modes are considered: delayed video (no PD), video-based PD using a stabilizing plane (homography), 3D model-based PD, and no delay (ground truth). The results indicate that vision-based PD (both plane- and 3D model-based) is significantly better than delayed video: it reduced task completion time by 40% and is nearly as good as the no-delay condition. PD based on a sparse 3D model was somewhat better than the simpler plane-based method.
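
The plane-based PD mode admits a compact sketch using OpenCV, under the assumption of a known intrinsic matrix K, a predicted relative pose (R, t) between the delayed and current viewpoints, and a stabilizing plane with unit normal n at depth d in the delayed camera's frame; in practice all of these would come from the SLAM/SFM model, and this is an illustration rather than the paper's implementation.

```python
import numpy as np
import cv2  # pip install opencv-python

def predict_view(delayed_frame, K, R_rel, t_rel, n_plane, d_plane):
    """Warp the last delayed frame to the predicted viewpoint using the
    plane-induced homography H = K (R - t n^T / d) K^{-1}."""
    H = K @ (R_rel - np.outer(t_rel, n_plane) / d_plane) @ np.linalg.inv(K)
    h, w = delayed_frame.shape[:2]
    return cv2.warpPerspective(delayed_frame, H, (w, h))
```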
