Publication


Featured research published by Ozan Oktay.


Medical Image Computing and Computer-Assisted Intervention | 2016

Multi-input Cardiac Image Super-Resolution Using Convolutional Neural Networks

Ozan Oktay; Wenjia Bai; Matthew C. H. Lee; Ricardo Guerrero; Konstantinos Kamnitsas; Jose Caballero; Antonio de Marvao; Stuart A. Cook; Declan P. O’Regan; Daniel Rueckert

3D cardiac MR imaging enables accurate analysis of cardiac morphology and physiology. However, due to the long acquisition times and breath-holds required, the clinical routine is still dominated by multi-slice 2D imaging, which hampers the visualization of anatomy and quantitative measurements because relatively thick slices are acquired. As a solution, we propose a novel image super-resolution (SR) approach based on a residual convolutional neural network (CNN) model. It reconstructs high-resolution 3D volumes from 2D image stacks for more accurate image analysis. The proposed model allows the use of multiple input data acquired from different viewing planes for improved performance. Experimental results on 1233 cardiac short- and long-axis MR image stacks show that the CNN model outperforms state-of-the-art SR methods in terms of image quality while being computationally efficient. We also show that image segmentation and motion tracking benefit more from SR-CNN than from conventional interpolation methods when it is used as the initial upscaling step for subsequent analysis.
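As a rough illustration of the residual-learning idea described above (interpolate the thick-slice volume first, then predict a correction on top of it), here is a minimal PyTorch sketch. It is not the paper's architecture: layer counts, channel widths and the multi-input fusion are all simplified away, and every name in it is hypothetical.

```python
# Minimal residual super-resolution CNN sketch (illustrative, not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualSRCNN(nn.Module):
    def __init__(self, channels=1, features=64, scale=4):
        super().__init__()
        self.scale = scale  # assumed upsampling factor in the slice direction
        self.head = nn.Conv3d(channels, features, kernel_size=3, padding=1)
        self.body = nn.Sequential(
            nn.Conv3d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(features, features, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.tail = nn.Conv3d(features, channels, kernel_size=3, padding=1)

    def forward(self, x):
        # Upsample the thick-slice stack, then learn only the residual refinement.
        up = F.interpolate(x, scale_factor=(self.scale, 1, 1),
                           mode='trilinear', align_corners=False)
        residual = self.tail(self.body(F.relu(self.head(up))))
        return up + residual  # output = interpolation + learnt correction

# Example: an 8-slice stack upsampled 4x in the through-plane direction.
lr_stack = torch.randn(1, 1, 8, 64, 64)   # (batch, channel, slices, H, W)
hr_volume = ResidualSRCNN()(lr_stack)
print(hr_volume.shape)                     # torch.Size([1, 1, 32, 64, 64])
```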


IEEE Transactions on Medical Imaging | 2018

Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation

Ozan Oktay; Enzo Ferrante; Konstantinos Kamnitsas; Mattias P. Heinrich; Wenjia Bai; Jose Caballero; Stuart A. Cook; Antonio de Marvao; Timothy Dawes; Declan O'Regan; Bernhard Kainz; Ben Glocker; Daniel Rueckert

Incorporation of prior knowledge about organ shape and location is key to improving the performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in the most recent and promising techniques, such as CNN-based segmentation, it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improves the prediction accuracy of state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac data sets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.
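The regularisation strategy can be sketched as a two-term loss, assuming (as one plausible reading of the abstract) that the shape prior is a pre-trained autoencoder over label maps whose latent space provides the "learnt non-linear representation" of shape. `seg_net`, `shape_encoder` and the weighting below are placeholders, not the paper's implementation.

```python
# ACNN-style objective sketch: segmentation loss plus a penalty on the distance
# between prediction and ground truth in the latent space of a shape autoencoder.
# All names are placeholders; the encoder is assumed pre-trained and frozen.
import torch
import torch.nn.functional as F

def acnn_loss(seg_net, shape_encoder, image, target, weight=0.01):
    """Cross-entropy segmentation loss + latent-space shape regularisation."""
    logits = seg_net(image)                      # (B, C, D, H, W) class scores
    ce = F.cross_entropy(logits, target)         # target: (B, D, H, W), long
    probs = logits.softmax(dim=1)
    onehot = F.one_hot(target, logits.shape[1]).movedim(-1, 1).float()
    with torch.no_grad():                        # ground-truth code needs no grad
        z_true = shape_encoder(onehot)
    z_pred = shape_encoder(probs)                # gradients flow back into seg_net
    return ce + weight * F.mse_loss(z_pred, z_true)
```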


Medical Image Computing and Computer-Assisted Intervention | 2013

Biomechanically Driven Registration of Pre- to Intra-Operative 3D Images for Laparoscopic Surgery

Ozan Oktay; Li Zhang; Tommaso Mansi; Peter Mountney; Philip Mewes; Stéphane Nicolau; Luc Soler; Christophe Chefd’hotel

Minimally invasive laparoscopic surgery is widely used for the treatment of cancer and other diseases. During the procedure, gas insufflation is used to create space for the laparoscopic tools and the operation. Insufflation causes the organs and abdominal wall to deform significantly. Due to this large deformation, the benefit of surgical plans, which are typically based on pre-operative images, is limited for real-time navigation. Some recent work introduces intra-operative images, such as cone-beam CT or interventional CT, to provide updated volumetric information after insufflation. Other work in this area has focused on simulating gas insufflation using only the pre-operative images to estimate the deformation. This paper proposes a novel registration method for pre- and intra-operative 3D image fusion for laparoscopic surgery. In this approach, the deformation of pre-operative images is driven by a biomechanical model of the insufflation process. The proposed method was validated on five synthetic data sets generated from clinical images and three pairs of in vivo CT scans acquired from two pigs, before and after insufflation. The results show that the proposed method achieves high accuracy for both the synthetic and real insufflation data.


IEEE Transactions on Medical Imaging | 2017

Stratified Decision Forests for Accurate Anatomical Landmark Localization in Cardiac Images

Ozan Oktay; Wenjia Bai; Ricardo Guerrero; Martin Rajchl; Antonio de Marvao; Declan O'Regan; Stuart A. Cook; Mattias P. Heinrich; Ben Glocker; Daniel Rueckert

Accurate localization of anatomical landmarks is an important step in medical imaging, as it provides useful prior information for subsequent image analysis and acquisition methods. It is particularly useful for initialization of automatic image analysis tools (e.g. segmentation and registration) and detection of scan planes for automated image acquisition. Landmark localization has been commonly performed using learning based approaches, such as classifier and/or regressor models. However, trained models may not generalize well in heterogeneous datasets when the images contain large differences due to size, pose and shape variations of organs. To learn more data-adaptive and patient specific models, we propose a novel stratification based training model, and demonstrate its use in a decision forest. The proposed approach does not require any additional training information compared to the standard model training procedure and can be easily integrated into any decision tree framework. The proposed method is evaluated on 1080 3D high-resolution and 90 multi-stack 2D cardiac cine MR images. The experiments show that the proposed method achieves state-of-the-art landmark localization accuracy and outperforms standard regression and classification based approaches. Additionally, the proposed method is used in a multi-atlas segmentation to create a fully automatic segmentation pipeline, and the results show that it achieves state-of-the-art segmentation accuracy.
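One way to picture stratification-based training is the toy scikit-learn sketch below: images are grouped into strata by a global pose/shape descriptor and a separate landmark regressor is trained per stratum. The paper integrates stratification inside the decision trees themselves, so this per-stratum ensemble is only a loose approximation of the idea; all data and names here are synthetic.

```python
# Toy stratified-training sketch: cluster images by a pose/shape descriptor,
# then fit one landmark-coordinate regressor per stratum (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))      # per-image appearance features (toy)
pose = rng.normal(size=(200, 4))    # global pose/shape descriptor (toy)
y = rng.normal(size=(200, 6))       # landmark coordinates to regress (toy)

strata = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pose)
forests = {}
for k in range(3):
    mask = strata.labels_ == k      # train only on images in this stratum
    forests[k] = RandomForestRegressor(n_estimators=50, random_state=0).fit(
        X[mask], y[mask])

# At test time, route a new image to the forest of its nearest stratum.
k_new = strata.predict(pose[:1])[0]
pred = forests[k_new].predict(X[:1])
```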


Medical Image Computing and Computer-Assisted Intervention | 2015

Structured Decision Forests for Multi-modal Ultrasound Image Registration

Ozan Oktay; Andreas Schuh; Martin Rajchl; Kevin Keraudren; Alberto Gómez; Mattias P. Heinrich; Graeme P. Penney; Daniel Rueckert

Interventional procedures for cardiovascular diseases often require ultrasound (US) image guidance. These US images must be combined with pre-operatively acquired tomographic images to provide a roadmap for the intervention. Spatial alignment of pre-operative images with intra-operative US images can provide valuable clinical information. Existing multi-modal US registration techniques often fail to achieve reliable registration due to low US image quality. To address this problem, this paper proposes a novel medical image representation, the probabilistic edge map (PEM), generated by a trained structured decision forest. PEMs are generic and modality-independent: they produce similar anatomical representations from different imaging modalities and can thus guide a multi-modal image registration algorithm more robustly and accurately. The presented image registration framework is evaluated on a clinical dataset consisting of 10 pairs of 3D US-CT and 7 pairs of 3D US-MR cardiac images. The experiments show that a registration based on PEMs is able to estimate more reliable and accurate inter-modality correspondences than other state-of-the-art US registration methods.
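The registration principle (map both modalities into a common edge representation, then align with a simple mono-modal similarity) can be illustrated with the toy NumPy/SciPy sketch below. Gradient magnitude stands in for the trained structured-forest PEM, and only an exhaustive 2D integer-translation search is shown; both are simplifications of the paper's approach.

```python
# Toy edge-map registration: convert each image to an edge representation and
# search for the translation maximising their correlation (illustrative only).
import numpy as np
from scipy import ndimage

def edge_map(img):
    # Stand-in for the PEM: normalised Gaussian gradient magnitude.
    g = ndimage.gaussian_gradient_magnitude(img.astype(float), sigma=2.0)
    return (g - g.min()) / (g.max() - g.min() + 1e-8)

def best_translation(fixed, moving, search=5):
    """Exhaustive search for the integer shift maximising edge correlation."""
    ef, em = edge_map(fixed), edge_map(moving)
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(em, dy, axis=0), dx, axis=1)
            score = np.corrcoef(ef.ravel(), shifted.ravel())[0, 1]
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best

fixed = np.random.default_rng(0).normal(size=(64, 64))
moving = np.roll(fixed, (3, -2), axis=(0, 1))
print(best_translation(fixed, moving))  # recovers the inverse shift, near (-3, 2)
```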


Medical Image Computing and Computer-Assisted Intervention | 2015

Automated Localization of Fetal Organs in MRI Using Random Forests with Steerable Features

Kevin Keraudren; Bernhard Kainz; Ozan Oktay; Vanessa Kyriakopoulou; Mary A. Rutherford; Joseph V. Hajnal; Daniel Rueckert

Fetal MRI is an invaluable diagnostic tool complementary to ultrasound thanks to its high contrast and resolution. Motion artifacts and the arbitrary orientation of the fetus are two main challenges of fetal MRI. In this paper, we propose a method based on Random Forests with steerable features to automatically localize the heart, lungs and liver in fetal MRI. During training, all MR images are mapped into a standard coordinate system that is defined by landmarks on the fetal anatomy and normalized for fetal age. Image features are then extracted in this coordinate system. During testing, features are computed for different orientations with a search space constrained by previously detected landmarks. The method was tested on healthy fetuses as well as fetuses with intrauterine growth restriction (IUGR) from 20 to 38 weeks of gestation. The detection rate was above 90% for all organs of healthy fetuses in the absence of motion artifacts. In the presence of motion, the detection rate was 83% for the heart, 78% for the lungs and 67% for the liver. Growth restriction did not decrease the performance of the heart detection but had an impact on the detection of the lungs and liver. The proposed method can be used to initialize subsequent processing steps such as segmentation or motion correction, as well as to automatically orient the 3D volume based on the fetal anatomy to facilitate clinical examination.


NeuroImage: Clinical | 2018

White matter hyperintensity and stroke lesion segmentation and differentiation using convolutional neural networks

Ricardo Guerrero; Chen Qin; Ozan Oktay; Christopher Bowles; Liang Chen; R. Joules; R. Wolz; Maria del C. Valdés-Hernández; David Alexander Dickie; Joanna M. Wardlaw; Daniel Rueckert

White matter hyperintensities (WMH) are a feature of sporadic small vessel disease that is also frequently observed in magnetic resonance images (MRI) of healthy elderly subjects. Accurate assessment of WMH burden is of crucial importance for epidemiological studies to determine associations between WMHs, cognitive and clinical data, their causes, and the effects of new treatments in randomized trials. Manual delineation of WMHs is a tedious, costly and time-consuming process that needs to be carried out by an expert annotator (e.g. a trained image analyst or radiologist). The problem of WMH delineation is further complicated by the fact that other pathological features (e.g. stroke lesions) often also appear as hyperintense regions. Recently, several automated methods aiming to tackle the challenges of WMH segmentation have been proposed. Most of these methods have been specifically developed to segment WMH in MRI but cannot differentiate between WMHs and strokes. Other methods, capable of distinguishing between different pathologies in brain MRI, are not designed with simultaneous WMH and stroke segmentation in mind. Therefore, a task-specific, reliable, fully automated method that can segment and differentiate between these two pathological manifestations on MRI has not yet been fully identified. In this work we propose a convolutional neural network (CNN) that is able to segment hyperintensities and differentiate between WMHs and stroke lesions. Specifically, we aim to distinguish WMH pathologies from those caused by stroke lesions due to either cortical, large or small subcortical infarcts. The proposed fully convolutional CNN architecture, called uResNet, comprises an analysis path, which gradually learns low- and high-level features, followed by a synthesis path, which gradually combines and up-samples the low- and high-level features into a class-likelihood semantic segmentation. Quantitatively, the proposed CNN architecture is shown to outperform other well-established and state-of-the-art algorithms in terms of overlap with manual expert annotations. Clinically, the extracted WMH volumes were found to correlate better with the Fazekas visual rating score than competing methods or the expert-annotated volumes. Additionally, the associations found between clinical risk factors and the WMH volumes generated by the proposed method were in line with the associations found with the expert-annotated volumes.
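The analysis/synthesis structure described above is the familiar U-shaped fully convolutional design; a compact PyTorch sketch follows. It mirrors the downsampling path, skip connection and upsampling path, but not uResNet's actual residual blocks, depth or channel counts, and the input/class configuration (e.g. two MR sequences, three classes) is an assumption for illustration.

```python
# Compact analysis/synthesis FCN sketch (illustrative, not uResNet itself).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=2, classes=3):   # assumed: 2 MR channels, 3 classes
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)          # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                        # analysis path: low-level features
        e2 = self.enc2(self.pool(e1))            # deeper, higher-level features
        d = self.up(e2)                          # synthesis path: upsample
        d = self.dec(torch.cat([d, e1], dim=1))  # combine low- and high-level
        return self.head(d)                      # logits; softmax gives likelihoods

logits = TinyUNet()(torch.randn(1, 2, 64, 64))   # -> shape (1, 3, 64, 64)
```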


Revised Selected Papers of the 6th International Workshop on Statistical Atlases and Computational Models of the Heart. Imaging and Modelling Challenges - Volume 9534 | 2015

Beyond the AHA 17-Segment Model: Motion-Driven Parcellation of the Left Ventricle

Wenjia Bai; Devis Peressutti; Sarah Parisot; Ozan Oktay; Martin Rajchl; Declan O'Regan; Stuart A. Cook; Andrew P. King; Daniel Rueckert

A major challenge for cardiac motion analysis is the high-dimensionality of the motion data. Conventionally, the AHA model is used for dimensionality reduction, which divides the left ventricle into 17 segments using criteria based on anatomical structures. In this paper, a novel method is proposed to divide the left ventricle into homogeneous parcels in terms of motion trajectories. We demonstrate that the motion-driven parcellation has good reproducibility and use it for data reduction and motion description on a dataset of 1093 subjects. The resulting motion descriptor achieves high performance on two exemplar applications, namely gender and age predictions. The proposed method has the potential to be applied to groupwise motion analysis.
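A toy sketch of the motion-driven parcellation idea, assuming (for illustration) that trajectories are clustered with plain k-means: myocardial points are grouped by the similarity of their motion trajectories rather than by anatomical position. The data here are random and the clustering choice is illustrative, not necessarily the paper's.

```python
# Toy motion-driven parcellation: cluster points by motion trajectory similarity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_points, n_frames = 500, 20
# Each row is one point's displacement trajectory over the cardiac cycle,
# flattened as (x_t, y_t, z_t) for t = 1..n_frames. Random stand-in data.
trajectories = rng.normal(size=(n_points, n_frames * 3))

# Parcellate into motion-homogeneous regions (17 chosen only to mirror AHA).
labels = KMeans(n_clusters=17, n_init=10, random_state=0).fit_predict(trajectories)

# The mean trajectory of each parcel then serves as a low-dimensional motion
# descriptor, e.g. as input to gender/age prediction.
descriptor = np.stack([trajectories[labels == k].mean(axis=0) for k in range(17)])
```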


Medical Image Computing and Computer-Assisted Intervention | 2017

Semi-supervised Learning for Network-Based Cardiac MR Image Segmentation

Wenjia Bai; Ozan Oktay; Matthew Sinclair; Hideaki Suzuki; Martin Rajchl; Giacomo Tarroni; Ben Glocker; Andrew P. King; Paul M. Matthews; Daniel Rueckert

Training a fully convolutional network for pixel-wise (or voxel-wise) image segmentation normally requires a large number of training images with corresponding ground-truth label maps. However, obtaining such a large training set is a challenge in the medical imaging domain, where expert annotations are time-consuming and difficult to acquire. In this paper, we propose a semi-supervised learning approach in which a segmentation network is trained from both labelled and unlabelled data. The network parameters and the segmentations for the unlabelled data are alternately updated. We evaluate the method for short-axis cardiac MR image segmentation, where it demonstrates high performance, outperforming a baseline supervised method. The mean Dice overlap metric is 0.92 for the left ventricular cavity, 0.85 for the myocardium and 0.89 for the right ventricular cavity. It also outperforms a state-of-the-art multi-atlas segmentation method by a large margin while being substantially faster.
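The alternating scheme can be written schematically as a self-training loop: re-estimate segmentations for the unlabelled images with the current network, then update the network on labelled plus pseudo-labelled data. The PyTorch-style sketch below uses placeholder names and omits the paper's specific losses and schedules.

```python
# Schematic alternating semi-supervised training loop (placeholder names).
import torch

def train_semi_supervised(net, optimiser, labelled_loader, unlabelled_images,
                          n_rounds=5, n_epochs=2):
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(n_rounds):
        # Step 1: re-estimate segmentations for the unlabelled data with the
        # current network; treat them as fixed pseudo ground truth this round.
        with torch.no_grad():
            pseudo_labels = [net(img).argmax(dim=1) for img in unlabelled_images]
        # Step 2: update network parameters on labelled + pseudo-labelled data.
        for _ in range(n_epochs):
            for img, lab in labelled_loader:
                optimiser.zero_grad()
                ce(net(img), lab).backward()
                optimiser.step()
            for img, lab in zip(unlabelled_images, pseudo_labels):
                optimiser.zero_grad()
                ce(net(img), lab).backward()
                optimiser.step()
    return net
```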


Journal of Cardiovascular Magnetic Resonance | 2018

Automated cardiovascular magnetic resonance image analysis with fully convolutional networks

Wenjia Bai; Matthew Sinclair; Giacomo Tarroni; Ozan Oktay; Martin Rajchl; Ghislain Vaillant; Aaron M. Lee; Nay Aung; Elena Lukaschuk; Mihir M. Sanghvi; Filip Zemrak; Kenneth Fung; José Miguel Paiva; Valentina Carapella; Young Jin Kim; Hideaki Suzuki; Bernhard Kainz; Paul M. Matthews; Steffen E. Petersen; Stefan K Piechnik; Stefan Neubauer; Ben Glocker; Daniel Rueckert

Background: Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of cardiac chamber volume, ejection fraction and myocardial mass, providing information for the diagnosis and monitoring of CVDs. However, for years, clinicians have relied on manual approaches for CMR image analysis, which are time-consuming and prone to subjective error. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images.

Methods: Deep neural networks have shown great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method is evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM), and right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV).

Results: By combining the FCN with a large-scale annotated dataset, the proposed automated method achieves high performance in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images. On a short-axis image test set of 600 subjects, it achieves an average Dice metric of 0.94 for the LV cavity, 0.88 for the LV myocardium and 0.90 for the RV cavity. The mean absolute difference between automated and manual measurements is 6.1 mL for LVEDV, 5.3 mL for LVESV, 6.9 g for LVM, 8.5 mL for RVEDV and 7.2 mL for RVESV. On long-axis image test sets, the average Dice metric is 0.93 for the LA cavity (2-chamber view), 0.95 for the LA cavity (4-chamber view) and 0.96 for the RA cavity (4-chamber view). The performance is comparable to human inter-observer variability.

Conclusions: We show that an automated method achieves performance on par with human experts in analysing CMR images and deriving clinically relevant measures.
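The Dice metric the reported numbers refer to is twice the intersection of two regions divided by the sum of their sizes; a minimal NumPy version for binary masks:

```python
# Dice similarity coefficient between two boolean segmentation masks.
import numpy as np

def dice(pred, truth):
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0  # 1.0 for two empty masks

a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
print(round(dice(a, b), 3))  # 0.64 (overlap 16, sizes 25 + 25)
```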

Collaboration


Dive into Ozan Oktay's collaborations.

Top Co-Authors

Wenjia Bai
Imperial College London

Ben Glocker
Imperial College London

Stuart A. Cook
National University of Singapore