
Publications


Featured research published by Konstantinos Kamnitsas.


Medical Image Analysis | 2017

Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation

Konstantinos Kamnitsas; Christian Ledig; Virginia Newcombe; Joanna P. Simpson; Andrew D. Kane; David K. Menon; Daniel Rueckert; Ben Glocker

Highlights: An efficient 11-layer-deep, multi-scale, 3D CNN architecture. A novel training strategy that significantly boosts performance. The first employment of a 3D fully connected CRF for post-processing. State-of-the-art performance on three challenging lesion segmentation tasks. New insights into the automatically learned intermediate representations.

Abstract: We propose a dual-pathway, 11-layer-deep, three-dimensional Convolutional Neural Network for the challenging task of brain lesion segmentation. The devised architecture is the result of an in-depth analysis of the limitations of current networks proposed for similar applications. To overcome the computational burden of processing 3D medical scans, we have devised an efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data. Further, we analyze the development of deeper, thus more discriminative, 3D CNNs. In order to incorporate both local and larger contextual information, we employ a dual-pathway architecture that processes the input images at multiple scales simultaneously. For post-processing of the network's soft segmentation, we use a 3D fully connected Conditional Random Field which effectively removes false positives. Our pipeline is extensively evaluated on three challenging tasks of lesion segmentation in multi-channel MRI patient data with traumatic brain injuries, brain tumours, and ischemic stroke. We improve on the state-of-the-art for all three applications, with top-ranking performance on the public benchmarks BRATS 2015 and ISLES 2015. Our method is computationally efficient, which allows its adoption in a variety of research and clinical settings. The source code of our implementation is made publicly available.
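The dense training scheme described in the abstract exploits the arithmetic of valid (unpadded) convolutions: feeding the network a segment larger than its receptive field yields predictions for a whole block of adjacent voxels in a single forward pass. A minimal sketch of that arithmetic (the layer counts and kernel sizes below are illustrative, not necessarily the exact published configuration):

```python
def receptive_field(kernel_sizes):
    """Receptive field of a stack of valid, stride-1 convolutions."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

def dense_outputs(segment_size, kernel_sizes):
    """Voxel predictions per dimension when a segment of the given
    size is pushed through the stack in one pass."""
    return segment_size - receptive_field(kernel_sizes) + 1

# Eight 3x3x3 conv layers give a 17-voxel receptive field per dimension;
# a 25-voxel segment then yields 9x9x9 predictions in one forward pass,
# instead of 9**3 = 729 separate patch-wise passes.
kernels = [3] * 8
print(receptive_field(kernels))    # 17
print(dense_outputs(25, kernels))  # 9
```

This is why training on image segments rather than single-voxel patches is so much cheaper: the cost of the shared convolutions is amortized over all voxels predicted in the pass.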


Medical Image Analysis | 2017

ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI

Oskar Maier; Bjoern H. Menze; Janina von der Gablentz; Levin Häni; Mattias P. Heinrich; Matthias Liebrand; Stefan Winzeck; Abdul W. Basit; Paul Bentley; Liang Chen; Daan Christiaens; Francis Dutil; Karl Egger; Chaolu Feng; Ben Glocker; Michael Götz; Tom Haeck; Hanna Leena Halme; Mohammad Havaei; Khan M. Iftekharuddin; Pierre-Marc Jodoin; Konstantinos Kamnitsas; Elias Kellner; Antti Korvenoja; Hugo Larochelle; Christian Ledig; Jia-Hong Lee; Frederik Maes; Qaiser Mahmood; Klaus H. Maier-Hein

Ischemic stroke is the most common cerebrovascular disease, and its diagnosis, treatment, and study rely on non-invasive imaging. Algorithms for stroke lesion segmentation from magnetic resonance imaging (MRI) volumes are intensely researched, but the reported results are largely incomparable due to different datasets and evaluation schemes. We approached this urgent problem of comparability with the Ischemic Stroke Lesion Segmentation (ISLES) challenge, organized in conjunction with the MICCAI 2015 conference. In this paper we propose a common evaluation framework, describe the publicly available datasets, and present the results of the two sub-challenges: Sub-Acute Stroke Lesion Segmentation (SISS) and Stroke Perfusion Estimation (SPES). A total of 16 research groups participated with a wide range of state-of-the-art automatic segmentation algorithms. A thorough analysis of the obtained data enables a critical evaluation of the current state-of-the-art, recommendations for further developments, and the identification of remaining challenges. The segmentation of acute perfusion lesions addressed in SPES was found to be feasible. However, algorithms applied to sub-acute lesion segmentation in SISS still lack accuracy. Overall, no algorithmic characteristic of any method was found to perform superior to the others. Instead, the characteristics of stroke lesion appearances, their evolution, and the observed challenges should be studied in detail. The annotated ISLES image datasets continue to be publicly available through an online evaluation system to serve as an ongoing benchmarking resource (www.isles-challenge.org).

Highlights: Evaluation framework for automatic stroke lesion segmentation from MRI. Public multi-center, multi-vendor, multi-protocol databases released. Ongoing fair and automated benchmark with expert-created ground truth sets. Comparison of 14+7 groups who responded to an open challenge at MICCAI. Segmentation feasible in acute and unsolved in sub-acute cases.


Information Processing in Medical Imaging | 2017

Unsupervised Domain Adaptation in Brain Lesion Segmentation with Adversarial Networks

Konstantinos Kamnitsas; Christian F. Baumgartner; Christian Ledig; Virginia Newcombe; Joanna P. Simpson; Andrew D. Kane; David K. Menon; Aditya V. Nori; Antonio Criminisi; Daniel Rueckert; Ben Glocker

Significant advances have been made towards building accurate automatic segmentation systems for a variety of biomedical applications using machine learning. However, the performance of these systems often degrades when they are applied on new data that differ from the training data, for example, due to variations in imaging protocols. Manually annotating new data for each test domain is not a feasible solution. In this work we investigate unsupervised domain adaptation using adversarial neural networks to train a segmentation method which is more robust to differences in the input data, and which does not require any annotations on the test domain. Specifically, we derive domain-invariant features by learning to counter an adversarial network, which attempts to classify the domain of the input data by observing the activations of the segmentation network. Furthermore, we propose a multi-connected domain discriminator for improved adversarial training. Our system is evaluated using two MR databases of subjects with traumatic brain injuries, acquired using different scanners and imaging protocols. Using our unsupervised approach, we obtain segmentation accuracies which are close to the upper bound of supervised domain adaptation.
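A common mechanism for this kind of adversarial feature learning is gradient reversal (as popularized by domain-adversarial training): the domain discriminator's loss gradient is flipped and scaled before it reaches the segmentation network's feature layers, so those features drift toward domain invariance. The scalar sketch below illustrates only that mechanism; the paper's actual training schedule and multi-connected discriminator are not reproduced here:

```python
def grad_reverse_forward(x):
    # Forward pass is the identity: the domain discriminator
    # observes the feature activations unchanged.
    return x

def grad_reverse_backward(grad, lam=1.0):
    # Backward pass flips and scales the gradient flowing into the
    # feature extractor, so the features are updated to *maximize*
    # the discriminator's domain-classification loss.
    return -lam * grad

features = 0.5
assert grad_reverse_forward(features) == features

# A gradient of +2.0 from the domain loss arrives at the feature
# extractor as -1.0 when lam = 0.5.
print(grad_reverse_backward(2.0, lam=0.5))  # -1.0
```

The scaling factor lam controls how strongly domain invariance is traded off against segmentation accuracy, and is typically ramped up over training.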


Medical Image Computing and Computer-Assisted Intervention | 2016

Multi-input Cardiac Image Super-Resolution Using Convolutional Neural Networks

Ozan Oktay; Wenjia Bai; Matthew C. H. Lee; Ricardo Guerrero; Konstantinos Kamnitsas; Jose Caballero; Antonio de Marvao; Stuart A. Cook; Declan P. O’Regan; Daniel Rueckert

3D cardiac MR imaging enables accurate analysis of cardiac morphology and physiology. However, due to the requirements for long acquisition and breath-hold, the clinical routine is still dominated by multi-slice 2D imaging, which hampers visualization of anatomy and quantitative measurements, as relatively thick slices are acquired. As a solution, we propose a novel image super-resolution (SR) approach that is based on a residual convolutional neural network (CNN) model. It reconstructs high-resolution 3D volumes from 2D image stacks for more accurate image analysis. The proposed model allows the use of multiple input data acquired from different viewing planes for improved performance. Experimental results on 1233 cardiac short- and long-axis MR image stacks show that the CNN model outperforms state-of-the-art SR methods in terms of image quality while being computationally efficient. We also show that image segmentation and motion tracking benefit more from SR-CNN than from conventional interpolation methods when it is used as an initial upscaling step for the subsequent analysis.


IEEE Transactions on Medical Imaging | 2018

Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation

Ozan Oktay; Enzo Ferrante; Konstantinos Kamnitsas; Mattias P. Heinrich; Wenjia Bai; Jose Caballero; Stuart A. Cook; Antonio de Marvao; Timothy Dawes; Declan O'Regan; Bernhard Kainz; Ben Glocker; Daniel Rueckert

Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques such as CNN-based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac data sets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.
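The anatomical regularisation described above can be pictured as adding a penalty in a learnt low-dimensional shape space: an encoder maps segmentations to compact codes, and predictions are pushed toward the code of the ground truth. The sketch below uses a trivial stand-in encoder and weighting purely for illustration; the paper's encoder is a trained non-linear model, and the names here are placeholders:

```python
def acnn_loss(pred, target, encode, seg_loss, lam=0.01):
    """Total loss = task loss + lam * squared distance in shape space."""
    zp, zt = encode(pred), encode(target)
    shape_penalty = sum((a - b) ** 2 for a, b in zip(zp, zt))
    return seg_loss(pred, target) + lam * shape_penalty

# Stand-in components for illustration only: a 2D "code" of mask volume
# and first moment, and a mean-absolute-error task loss.
encode = lambda seg: [sum(seg), sum(i * v for i, v in enumerate(seg))]
seg_loss = lambda p, t: sum(abs(a - b) for a, b in zip(p, t)) / len(p)

pred   = [0.0, 1.0, 1.0, 0.0]
target = [0.0, 1.0, 0.0, 0.0]
print(acnn_loss(pred, target, encode, seg_loss, lam=0.1))  # 0.75
```

Because the penalty acts on global codes rather than individual pixels, it discourages anatomically implausible outputs (fragments, holes) that a purely pixel-wise loss cannot see.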


IEEE Transactions on Medical Imaging | 2017

SonoNet: Real-Time Detection and Localisation of Fetal Standard Scan Planes in Freehand Ultrasound

Christian F. Baumgartner; Konstantinos Kamnitsas; Jacqueline Matthew; Tara P. Fletcher; Sandra Smith; Lisa M. Koch; Bernhard Kainz; Daniel Rueckert

Identifying and interpreting fetal standard scan planes during 2-D ultrasound mid-pregnancy examinations are highly complex tasks, which require years of training. Apart from guiding the probe to the correct location, it can be equally difficult for a non-expert to identify relevant structures within the image. Automatic image processing can provide tools to help experienced as well as inexperienced operators with these tasks. In this paper, we propose a novel method based on convolutional neural networks, which can automatically detect 13 fetal standard views in freehand 2-D ultrasound data as well as provide a localization of the fetal structures via a bounding box. An important contribution is that the network learns to localize the target anatomy using weak supervision based on image-level labels only. The network architecture is designed to operate in real-time while providing optimal output for the localization task. We present results for real-time annotation, retrospective frame retrieval from saved videos, and localization on a very large and challenging dataset consisting of images and video recordings of full clinical anomaly screenings. We found that the proposed method achieved an average F1-score of 0.798 in a realistic classification experiment modeling real-time detection, and obtained a 90.09% accuracy for retrospective frame retrieval. Moreover, an accuracy of 77.8% was achieved on the localization task.


Medical Image Computing and Computer-Assisted Intervention | 2016

Real-Time Standard Scan Plane Detection and Localisation in Fetal Ultrasound Using Fully Convolutional Neural Networks

Christian F. Baumgartner; Konstantinos Kamnitsas; Jacqueline Matthew; Sandra Smith; Bernhard Kainz; Daniel Rueckert

Fetal mid-pregnancy scans are typically carried out according to fixed protocols. Accurate detection of abnormalities and correct biometric measurements hinge on the correct acquisition of clearly defined standard scan planes. Locating these standard planes requires a high level of expertise. However, there is a worldwide shortage of expert sonographers. In this paper, we consider a fully automated system based on convolutional neural networks which can detect twelve standard scan planes as defined by the UK fetal abnormality screening programme. The network design allows real-time inference and can be naturally extended to provide an approximate localisation of the fetal anatomy in the image. Such a framework can be used to automate or assist with scan plane selection, or for the retrospective retrieval of scan planes from recorded videos. The method is evaluated on a large database of 1003 volunteer mid-pregnancy scans. We show that standard planes acquired in a clinical scenario are robustly detected with a precision and recall of 69% and 80%, which is superior to the current state-of-the-art. Furthermore, we show that it can retrospectively retrieve correct scan planes with an accuracy of 71% for cardiac views and 81% for non-cardiac views.


International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries | 2016

DeepMedic for Brain Tumor Segmentation

Konstantinos Kamnitsas; Enzo Ferrante; Sarah Parisot; Christian Ledig; Aditya V. Nori; Antonio Criminisi; Daniel Rueckert; Ben Glocker

Accurate automatic algorithms for the segmentation of brain tumours have the potential to improve disease diagnosis and treatment planning, as well as to enable large-scale studies of the pathology. In this work we employ DeepMedic [1], a 3D CNN architecture previously presented for lesion segmentation, which we further improve by adding residual connections. We also present a series of experiments on the BRATS 2015 training database evaluating the robustness of the network when less training data are available or fewer filters are used, aiming to shed some light on the requirements for employing such a system. Our method was further benchmarked in the BRATS 2016 Challenge, where it achieved very good performance despite the simplicity of the pipeline.


IEEE Transactions on Medical Imaging | 2017

Reverse Classification Accuracy: Predicting Segmentation Performance in the Absence of Ground Truth

Vanya V. Valindria; Ioannis Lavdas; Wenjia Bai; Konstantinos Kamnitsas; Eric O. Aboagye; Andrea Rockall; Daniel Rueckert; Ben Glocker

When integrating computational tools, such as automatic segmentation, into clinical practice, it is of utmost importance to be able to assess the level of accuracy on new data and, in particular, to detect when an automatic method fails. However, this is difficult to achieve due to the absence of ground truth. Segmentation accuracy on clinical data might be different from what is found through cross validation, because validation data are often used during incremental method development, which can lead to overfitting and unrealistic performance expectations. Before deployment, performance is quantified using different metrics, for which the predicted segmentation is compared with a reference segmentation, often obtained manually by an expert. But little is known about the real performance after deployment when a reference is unavailable. In this paper, we introduce the concept of reverse classification accuracy (RCA) as a framework for predicting the performance of a segmentation method on new data. In RCA, we take the predicted segmentation from a new image to train a reverse classifier, which is evaluated on a set of reference images with available ground truth. The hypothesis is that if the predicted segmentation is of good quality, then the reverse classifier will perform well on at least some of the reference images. We validate our approach on multi-organ segmentation with different classifiers and segmentation methods. Our results indicate that it is indeed possible to predict the quality of individual segmentations, in the absence of ground truth. Thus, RCA is ideal for integration into automatic processing pipelines in clinical routine and as a part of large-scale image analysis studies.
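The RCA procedure can be sketched algorithmically: use the test image's predicted segmentation as pseudo ground truth to fit a reverse classifier, apply that classifier to reference images with known labels, and take the best score as a proxy for the prediction's quality. In the sketch below, `fit_reverse_classifier` is a deliberately trivial placeholder (an intensity threshold) standing in for whatever classifier family is actually used:

```python
def dice(a, b):
    """Dice overlap between two binary masks given as 0/1 lists."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

def rca_score(test_image, predicted_seg, references, fit_reverse_classifier):
    """Estimate segmentation quality without ground truth for the test image.

    references: list of (image, ground_truth_mask) pairs with known labels.
    fit_reverse_classifier: trains on (test_image, predicted_seg) and
    returns a function mapping an image to a predicted mask.
    """
    reverse = fit_reverse_classifier(test_image, predicted_seg)
    # If predicted_seg is good, the reverse classifier should segment at
    # least some reference images well; take the best Dice as the proxy.
    return max(dice(reverse(img), gt) for img, gt in references)

# Toy stand-in: threshold at the mean foreground intensity seen at fitting.
def fit_reverse_classifier(image, seg):
    fg = [v for v, s in zip(image, seg) if s]
    thr = sum(fg) / len(fg)
    return lambda img: [1 if v >= thr else 0 for v in img]

test_img, pred = [0.1, 0.9, 0.8, 0.2], [0, 1, 1, 0]
refs = [([0.2, 0.95, 0.9, 0.1], [0, 1, 1, 0])]
print(rca_score(test_img, pred, refs, fit_reverse_classifier))  # 1.0
```

A poor predicted segmentation would train a reverse classifier that fails on every reference image, yielding a low best-case Dice and flagging the case for review.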


Medical Image Computing and Computer-Assisted Intervention | 2016

Fast Fully Automatic Segmentation of the Human Placenta from Motion Corrupted MRI

Amir Alansary; Konstantinos Kamnitsas; Alice Davidson; Rostislav Khlebnikov; Martin Rajchl; Christina Malamateniou; Mary A. Rutherford; Joseph V. Hajnal; Ben Glocker; Daniel Rueckert; Bernhard Kainz

Recently, magnetic resonance imaging has proven to be important for the evaluation of placental health during pregnancy. Quantitative assessment of the placenta requires a segmentation, which proves to be challenging because of the high variability of its position, orientation, shape and appearance. Moreover, image acquisition is corrupted by motion artifacts from both fetal and maternal movements. In this paper we propose a fully automatic framework for segmentation of the placenta from structural T2-weighted scans of the whole uterus, as well as an extension in order to provide an intuitive pre-natal view into this vital organ. We adopt a 3D multi-scale convolutional neural network to automatically identify placental candidate pixels. The resulting classification is subsequently refined by a 3D dense conditional random field, so that a high-resolution placental volume can be reconstructed from multiple overlapping stacks of slices. Our segmentation framework has been tested on 66 subjects at gestational ages 20-38 weeks, achieving a Dice score of 71.95 ± 19.79% for healthy fetuses with a fixed scan sequence and 66.89 ± 15.35% for a cohort mixed with cases of intrauterine fetal growth restriction using varying scan parameters.

Collaboration


Dive into Konstantinos Kamnitsas's collaborations.

Top Co-Authors
Ben Glocker

Imperial College London


Wenjia Bai

Imperial College London

Ozan Oktay

Imperial College London
