Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jerome Declerck is active.

Publication


Featured research published by Jerome Declerck.


The Journal of Nuclear Medicine | 2013

Multisoftware Reproducibility Study of Stress and Rest Myocardial Blood Flow Assessed with 3D Dynamic PET/CT and a 1-Tissue-Compartment Model of 82Rb Kinetics

Robert A. deKemp; Jerome Declerck; Ran Klein; Xiao-Bo Pan; Christine Tonge; Parthiban Arumugam; Daniel S. Berman; Guido Germano; Rob S. Beanlands; Piotr J. Slomka

Routine quantification of myocardial blood flow (MBF) requires robust and reproducible processing of dynamic image series. The goal of this study was to evaluate the reproducibility of 3 highly automated software programs commonly used for absolute MBF and flow reserve (stress/rest MBF) assessment with 82Rb PET imaging. Methods: Dynamic rest and stress 82Rb PET scans were selected in 30 sequential patient studies performed at 3 separate institutions using 3 different 3-dimensional PET/CT scanners. All 90 scans were processed with 3 different MBF quantification programs, using the same 1-tissue-compartment model. Global (left ventricle) and regional (left anterior descending, left circumflex, and right coronary arteries) MBF and flow reserve were compared among programs using correlation and Bland–Altman analyses. Results: All scans were processed successfully by the 3 programs, with minimal operator interactions. Global and regional correlations of MBF and flow reserve all had an R2 of at least 0.92. There was no significant difference in flow values at rest (P = 0.68), stress (P = 0.14), or reserve (P = 0.35) among the 3 programs. Bland–Altman coefficients of reproducibility (1.96 × SD) averaged 0.26 for MBF and 0.29 for flow reserve differences among programs. Average pairwise differences were all less than 10%, indicating good reproducibility for MBF quantification. Global and regional SD from the line of perfect agreement averaged 0.15 and 0.17 mL/min/g, respectively, for MBF, compared with 0.22 and 0.26, respectively, for flow reserve. Conclusion: The 1-tissue-compartment model of 82Rb tracer kinetics is a reproducible method for quantification of MBF and flow reserve with 3-dimensional PET/CT imaging.
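As a hedged illustration of the Bland–Altman analysis used above, the coefficient of reproducibility (1.96 × SD of the pairwise differences) can be sketched in a few lines of Python. The MBF values below are made up for illustration; they are not data from the study.

```python
# Sketch of a Bland-Altman comparison between two MBF software programs.
# The sample values are illustrative only, not data from the study.
import statistics

def bland_altman(a, b):
    """Return (bias, rpc) for paired measurements from two programs.

    bias: mean of the pairwise differences a - b
    rpc:  1.96 x sample SD of the differences (coefficient of reproducibility)
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.fmean(diffs)
    rpc = 1.96 * statistics.stdev(diffs)
    return bias, rpc

# Hypothetical global MBF values (mL/min/g) from two programs:
program_1 = [0.71, 0.85, 1.02, 0.93, 0.78, 1.10]
program_2 = [0.74, 0.82, 1.05, 0.90, 0.80, 1.08]
bias, rpc = bland_altman(program_1, program_2)
```

Average pairwise differences below 10%, as reported in the study, correspond to a small bias and a tight reproducibility coefficient in this kind of analysis.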


medical image computing and computer assisted intervention | 2011

Automatic multi-organ segmentation using learning-based segmentation and level set optimization

Timo Kohlberger; Michal Sofka; Jingdan Zhang; Neil Birkbeck; Jens Wetzl; Jens N. Kaftan; Jerome Declerck; S. Kevin Zhou

We present a novel generic system for fully automatic multi-organ segmentation of CT medical images. It combines the advantages of learning-based approaches built on point-cloud shape representations, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys, the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. The level set segmentation, which is initialized by the learning-based segmentations, contributes a 20%-40% increase in accuracy.
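The average surface error reported above can be sketched as a mean nearest-point distance between two surfaces represented as point clouds. This is a simplified, one-directional version of the measure (real evaluations usually average both directions), with illustrative inputs:

```python
# Simplified, one-directional average surface error between two surfaces
# given as point clouds. Inputs are illustrative 2-D points.
import math

def mean_surface_error(surface_a, surface_b):
    """Mean distance from each point of surface_a to its nearest point on surface_b."""
    def nearest(p):
        return min(math.dist(p, q) for q in surface_b)
    return sum(nearest(p) for p in surface_a) / len(surface_a)

# Two parallel "surfaces" 1.0 apart, so the expected error is 1.0:
a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
err = mean_surface_error(a, b)
```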


EJNMMI research | 2011

SUVref: reducing reconstruction-dependent variation in PET SUV

Jerome Declerck

Background: We propose a new methodology, reference Standardised Uptake Value (SUVref), for reducing the quantitative variation resulting from differences in reconstruction protocol. Such variation, which is not directly addressed by the use of SUV or the recently proposed PERCIST, can impede comparability between positron emission tomography (PET)/CT scans.
Methods: SUVref applies a reconstruction-protocol-specific, phantom-optimised filter to clinical PET scans to improve the comparability of quantification. The ability of this filter to reduce variability due to differences in reconstruction protocol was assessed using both phantom and clinical data.
Results: SUVref reduced the variability between recovery coefficients measured with the NEMA image quality phantom across a range of reconstruction protocols to below that measured for a single reconstruction protocol. In addition, it enabled quantitative conformance to the recently proposed EANM guidelines. For the clinical data, a significant reduction in bias and variance in the distribution of differences in SUV, resulting from differences in reconstruction protocol, greatly reduced the number of hot spots that would be misclassified as undergoing a clinically significant change in SUV.
Conclusions: SUVref significantly reduces reconstruction-dependent variation in SUV measurements, enabling increased confidence in quantitative comparison of clinical images for monitoring treatment response or disease progression. This new methodology could be similarly applied to reduce variability from scanner hardware.
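For context, the baseline body-weight SUV that SUVref refines is the tissue activity concentration divided by injected dose per unit body weight. A minimal sketch follows; the unit conversions are standard, but the function name and example values are illustrative, not taken from the paper:

```python
def suv_body_weight(tissue_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight SUV: tissue activity concentration / (dose / weight).

    Assumes a tissue density of ~1 g/mL, so the result is dimensionless.
    """
    dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    weight_g = body_weight_kg * 1000.0      # kg  -> g
    return tissue_kbq_per_ml / (dose_kbq / weight_g)

# Illustrative example: 5 kBq/mL uptake, 350 MBq injected, 70 kg patient.
value = suv_body_weight(5.0, 350.0, 70.0)
```

SUVref's contribution is to make this number comparable across reconstruction protocols by first applying a phantom-optimised filter to the image; the filter itself is protocol-specific and is not sketched here.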


Surgical Endoscopy and Other Interventional Techniques | 2010

Use of the Resection Map system as guidance during hepatectomy

Pablo Lamata; Félix Lamata; Valentin Sojar; Piotr Makowski; Laurent Massoptier; Sergio Casciaro; Wajid Ali; Thomas Stüdeli; Jerome Declerck; Ole Jackov Elle; Bjørn Edwin

Background: The objective of this work is to evaluate a new concept of an intraoperative three-dimensional (3D) visualization system to support hepatectomy. The Resection Map aims to provide accurate cartography for surgeons, who can therefore anticipate risks, increase their confidence, and achieve safer liver resection.
Methods: In an experimental prospective cohort study, ten consecutive patients admitted for hepatectomy to three European hospitals were selected. Liver structures (portal veins, hepatic veins, tumours and parenchyma) were segmented from a recent computed tomography (CT) study of each patient. The surgeon planned the resection preoperatively and used the Resection Map as reference guidance during the procedure. Objective parameters (amount of bleeding, tumour resection margin and operating time) and subjective parameters were retrieved after each case.
Results: Three different surgeons operated on seven patients with the navigation aid of the Resection Map. Veins displayed in the Resection Map were identified during the surgical procedure in 70.1% of cases, depending mainly on size. Surgeons were able to track resection progress and reported improved orientation and increased confidence during the procedure.
Conclusions: The Resection Map is a pragmatic solution to enhance the orientation and confidence of the surgeon. Further studies are needed to demonstrate improvement in patient safety.


NeuroImage | 2009

Robustness of multivariate image analysis assessed by resampling techniques and applied to FDG-PET scans of patients with Alzheimer's disease.

Pawel J. Markiewicz; Julian C. Matthews; Jerome Declerck; Karl Herholz

For finite and noisy samples, the extraction of robust features or patterns representative of the population is a formidable task in which over-interpretation is not uncommon. In this work, resampling techniques were applied to a sample of 42 FDG PET brain images of 19 healthy volunteers (HVs) and 23 Alzheimer's disease (AD) patients to assess the robustness of image features extracted through principal component analysis (PCA) and Fisher discriminant analysis (FDA). The objectives of this work are to: 1) determine the relative variance described by the PCA with respect to the population variance; 2) assess the robustness of the PCA to the population sample using the largest principal angle between PCA subspaces; 3) assess the robustness and accuracy of the FDA. Since the sample does not have histopathological data, the impact of possible clinical misdiagnosis on the discriminant analysis is investigated. The PCA can describe up to 40% of the total population variability. Not more than the first three or four PCs can be regarded as robust, and on these a robust FDA can be built. Standard error images showed that regions close to the falx and around the ventricles are less stable. Using the first three PCs, sensitivity and specificity were 90.5% and 96.9%, respectively. The use of resampling techniques to evaluate the robustness of multivariate image analysis methods enables researchers to avoid over-analysis when applying these methods to neuroimaging studies, which often have small sample sizes.
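The bootstrap-resampling idea used to probe robustness can be sketched with a simple threshold classifier standing in for the FDA output. The scores and threshold below are hypothetical, not values from the study:

```python
# Bootstrap resampling of a threshold classifier's sensitivity, as a
# stand-in for assessing discriminant-analysis robustness. Scores are made up.
import random
import statistics

def sensitivity(scores_patients, threshold):
    """Fraction of patient scores at or above the decision threshold."""
    return sum(s >= threshold for s in scores_patients) / len(scores_patients)

def bootstrap_sensitivity(scores_patients, threshold, n_boot=500, seed=0):
    """Mean and SD of sensitivity over bootstrap resamples of the patients."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = rng.choices(scores_patients, k=len(scores_patients))
        estimates.append(sensitivity(resample, threshold))
    return statistics.fmean(estimates), statistics.stdev(estimates)

# Hypothetical discriminant scores for AD patients (higher = more AD-like):
ad_scores = [0.9, 0.8, 0.85, 0.95, 0.7, 0.6, 0.88]
mean_sens, sd_sens = bootstrap_sensitivity(ad_scores, threshold=0.5)
```

A small SD across resamples indicates a robust estimate; a large SD suggests the figure is driven by a few influential samples, which is exactly the over-interpretation risk the paper addresses.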


medical image computing and computer assisted intervention | 2011

Multi-stage learning for robust lung segmentation in challenging CT volumes

Michal Sofka; Jens Wetzl; Neil Birkbeck; Jingdan Zhang; Timo Kohlberger; Jens N. Kaftan; Jerome Declerck; S. Kevin Zhou

Simple algorithms for segmenting healthy lung parenchyma in CT are unable to deal with high density tissue common in pulmonary diseases. To overcome this problem, we propose a multi-stage learning-based approach that combines anatomical information to predict an initialization of a statistical shape model of the lungs. The initialization first detects the carina of the trachea, and uses this to detect a set of automatically selected stable landmarks on regions near the lung (e.g., ribs, spine). These landmarks are used to align the shape model, which is then refined through boundary detection to obtain fine-grained segmentation. Robustness is obtained through hierarchical use of discriminative classifiers that are trained on a range of manually annotated data of diseased and healthy lungs. We demonstrate fast detection (35s per volume on average) and segmentation of 2 mm accuracy on challenging data.


Archive | 2010

Augmented Reality for Minimally Invasive Surgery: Overview and Some Recent Advances

Pablo Lamata; Wajid Ali; Alicia M. Cano; Jordi Cornella; Jerome Declerck; Ole Jakob Elle; Adinda Freudenthal; Hugo Furtado; Denis Kalkofen; Edvard Naerum; Eigil Samset; Patricia Sánchez-González; Francisco M. Sánchez-Margallo; Dieter Schmalstieg; Mauro Sette; Thomas Stüdeli; Jos Vander Sloten; Enrique J. Gómez

Affiliations: 1. Universidad Politécnica de Madrid, Spain; 2. Siemens, United Kingdom; 3. University of Oslo, Norway; 4. Delft University of Technology, Netherlands; 5. Medical Centre Ljubljana, Slovenia; 6. Graz University of Technology, Austria; 7. Minimally Invasive Surgery Centre Jesús Usón, Spain; 8. University of Leuven, Belgium.


Bioorganic & Medicinal Chemistry | 2012

A fluorous and click approach for screening potential PET probes: Evaluation of potential hypoxia biomarkers

Romain Bejot; Laurence Carroll; Kishore Bhakoo; Jerome Declerck; Véronique Gouverneur

Radiopharmaceuticals for nuclear imaging are essentially targeting molecules labeled with short-lived radionuclides (e.g., F-18 for PET). A significant drawback of radiopharmaceutical development is the difficulty of accessing libraries of radiolabeled molecules for initial in vitro evaluation, as radiolabeling has to be optimized for each individual molecule. The present paper discloses a method for preparing libraries of (18)F-labeled radiopharmaceuticals using both fluorous-based (18)F-radiochemistry and the Huisgen 1,3-dipolar (click) conjugation reaction. As a proof of concept, this approach allowed us to obtain a series of readily accessible (18)F-radiolabeled nitroaromatic molecules for exploring their structure-activity relationship and further in vitro evaluation of their hypoxic selectivity.


NeuroImage | 2011

Novel Fast Marching for Automated Segmentation of the Hippocampus (FMASH): method and validation on clinical data.

Courtney A. Bishop; Mark Jenkinson; Jesper Andersson; Jerome Declerck; Dorit Merhof

With hippocampal atrophy both a clinical biomarker for early Alzheimer's disease (AD) and implicated in many other neurological and psychiatric diseases, there is much interest in the accurate, reproducible delineation of this region of interest (ROI) in structural MR images. Here we present Fast Marching for Automated Segmentation of the Hippocampus (FMASH): a novel approach using the Sethian Fast Marching (FM) technique to grow a hippocampal ROI from an automatically-defined seed point. Segmentation performance is assessed on two separate clinical datasets, utilising expert manual labels as the gold standard to quantify Dice coefficients, false positive rates (FPR) and false negative rates (FNR). The first clinical dataset (denoted CMA) contains normal controls (NC) and atrophied AD patients, whilst the second is a collection of NC and bipolar (BP) patients (denoted BPSA). An optimal and robust stopping criterion is established for the propagating FM front, and the final FMASH segmentation estimates are compared to two commonly-used methods: FIRST/FSL and Freesurfer (FS). Results show that FMASH outperforms both FIRST and FS on the BPSA data, with significantly higher Dice coefficients (0.80±0.01) and lower FPR. Despite some intrinsic bias for FIRST and FS on the CMA data, due to their training, FMASH performs comparably well on the CMA data, with an average bilateral Dice coefficient of 0.82±0.01. Furthermore, FMASH most accurately captures the hippocampal volume difference between NC and AD, and provides a more accurate estimation of the problematic hippocampus-amygdala border on both clinical datasets. The consistency in performance across the two datasets suggests that FMASH is applicable to a range of clinical data with differing image quality and demographics.
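The Dice coefficient used for evaluation above measures the overlap between two binary masks. A minimal sketch over flattened voxel arrays, with illustrative inputs:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two equal-length binary masks: 2|A∩B| / (|A|+|B|)."""
    a = [bool(v) for v in mask_a]
    b = [bool(v) for v in mask_b]
    intersection = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * intersection / total if total else 1.0

# Illustrative 1-D "masks": 3 vs 2 foreground voxels, 2 of them overlapping,
# so Dice = 2*2 / (3+2) = 0.8.
score = dice_coefficient([1, 1, 1, 0], [1, 1, 0, 0])
```

A Dice of 1.0 means perfect overlap with the manual label and 0.0 means none, so the reported 0.80-0.82 values indicate substantial agreement with expert delineations.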


Journal of Cerebral Blood Flow and Metabolism | 2011

Optimized data preprocessing for multivariate analysis applied to 99mTc-ECD SPECT data sets of Alzheimer's patients and asymptomatic controls

Dorit Merhof; Pawel J. Markiewicz; Günther Platsch; Jerome Declerck; Markus Weih; Johannes Kornhuber; Torsten Kuwert; Julian C. Matthews; Karl Herholz

Multivariate image analysis has shown potential for classification between Alzheimer's disease (AD) patients and healthy controls with a high diagnostic performance. As image analysis of positron emission tomography (PET) and single photon emission computed tomography (SPECT) data critically depends on appropriate data preprocessing, the focus of this work is to investigate the impact of data preprocessing on the outcome of the analysis, and to identify an optimal data preprocessing method. In this work, technetium-99m ethyl cysteinate dimer (99mTc-ECD) SPECT data sets of 28 AD patients and 28 asymptomatic controls were used for the analysis. For a series of different data preprocessing methods, which includes methods for spatial normalization, smoothing, and intensity normalization, multivariate image analysis based on principal component analysis (PCA) and Fisher discriminant analysis (FDA) was applied. Bootstrap resampling was used to investigate the robustness of the analysis and the classification accuracy, depending on the data preprocessing method. Depending on the combination of preprocessing methods, significant differences regarding the classification accuracy were observed. For 99mTc-ECD SPECT data, the optimal data preprocessing method in terms of robustness and classification accuracy is based on affine registration, smoothing with a Gaussian of 12 mm full width at half maximum, and intensity normalization based on the 25% brightest voxels within the whole-brain region.
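The intensity-normalization step found optimal above (scaling by the 25% brightest voxels in the brain region) can be sketched as follows; the function name and the voxel values are illustrative:

```python
def normalise_brightest_quartile(voxels):
    """Divide each voxel by the mean intensity of the 25% brightest voxels."""
    brightest = sorted(voxels, reverse=True)
    k = max(1, len(brightest) // 4)          # top quarter (at least one voxel)
    reference = sum(brightest[:k]) / k
    return [v / reference for v in voxels]

# Illustrative intensities; the top quarter here is the single value 8.0,
# so every voxel is divided by 8.0.
normalised = normalise_brightest_quartile([2.0, 4.0, 6.0, 8.0])
```

Normalizing to a high-uptake reference rather than the global mean makes the scaling less sensitive to the widespread hypometabolism seen in AD patients, which is one plausible reason this choice performed well.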

Collaboration


Dive into Jerome Declerck's collaborations.

Top Co-Authors


Chloe Hutton

Wellcome Trust Centre for Neuroimaging
