Publication


Featured research published by Crispin Schneider.


Proceedings of SPIE | 2015

Accuracy validation of an image guided laparoscopy system for liver resection

Stephen A. Thompson; Johannes Totz; Yi Song; Stian Flage Johnsen; Danail Stoyanov; Sebastien Ourselin; Kurinchi Selvan Gurusamy; Crispin Schneider; Brian R. Davidson; David J. Hawkes; Matthew J. Clarkson

We present an analysis of the registration component of a proposed image guidance system for image guided liver surgery, using contrast enhanced CT. The analysis is performed on a visually realistic liver phantom and in-vivo porcine data. A robust registration process that can be deployed clinically is a key component of any image guided surgery system. It is also essential that the accuracy of the registration can be quantified and communicated to the surgeon. We summarise the proposed guidance system and discuss its clinical feasibility. The registration combines an intuitive manual alignment stage, surface reconstruction from a tracked stereo laparoscope and a rigid iterative closest point registration to register the intra-operative liver surface to the liver surface derived from CT. Testing of the system on a liver phantom shows that subsurface landmarks can be localised to an accuracy of 2.9 mm RMS. Testing during five porcine liver surgeries demonstrated that registration can be performed during surgery, with an error of less than 10 mm RMS for multiple surface landmarks.
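The rigid iterative closest point (ICP) stage described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the brute-force nearest-neighbour search, fixed iteration count, and Kabsch/SVD pose estimation are assumptions chosen for clarity.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp_rigid(src, dst, iterations=50):
    """Rigid ICP: repeatedly match each source point to its nearest
    destination point, then re-estimate the best rigid alignment."""
    cur = src.copy()
    for _ in range(iterations):
        # brute-force nearest neighbour in dst for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur
```

In practice the manual alignment stage mentioned above matters: plain ICP only converges to the correct pose when the initial misalignment is small.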


Lasers in Surgery and Medicine | 2016

Utilizing confocal laser endomicroscopy for evaluating the adequacy of laparoscopic liver ablation

Crispin Schneider; Sp Johnson; Simon Walker-Samuel; Kurinchi Selvan Gurusamy; Matthew J. Clarkson; Stephen J. Thompson; Yi Song; Johannes Totz; Richard J. Cook; Adrien E. Desjardins; David J. Hawkes; Brian R. Davidson

Laparoscopic liver ablation therapy can be used for the treatment of primary and secondary liver malignancy. The increased incidence of cancer recurrence associated with this approach has been attributed to the inability to monitor the extent of ablated liver tissue.


Proceedings of SPIE | 2017

Deep residual networks for automatic segmentation of laparoscopic videos of the liver

Eli Gibson; Maria Robu; Stephen A. Thompson; Eddie Edwards; Crispin Schneider; Kurinchi Selvan Gurusamy; Brian R. Davidson; David J. Hawkes; Dean C. Barratt; Matthew J. Clarkson

Motivation: For primary and metastatic liver cancer patients undergoing liver resection, a laparoscopic approach can reduce recovery times and morbidity while offering equivalent curative results; however, only about 10% of tumours reside in anatomical locations that are currently accessible for laparoscopic resection. Augmenting laparoscopic video with registered vascular anatomical models from pre-procedure imaging could support using laparoscopy in a wider population. Segmentation of liver tissue on laparoscopic video supports the robust registration of anatomical liver models by filtering out false anatomical correspondences between pre-procedure and intra-procedure images. In this paper, we present a convolutional neural network (CNN) approach to liver segmentation in laparoscopic liver procedure videos. Method: We defined a CNN architecture comprising fully-convolutional deep residual networks with multi-resolution loss functions. The CNN was trained in a leave-one-patient-out cross-validation on 2050 video frames from 6 liver resections and 7 laparoscopic staging procedures, and evaluated using the Dice score. Results: The CNN yielded segmentations with Dice scores ≥0.95 for the majority of images; however, the inter-patient variability in median Dice score was substantial. Four failure modes were identified from low scoring segmentations: minimal visible liver tissue, inter-patient variability in liver appearance, automatic exposure correction, and pathological liver tissue that mimics non-liver tissue appearance. Conclusion: CNNs offer a feasible approach for accurately segmenting liver from other anatomy on laparoscopic video, but additional data or computational advances are necessary to address challenges due to the high inter-patient variability in liver appearance.
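The Dice score used to evaluate the segmentations above is a standard overlap measure for binary masks. A minimal sketch (the toy masks are illustrative, not the study's data):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks.

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 4x4 masks: 3 of the 4 predicted pixels overlap the 4 true pixels
pred = np.zeros((4, 4), dtype=bool); pred[0, :] = True
truth = np.zeros((4, 4), dtype=bool); truth[0, 1:] = True; truth[1, 0] = True
# dice_score(pred, truth) -> 2*3 / (4+4) = 0.75
```

A per-frame score like this also makes the paper's failure-mode analysis straightforward: frames scoring below a threshold can be pulled out and inspected.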


Proceedings of SPIE | 2017

Breathing motion compensated registration of laparoscopic liver ultrasound to CT

João Ramalhinho; Maria Robu; Stephen A. Thompson; Philip J. Edwards; Crispin Schneider; Kurinchi Selvan Gurusamy; David J. Hawkes; Brian R. Davidson; Dean C. Barratt; Matthew J. Clarkson

Laparoscopic Ultrasound (LUS) is regularly used during laparoscopic liver resection to locate critical vascular structures. Many tumours are iso-echoic, and registration to pre-operative CT or MR has been proposed as a method of image guidance. However, factors such as abdominal insufflation, LUS probe compression and breathing motion cause deformation of the liver, making this task far from trivial. Fortunately, within a smaller local region of interest a rigid solution can suffice. Also, the respiratory cycle can be expected to be consistent. Therefore, in this paper we propose a feature-based local rigid registration method to align tracked LUS data with CT while compensating for breathing motion. The method employs the Levenberg-Marquardt Iterative Closest Point (LMICP) algorithm, registers both on liver surface and vessels and requires two LUS datasets, one for registration and another for breathing estimation. Breathing compensation is achieved by fitting a 1D breathing model to the vessel points. We evaluate the algorithm by measuring the Target Registration Error (TRE) of three manually selected landmarks of a single porcine subject. Breathing compensation improves accuracy in 77% of the measurements. In the best case, TRE values below 3mm are obtained. We conclude that our method can potentially correct for breathing motion without gated acquisition of LUS and be integrated in the surgical workflow with an appropriate segmentation.
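The 1D breathing model mentioned above can be illustrated with a simple harmonic fit: project the vessel points' displacement onto the dominant breathing axis and fit amplitude, phase and offset by linear least squares. The sinusoidal form, the known breathing frequency, and the function names are assumptions for this sketch, not the paper's exact model.

```python
import numpy as np

def fit_breathing_model(t, disp, freq_hz):
    """Fit d(t) = A*sin(wt) + B*cos(wt) + c to 1D displacements by
    linear least squares (w = 2*pi*freq_hz assumed known)."""
    w = 2.0 * np.pi * freq_hz
    M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(M, disp, rcond=None)
    return coeffs  # (A, B, c)

def compensate(t, disp, coeffs, freq_hz):
    """Subtract the fitted breathing component from measured displacements."""
    w = 2.0 * np.pi * freq_hz
    A, B, c = coeffs
    return disp - (A * np.sin(w * t) + B * np.cos(w * t) + c)
```

Because the model is linear in (A, B, c), no iterative optimisation is needed, which suits intra-operative use.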


Lasers in Surgery and Medicine | 2017

Identification of liver metastases with probe-based confocal laser endomicroscopy at two excitation wavelengths

Crispin Schneider; Sp Johnson; Kurinchi Selvan Gurusamy; Richard J. Cook; Adrien E. Desjardins; David J. Hawkes; Brian R. Davidson; Simon Walker-Samuel

Metastasis of colorectal cancer to the liver is the most common indication for hepatic resection in a western population. Incomplete excision of malignancy due to residual microscopic disease typically results in worse patient outcomes. Therefore, a method aiding the real-time discrimination of normal and malignant tissue on a microscopic level would be of benefit.


computer assisted radiology and surgery | 2018

In vivo estimation of target registration errors during augmented reality laparoscopic surgery

Stephen A. Thompson; Crispin Schneider; Michele Bosi; Kurinchi Selvan Gurusamy; Sebastien Ourselin; Brian R. Davidson; David J. Hawkes; Matthew J. Clarkson

Purpose: Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system.

Methods: The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data.

Results: The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge.

Conclusion: We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
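Numerically, the landmark comparison described above reduces to a root-mean-square distance between where the projected model places each landmark and where it is observed in the live video. A minimal sketch, assuming paired point landmarks in millimetres (the data below are illustrative):

```python
import numpy as np

def rms_overlay_error(projected_mm, observed_mm):
    """RMS Euclidean distance (mm) between landmark positions predicted
    by the projected model and those identified in the live video."""
    p = np.asarray(projected_mm, dtype=float)
    o = np.asarray(observed_mm, dtype=float)
    d = np.linalg.norm(p - o, axis=1)  # per-landmark error
    return float(np.sqrt((d ** 2).mean()))

# Two landmarks: one off by a 3-4-5 triangle (5 mm), one exact.
projected = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
observed  = [[3.0, 4.0, 0.0], [10.0, 0.0, 0.0]]
# rms_overlay_error(projected, observed) -> sqrt((25 + 0) / 2) ≈ 3.54 mm
```

The paper's contribution is showing that this surface-visible error is a usable proxy for the error at subsurface targets the surgeon cannot see.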


Surgical Laparoscopy Endoscopy & Percutaneous Techniques | 2015

Laparoscopic manipulation of a probe-based confocal laser endomicroscope using a steerable intravascular catheter.

Crispin Schneider; Adrien E. Desjardins; Kurinchi Selvan Gurusamy; David J. Hawkes; Brian R. Davidson

Probe-based confocal laser endomicroscopy is an emerging imaging modality that enables visualization of histologic details during endoscopy and surgery. A method of guiding the probe with millimeter accuracy is required to enable imaging in all regions of the abdomen accessed during laparoscopy. On the basis of a porcine model of laparoscopic liver resection, we report our experience of using a steerable intravascular catheter to guide a probe-based confocal laser endomicroscope.


Gut | 2015

PTH-103 Evaluation of a novel system for image guided laparoscopic liver surgery in an animal model and first clinical experience

Crispin Schneider; S Thompson; J Totz; Y Song; S Johnsen; D Stoyanov; S Ourselin; Kurinchi Selvan Gurusamy; D Hawkes; M Clarkson; Brian R. Davidson

Introduction: Compared to open surgery, laparoscopic liver resection (LLR) of cancer benefits patients by reducing pain, length of stay and morbidity. However, LLR is often more challenging than open surgery due to the difficulty of identifying and dividing major vascular and bile duct branches. Some of these challenges may be resolved by using image guidance systems (IGS) to overlay a 3D model of the liver structure onto the liver seen at laparoscopy. Current IGS technologies rely on manual landmark definition or ultrasound for co-registration (alignment of the 3D model and the in-vivo liver) and do not take organ movement and deformation into account. Image guidance for LLR using cone beam CT has also been attempted. Our group has developed an IGS that automatically registers a liver model derived from pre-operative CT to the in-vivo liver surface using computer vision techniques. Laparoscope position in relation to the liver is determined by optical tracking. Results from a porcine study and its first application in a patient are presented here.

Method: Laparoscopic microwave ablation was used to create identifiable liver "lesions" and a CT was obtained in Landrace pigs under general anaesthesia (GA). One week later, laparoscopic left hepatectomy was performed under GA using our system. A 46-year-old female who presented with an indeterminate liver lesion at the junction of segments 5/6 underwent a hepatic wedge resection under image guidance. Data on system and surgical performance were collected in both studies.

Results: Experiments were conducted in 5 animals, with a successful image overlay achieved in 3 cases. Failure of overlay was attributed to distorted liver anatomy secondary to adhesions formed around regions of ablated liver. Initial registration of the overlaid 3D model was accomplished in less than 10 minutes. The setup and calibration of equipment for the clinical case took 20 minutes. Initial registration required 3 minutes and, as in the animal study, did not require repetition. Image overlay was successfully achieved and the operation carried out using an ultrasonic scalpel with a total procedure time of 190 minutes. Estimated blood loss was <150 ml and no intraoperative complications occurred. The patient had an uneventful recovery and was discharged on the 6th postoperative day.

Conclusion: The IGS presented here has been evaluated in both a large animal model and subsequently in a clinical scenario. The system appears to be feasible for clinical use and has benefits with regard to uninterrupted surgical workflow, lack of radiation exposure and automatic compensation for organ motion.

Disclosure of interest: C. Schneider, S. Thompson, J. Totz, Y. Song, D. Stoyanov, K. Gurusamy, D. Hawkes, M. Clarkson and B. Davidson received grant/research support from the Health Innovation Challenge Fund (HICF-T4-317), a parallel funding partnership between the Wellcome Trust and the Department of Health. S. Johnsen and S. Ourselin received grant/research support from the Intelligent Imaging Programme Grant, ref: EP/H046410/1. The views expressed in this publication are those of the authors and not necessarily those of the Wellcome Trust or the Department of Health.


computer assisted radiology and surgery | 2016

Hand–eye calibration for rigid laparoscopes using an invariant point

Stephen A. Thompson; Danail Stoyanov; Crispin Schneider; Kurinchi Selvan Gurusamy; Sebastien Ourselin; Brian R. Davidson; David J. Hawkes; Matthew J. Clarkson


computer assisted radiology and surgery | 2015

Locally rigid, vessel-based registration for laparoscopic liver surgery

Yi Song; Johannes Totz; Sa Thompson; Stian Flage Johnsen; Dean C. Barratt; Crispin Schneider; Kurinchi Selvan Gurusamy; Brian R. Davidson; Sebastien Ourselin; David J. Hawkes; Matthew J. Clarkson

Collaboration


Dive into Crispin Schneider's collaborations.

Top Co-Authors

David J. Hawkes, University College London
Johannes Totz, University College London
Yi Song, University College London
Danail Stoyanov, University College London
Sa Thompson, University College London