Nigel W. John
University of Chester
Publications
Featured research published by Nigel W. John.
IEEE Transactions on Visualization and Computer Graphics | 2018
Nigel W. John; Serban R. Pop; Thomas W. Day; Panagiotis D. Ritsos; Christopher J. Headleand
Navigating a powered wheelchair and avoiding collisions is often a daunting task for new wheelchair users. It takes time and practice to gain the coordination needed to become a competent driver, and this can be even more of a challenge for someone with a disability. We present a cost-effective virtual reality (VR) application that takes advantage of consumer-level VR hardware. The system can be easily deployed in an assessment centre or for home use, and does not depend on a specialized high-end virtual environment such as a Powerwall or CAVE. This paper reviews previous work that has used virtual environment technology for training tasks, particularly wheelchair simulation. We then describe the implementation of our own system and the first validation study, carried out with thirty-three able-bodied volunteers. The study results indicate, at the 5 percent significance level, an improvement in driving skills from the use of our VR system. We thus have the potential to develop the competency of a wheelchair user whilst avoiding the risks inherent in training in the real world. However, the occurrence of cybersickness is a particular problem in this application that will need to be addressed.
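The pre/post driving-skill comparison described above is a classic paired-samples test. A minimal sketch of checking improvement at the 5 percent level, using entirely hypothetical scores (the study's actual data, metrics, and test procedure are not reproduced here):

```python
import math
import statistics

# Hypothetical pre- and post-training driving scores (higher = better).
# These numbers are illustrative only, not the study's data.
pre  = [62, 55, 70, 58, 64, 49, 67, 60, 53, 66]
post = [68, 61, 72, 65, 66, 55, 71, 67, 58, 70]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)           # sample standard deviation
t_stat = mean_d / (sd_d / math.sqrt(n))  # paired-samples t statistic

# Two-tailed critical value for df = 9 at alpha = 0.05 (from a t table).
T_CRIT = 2.262
significant = abs(t_stat) > T_CRIT
print(f"t = {t_stat:.2f}, significant at 5%: {significant}")
```

With more subjects (the study had thirty-three), the degrees of freedom and hence the critical value change; a library routine such as SciPy's paired t-test would normally handle this.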
Teleoperators and Virtual Environments | 2015
Nigel W. John; Nicholas I. Phillips; Llyr ap Cenydd; David Coope; Nick Carleton-Bland; Ian Kamaly-Asl; William Peter Gray
The requirement for training surgical procedures without exposing the patient to additional risk is well accepted and is part of a national drive in the UK and internationally. Computer-based simulations are important in this context, including for neurosurgical resident training. The objective of this study is to evaluate the effectiveness of a custom-built virtual environment in assisting training of a ventriculostomy procedure. The training tool (called VCath) has been developed as an app for a tablet platform to provide easy access and availability to trainees. The study was conducted at the first boot camp organized for all year-one trainees in neurosurgery in the UK. The attendees were randomly distributed between the VCath training group and the control group. Efficacy in performing ventriculostomy for both groups was assessed at the beginning and end of the study using a simulated insertion task. Statistically significant improvements for the VCath group in selecting the burr-hole entry point and in the trajectory-length and duration metrics, together with improved normalized jerk (representing the speed and smoothness of arm motion), all suggest a higher-level cognitive benefit from using VCath. The app is successful because it focuses on the cognitive task of ventriculostomy, encouraging the trainee to rehearse the entry point and use anatomical landmarks to create a trajectory to the target. In straight-line trajectory procedures such as ventriculostomy, cognitive task-based education is a useful adjunct to traditional methods and may reduce the learning curve and ultimately improve patient safety.
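The normalized-jerk metric mentioned above quantifies movement smoothness from a recorded trajectory. A minimal sketch of one common dimensionless formulation (the square root of half the integrated squared jerk, scaled by duration^5 / length^2); the study's exact definition may differ, and the sample trajectories below are purely illustrative:

```python
import math

def normalized_jerk(points, dt):
    """Dimensionless jerk of a 1-D trajectory sampled at interval dt.

    Uses the common formulation sqrt(0.5 * integral(j^2) * D^5 / L^2),
    where D is duration and L is path length; lower = smoother motion.
    """
    # Finite-difference derivatives: velocity, acceleration, jerk.
    vel = [(b - a) / dt for a, b in zip(points, points[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    jerk = [(b - a) / dt for a, b in zip(acc, acc[1:])]

    duration = dt * (len(points) - 1)
    length = sum(abs(v) * dt for v in vel)
    jerk_sq_integral = sum(j * j * dt for j in jerk)
    return math.sqrt(0.5 * jerk_sq_integral * duration**5 / length**2)

# A smooth ramp versus the same ramp with added tremor: the jittery
# trajectory scores higher (worse), as expected for unsteady motion.
smooth = [t / 100 for t in range(101)]
jittery = [x + 0.003 * math.sin(40 * math.pi * x) for x in smooth]
print(normalized_jerk(smooth, 0.01), normalized_jerk(jittery, 0.01))
```

In practice the metric would be computed from the 3D tracked positions of the trainee's hand or instrument tip, one axis at a time or on the full 3D path.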
Computer Methods and Programs in Biomedicine | 2018
Long Chen; Wen Tang; Nigel W. John; Tao Ruan Wan; Jian J. Zhang
BACKGROUND AND OBJECTIVE While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes significant challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomical structures at the target surgical locations. However, previous attempts to use AR technology in monocular MIS surgical scenes have focused mainly on information overlay without addressing correct spatial calibration, which can lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. METHODS A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only the unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and Poisson surface reconstruction for real-time processing of the point-cloud data.
Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations based on a robust 3D calibration. RESULTS We demonstrate the clinical relevance of our proposed system through two examples: (a) measurement of the surface; (b) depth cues in monocular endoscopy. The performance and accuracy evaluation of the proposed framework consists of two steps. First, we created a computer-generated endoscopy simulation video to quantify the accuracy of the camera tracking by comparing the results of the video camera tracking with the recorded ground-truth camera trajectories. The accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) of the surface vertices of the reconstructed mesh against those of the ground-truth 3D models. An error of 1.24 mm for the camera trajectories was obtained, and the RMSD for surface reconstruction is 2.54 mm, which compares favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of accurate AR-based annotation and measurement, and the creation of depth cues. These results show the promise of our geometry-aware AR technology for use in MIS surgical scenes. CONCLUSIONS The results show that the new framework is robust and accurate in dealing with challenging situations, such as rapid endoscopy camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction based on a sparse point cloud are effective and operate in real time. This demonstrates the potential of our algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes.
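The surface-accuracy metric above, RMSD between reconstructed and ground-truth vertices, reduces to a short computation once vertex correspondences are fixed. A minimal sketch with made-up coordinates (the paper's meshes and evaluation pipeline are not reproduced here):

```python
import math

def rmsd(reconstructed, ground_truth):
    """Root Mean Square Distance between corresponding 3-D vertices (mm)."""
    assert len(reconstructed) == len(ground_truth)
    sq = [
        sum((a - b) ** 2 for a, b in zip(p, q))
        for p, q in zip(reconstructed, ground_truth)
    ]
    return math.sqrt(sum(sq) / len(sq))

# Illustrative vertices only; a real evaluation compares full meshes,
# typically after aligning them and matching nearest-neighbour vertices.
recon = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 2.0)]
truth = [(0.0, 0.0, 1.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
print(f"RMSD = {rmsd(recon, truth):.2f} mm")
```

When no one-to-one correspondence exists, the per-vertex distance is usually taken to the closest point on the ground-truth surface instead.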
Eurographics | 2016
Marc Edwards; Serban R. Pop; Nigel W. John; Panagiotis D. Ritsos; Nick J. Avis
The Image Projection onto Patients (IPoP) system is work in progress intended to assist medical practitioners in performing procedures such as biopsies, or to provide a novel anatomical education tool, by projecting anatomy and other relevant information directly onto a patient's skin in the operating room. This approach is not currently in widespread hospital use, but it has the benefit of providing effective procedure guidance without the practitioner having to look away from the patient. Developmental work towards the alpha phase of IPoP is presented, including tracking methods for tools such as biopsy needles, patient tracking, image registration, and problems encountered with the multi-mirror effect.
Eurographics | 2015
Panagiotis D. Ritsos; Marc R. Edwards; Iqbal S. Shergill; Nigel W. John
We present the development of a transperineal prostate biopsy simulator with high-fidelity haptic feedback. We describe our current prototype, which uses physical props and a Geomagic Touch. In addition, we discuss a method for collecting in vitro axial needle forces for programming the haptic feedback, along with implemented and forthcoming features such as a display of 2D ultrasound images for targeting, biopsy needle bending, prostate bleeding, and calcification. Our ultimate goal is to provide an affordable, high-fidelity simulation by integrating contemporary off-the-shelf technology components.
Eurographics | 2015
Christopher J. Headleand; Thomas W. Day; Serban R. Pop; Panagiotis D. Ritsos; Nigel W. John
The use of electric wheelchairs is inherently risky, as collisions due to lack of control can result in injury for the user, and potentially for other pedestrians. Introducing new users to powered chairs via virtual reality (VR) provides one possible solution, as it eliminates the risks inherent in the real world during training. However, simulator technology has traditionally been too expensive for VR to be a financially viable solution. Also, current simulators lack the natural interaction possible in the real world, limiting their operational value. We present the early stages of a VR electric wheelchair simulator built using low-cost, consumer-level gaming hardware. The simulator makes use of the Leap Motion to provide a level of interaction with the virtual world that has not previously been demonstrated in wheelchair training simulators. Furthermore, the Oculus Rift provides an immersive experience suitable for our training application.
Archive | 2004
Franck Patrick Vidal; Fernando Bello; Ken Brodlie; Nigel W. John; Derek A. Gould; Roger W. Phillips; Nick J. Avis
IEEE Transactions on Visualization and Computer Graphics | 2018
Min Chen; Kelly P. Gaither; Nigel W. John; Brian McCann
Cyberworlds | 2015
Marc R. Edwards; Nigel W. John
Archive | 2013
Llyr ap Cenydd; Nigel W. John; Nicholas I. Phillips; William Peter Gray