Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where A. Jonathan McLeod is active.

Publication


Featured research published by A. Jonathan McLeod.


Medical Image Analysis | 2016

Hierarchical max-flow segmentation framework for multi-atlas segmentation with Kohonen self-organizing map based Gaussian mixture modeling

Martin Rajchl; John S. H. Baxter; A. Jonathan McLeod; Jing Yuan; Wu Qiu; Terry M. Peters; Ali R. Khan

The incorporation of intensity, spatial, and topological information into large-scale multi-region segmentation has been a topic of ongoing research in medical image analysis. Multi-region segmentation problems, such as the segmentation of brain structures, pose unique challenges in which regions may not have a defined intensity, spatial, or topological distinction, but rely on a combination of the three. We propose a novel framework within the Advanced Segmentation Tools (ASETS), which combines large-scale Gaussian mixture models trained via Kohonen self-organizing maps with deformable registration and a convex max-flow optimization algorithm that incorporates region topology as a hierarchy or tree. Our framework is validated on two publicly available neuroimaging datasets, the OASIS and MRBrainS13 databases, against the more conventional Potts model, achieving more accurate segmentations. Each component is accelerated using general-purpose programming on graphics processing units to ensure computational feasibility.
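As a rough illustration of the intensity-modelling component only, the toy sketch below trains a tiny 1-D Kohonen self-organizing map on scalar intensities and turns the learned nodes into Gaussian mixture components. It is a minimal numpy sketch under its own assumptions, not the ASETS implementation, which is large-scale, GPU-accelerated, and coupled to registration and max-flow optimization.

```python
import numpy as np

def som_gmm_intensity_model(intensities, n_nodes=8, n_iter=2000,
                            lr0=0.5, sigma0=2.0, rng=None):
    """Fit a 1-D Kohonen SOM to scalar intensities and turn the learned
    nodes into Gaussian mixture components (means, variances, weights).
    A toy stand-in for the paper's large-scale, GPU-accelerated model."""
    rng = np.random.default_rng(rng)
    x = np.asarray(intensities, dtype=float).ravel()
    # Initialise node weights by spreading them over the intensity range.
    nodes = np.linspace(x.min(), x.max(), n_nodes)
    idx = np.arange(n_nodes)
    for t in range(n_iter):
        lr = lr0 * (1.0 - t / n_iter)              # decaying learning rate
        sigma = sigma0 * (1.0 - t / n_iter) + 0.5  # decaying neighbourhood width
        s = x[rng.integers(len(x))]                # random training sample
        bmu = np.argmin(np.abs(nodes - s))         # best-matching unit
        h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
        nodes += lr * h * (s - nodes)              # neighbourhood update
    # Hard-assign samples to their nearest node to estimate the mixture.
    assign = np.argmin(np.abs(x[:, None] - nodes[None, :]), axis=1)
    means, variances, weights = [], [], []
    for k in range(n_nodes):
        xk = x[assign == k]
        if xk.size == 0:
            continue
        means.append(xk.mean())
        variances.append(xk.var() + 1e-6)
        weights.append(xk.size / x.size)
    return np.array(means), np.array(variances), np.array(weights)

# Example: model the intensity distribution of one region's voxels
# (here simulated; in practice they would come from a propagated atlas label).
region_voxels = np.random.default_rng(0).normal(120, 15, size=5000)
mu, var, w = som_gmm_intensity_model(region_voxels)
```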


medical image computing and computer assisted intervention | 2013

Robust Intraoperative US Probe Tracking Using a Monocular Endoscopic Camera

Uditha L. Jayarathne; A. Jonathan McLeod; Terry M. Peters; Elvis C. S. Chen

In the context of minimally invasive procedures involving both endoscopic video and ultrasound, we present a vision-based method to track the ultrasound probe using a standard monocular laparoscopic video camera. This approach requires only cosmetic modification to the ultrasound probe and obviates the need for magnetic tracking of either instrument. We describe an extended Kalman filter framework that solves for both feature correspondence and pose estimation, and is able to track a 3D pattern on the surface of the ultrasound probe in near real time. The tracking capability is demonstrated by performing an ultrasound calibration of a visually tracked ultrasound probe using a standard endoscopic video camera. Ultrasound calibration resulted in a mean target registration error (TRE) of 2.3 mm, and comparison with an external optical tracker demonstrated a mean fiducial registration error (FRE) of 4.4 mm between the two tracking systems.
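For intuition only, the snippet below shows a single-frame analogue of this tracking problem: estimating the pose of a known fiducial pattern from its 2D detections with OpenCV's solvePnP. The paper's extended Kalman filter additionally resolves feature correspondence and filters the pose over time; the pattern coordinates and camera intrinsics here are made-up placeholders.

```python
import numpy as np
import cv2

# Known 3D positions (mm) of a flat fiducial pattern attached to the probe,
# expressed in the probe's own coordinate frame. Placeholder values.
pattern_3d = np.array([
    [0.0, 0.0, 0.0],
    [10.0, 0.0, 0.0],
    [10.0, 10.0, 0.0],
    [0.0, 10.0, 0.0],
    [5.0, 5.0, 0.0],
], dtype=np.float64)

# Corresponding 2D detections (pixels) in the endoscopic image,
# e.g. from a blob or corner detector. Placeholder values.
pattern_2d = np.array([
    [320.0, 240.0],
    [400.0, 238.0],
    [402.0, 318.0],
    [322.0, 320.0],
    [361.0, 279.0],
], dtype=np.float64)

# Pinhole intrinsics from a prior camera calibration (placeholders).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion already corrected

ok, rvec, tvec = cv2.solvePnP(pattern_3d, pattern_2d, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation of the probe w.r.t. the camera
    print("probe-to-camera rotation:\n", R)
    print("probe-to-camera translation (mm):", tvec.ravel())
```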


Magnetic Resonance Imaging | 2014

Stationary wavelet transform for under-sampled MRI reconstruction

Mohammad H. Kayvanrad; A. Jonathan McLeod; John S. H. Baxter; Charles A. McKenzie; Terry M. Peters

In addition to coil sensitivity data (parallel imaging), sparsity constraints are often used as an additional ℓp-norm penalty for under-sampled MRI reconstruction (compressed sensing). Penalizing the traditional decimated wavelet transform (DWT) coefficients, however, results in visual pseudo-Gibbs artifacts, some of which are attributed to the lack of translation invariance of the wavelet basis. We show that these artifacts can be greatly reduced by penalizing the translation-invariant stationary wavelet transform (SWT) coefficients instead. This holds with various additional reconstruction constraints, including coil sensitivity profiles and total variation. Additionally, SWT reconstructions result in lower error values and faster convergence compared to DWT. These concepts are illustrated with extensive experiments on in vivo MRI data, with particular emphasis on multiple-channel acquisitions.
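As a loose illustration of why the SWT penalty behaves differently from the DWT one, the sketch below soft-thresholds stationary wavelet coefficients of a 2D image with PyWavelets. This is only the shrinkage/denoising step of such a reconstruction, not the full iterative compressed-sensing algorithm from the paper, and the wavelet choice and threshold are arbitrary.

```python
import numpy as np
import pywt

def swt_soft_threshold(image, wavelet="db4", level=2, thresh=0.05):
    """Soft-threshold the detail coefficients of an undecimated (stationary)
    wavelet transform. The translation-invariant SWT avoids the blocky
    pseudo-Gibbs artifacts that thresholding a decimated DWT can introduce.
    Image dimensions must be divisible by 2**level."""
    coeffs = pywt.swt2(image, wavelet, level=level)
    shrunk = []
    for approx, (ch, cv, cd) in coeffs:
        # Keep the approximation band, shrink only the detail bands.
        shrunk.append((approx,
                       tuple(pywt.threshold(c, thresh, mode="soft")
                             for c in (ch, cv, cd))))
    return pywt.iswt2(shrunk, wavelet)

# Toy usage: denoise a noisy 256x256 test image.
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(256), np.hanning(256))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = swt_soft_threshold(noisy, thresh=0.1)
```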


computer assisted radiology and surgery | 2015

Registration of 3D shapes under anisotropic scaling

Elvis C. S. Chen; A. Jonathan McLeod; John S. H. Baxter; Terry M. Peters

Purpose: Several medical imaging modalities exhibit inherent scaling among the acquired data: the scale in an ultrasound image varies with the speed of sound, and the scale of the range data used to reconstruct organ surfaces depends on the scanner distance. In the context of surface-based registration, these scaling factors are often assumed to be isotropic or known a priori. Accounting for such anisotropies in scale has the potential to dramatically improve the registration and calibration procedures that are essential for robust image-guided interventions.

Methods: We introduce an extension to the ordinary iterative closest point (ICP) algorithm, solving for the similarity transformation between point sets comprising anisotropic scaling followed by rotation and translation. The proposed anisotropic-scaled ICP (ASICP) incorporates a novel use of the Mahalanobis distance to establish correspondence and a new solution for the underlying registration problem. The derivation and convergence properties of ASICP are presented, and practical implementation details are discussed. Because the ASICP algorithm is independent of shape representation and feature extraction, it is generalizable to registrations involving scaling.

Results: Experimental results involving ultrasound calibration, registration of partially overlapping range data, whole surfaces, and multi-modality surface data (intraoperative ultrasound to preoperative MR) show dramatic improvement in fiducial registration error.

Conclusion: We present a generalization of the ICP algorithm, solving for a similarity transform between two point sets by means of anisotropic scales followed by rotation and translation. Our anisotropic-scaled ICP algorithm shares many traits with the ordinary ICP, including guaranteed convergence, independence of shape representation, and general applicability.
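To make the transform model y ≈ R·diag(s)·x + t concrete, here is a much-simplified single-loop sketch of ICP with anisotropic scale estimation: Euclidean nearest-neighbour correspondences, a Kabsch rotation estimate, and a per-axis least-squares scale. It is not the paper's ASICP, which uses Mahalanobis-distance correspondences and a different solver with proven convergence; it is only an assumption-laden outline of the idea.

```python
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Least-squares rotation aligning centred src points to centred dst points."""
    H = src.T @ dst
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def simple_anisotropic_icp(moving, fixed, n_iter=30, inner=5):
    """Toy ICP variant estimating y ~ R @ diag(s) @ x + t.
    Correspondences use plain Euclidean nearest neighbours, unlike ASICP."""
    R, s, t = np.eye(3), np.ones(3), np.zeros(3)
    tree = cKDTree(fixed)
    for _ in range(n_iter):
        # 1. Correspondence: nearest fixed point to each transformed moving point.
        transformed = (R @ (s * moving).T).T + t
        _, idx = tree.query(transformed)
        x, y = moving, fixed[idx]
        xc, yc = x - x.mean(0), y - y.mean(0)
        # 2. Alternate rotation (Kabsch) and per-axis scale (least squares).
        for _ in range(inner):
            R = kabsch(s * xc, yc)
            z = yc @ R                 # rows of z are R.T applied to rows of yc
            s = np.sum(xc * z, axis=0) / np.sum(xc * xc, axis=0)
        t = y.mean(0) - R @ (s * x.mean(0))
    return R, s, t

# Toy usage: try to recover a known anisotropic similarity transform
# (like any ICP variant, this may only find a local optimum).
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(500, 3))
R_true = kabsch(rng.normal(size=(4, 3)), rng.normal(size=(4, 3)))
target = (R_true @ (np.array([1.2, 0.8, 1.5]) * pts).T).T + np.array([0.1, -0.2, 0.3])
R_est, s_est, t_est = simple_anisotropic_icp(pts, target)
```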


Proceedings of SPIE | 2014

Motion magnification for endoscopic surgery

A. Jonathan McLeod; John S. H. Baxter; Sandrine de Ribaupierre; Terry M. Peters

Endoscopic and laparoscopic surgeries are used for many minimally invasive procedures but limit the visual and haptic feedback available to the surgeon. This can make vessel-sparing procedures particularly challenging to perform. Previous approaches have focused on hardware-intensive intraoperative imaging or augmented reality systems that are difficult to integrate into the operating room. This paper presents a simple approach in which motion is visually enhanced in the endoscopic video to reveal pulsating arteries. This is accomplished by amplifying subtle, periodic changes in intensity coinciding with the patient's pulse. The method is then applied to two procedures to illustrate its potential. The first, endoscopic third ventriculostomy, is a neurosurgical procedure in which the floor of the third ventricle must be fenestrated without injury to the basilar artery. The second, nerve-sparing robotic prostatectomy, involves removing the prostate while limiting damage to the neurovascular bundles. In both procedures, motion magnification can enhance the subtle pulsation of these structures to aid in identifying and avoiding them.
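A bare-bones way to see the core idea, amplifying periodic intensity changes near the cardiac frequency, is a per-pixel temporal bandpass filter followed by amplification, in the spirit of Eulerian video magnification. The sketch below uses an FFT-domain bandpass on a grayscale frame stack and is not the processing pipeline from the paper; frame rate, band edges, and gain are placeholder values.

```python
import numpy as np

def magnify_pulsation(frames, fps, f_lo=0.8, f_hi=2.0, gain=20.0):
    """Amplify periodic intensity variation in a grayscale video.

    frames:     array of shape (T, H, W), values in [0, 1].
    fps:        frame rate in Hz.
    f_lo, f_hi: temporal band (Hz) expected to contain the pulse.
    gain:       amplification applied to the band-passed signal."""
    frames = np.asarray(frames, dtype=float)
    T = frames.shape[0]
    # Per-pixel temporal FFT.
    spectrum = np.fft.rfft(frames, axis=0)
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    # Zero everything outside the cardiac band to isolate pulsatile changes.
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[~band] = 0.0
    pulsatile = np.fft.irfft(spectrum, n=T, axis=0)
    # Add the amplified band-passed component back onto the original video.
    return np.clip(frames + gain * pulsatile, 0.0, 1.0)

# Toy usage: a synthetic 30 fps clip with a faint 1.2 Hz flicker in one patch.
fps, T = 30, 150
t = np.arange(T) / fps
clip = 0.5 * np.ones((T, 64, 64))
clip[:, 20:30, 20:30] += 0.01 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
enhanced = magnify_pulsation(clip, fps)
```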


Proceedings of SPIE | 2012

Feature identification for image-guided transcatheter aortic valve implantation

Pencilla Lang; Martin Rajchl; A. Jonathan McLeod; Michael W.A. Chu; Terry M. Peters

Transcatheter aortic valve implantation (TAVI) is a less invasive alternative to open-heart surgery and is critically dependent on imaging for accurate placement of the new valve. Augmented image guidance for TAVI can be provided by registering intra-operative transesophageal echocardiography (TEE) with a model derived from pre-operative CT. Automatic contour delineation on TEE images of the aortic root is required for real-time registration. This study develops an algorithm to automatically extract contours on simultaneous cross-plane short-axis and long-axis (XPlane) TEE views and register these features to a 3D pre-operative model. A continuous max-flow approach is used to segment the aortic root, followed by analysis of curvature to select appropriate contours for use in registration. Results demonstrate mean contour boundary distance errors of 1.3 mm and 2.8 mm for the short- and long-axis views, respectively, and a mean target registration error of 5.9 mm. Real-time image guidance has the potential to increase accuracy and reduce complications in TAVI.


computer assisted radiology and surgery | 2015

Detection and visualization of dural pulsation for spine needle interventions

A. Jonathan McLeod; John S. H. Baxter; Golafsoun Ameri; Sugantha Ganapathy; Terry M. Peters; Elvis C. S. Chen

Purpose: Epidural and spinal anesthesia are common procedures that require a needle to be inserted into the patient's spine to deliver an anesthetic. Traditionally, these procedures were performed without image guidance, using only palpation to identify the correct vertebral interspace. More recently, ultrasound has seen widespread use in guiding spinal needle interventions. Dural pulsation is a valuable cue for finding a path through the vertebral interspace and for determining needle insertion depth. However, dural pulsation is challenging to detect and is not perceptible in many cases. Here, a method for automatically detecting very subtle dural pulsation from live ultrasound video is presented.

Methods: A periodic model is fit to the B-mode intensity values through extended Kalman filtering. The fitted frequencies and amplitudes are used to detect and visualize dural pulsation. The method is validated retrospectively on synthetic and human video and used in real time on an interventional spinal phantom.

Results: The method was capable of quickly identifying subtle dural pulsation and was robust to background noise and motion. The pulsation visualization reduced both the normalized path length and the number of attempts required in a mock epidural procedure.

Conclusion: This technique is able to localize the dura and help find a clear needle trajectory to the epidural space. It can run in real time on commercial ultrasound systems and has the potential to improve ultrasound guidance of spine needle interventions.
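For a sense of how a periodic model can be tracked per pixel, here is a minimal extended Kalman filter that fits A·sin(2πf·t + φ) + b to a single intensity time series. The state layout, noise levels, and initialization are assumptions chosen for illustration; the paper's filter and its visualization are more elaborate.

```python
import numpy as np

def ekf_periodic_fit(signal, dt, f0=1.2):
    """Fit A*sin(2*pi*f*t + phi) + b to one intensity trace with an EKF.

    State: [A, f, phi, b]. Returns the state history so the amplitude and
    frequency estimates can be thresholded to flag pulsatile pixels."""
    x = np.array([0.1, f0, 0.0, float(np.mean(signal))])   # initial guess
    P = np.diag([1.0, 0.25, np.pi ** 2, 1.0])              # state covariance
    Q = np.diag([1e-4, 1e-5, 1e-4, 1e-4])                  # process noise
    R = 0.05 ** 2                                          # measurement noise
    history = []
    for k, z in enumerate(signal):
        t = k * dt
        # Predict: the sinusoid parameters are modelled as (nearly) constant.
        P = P + Q
        A, f, phi, b = x
        theta = 2.0 * np.pi * f * t + phi
        z_pred = A * np.sin(theta) + b
        # Jacobian of the measurement model with respect to the state.
        H = np.array([np.sin(theta),
                      A * np.cos(theta) * 2.0 * np.pi * t,
                      A * np.cos(theta),
                      1.0])
        S = H @ P @ H + R
        K = (P @ H) / S
        x = x + K * (z - z_pred)
        P = P - np.outer(K, H) @ P
        history.append(x.copy())
    return np.array(history)

# Toy usage: a noisy 1 Hz pulsation sampled at 20 frames per second.
dt = 1.0 / 20.0
t = np.arange(200) * dt
trace = 0.6 + 0.05 * np.sin(2 * np.pi * 1.0 * t + 0.7)
trace += 0.01 * np.random.default_rng(0).standard_normal(t.size)
states = ekf_periodic_fit(trace, dt)
amplitude_est, freq_est = states[-1, 0], states[-1, 1]
```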


International Journal of Computer Vision | 2017

Directed Acyclic Graph Continuous Max-Flow Image Segmentation for Unconstrained Label Orderings

John S. H. Baxter; Martin Rajchl; A. Jonathan McLeod; Jing Yuan; Terry M. Peters

Label ordering, the specification of subset–superset relationships among segmentation labels, has been of increasing interest in image segmentation, as it allows complex regions to be represented as a collection of simple parts. Recent advances in continuous max-flow segmentation have widely expanded the possible label orderings, from binary background/foreground problems to extendable frameworks in which the label ordering can be specified. This article presents Directed Acyclic Graph Continuous Max-Flow image segmentation, which is flexible enough to incorporate any label ordering without constraints. The framework uses augmented Lagrangian multipliers and primal–dual optimization to develop a highly parallelized solver implemented with general-purpose GPU programming. It is validated on synthetic, natural, and medical images, illustrating its general applicability.


Proceedings of SPIE | 2015

Dynamic heart phantom with functional mitral and aortic valves

Claire Vannelli; John Moore; A. Jonathan McLeod; Dennis Ceh; Terry M. Peters

Cardiac valvular stenosis, prolapse, and regurgitation are increasingly common conditions, particularly in an elderly population with limited potential for on-pump cardiac surgery. NeoChord©, MitraClip©, and numerous stent-based transcatheter aortic valve implantation (TAVI) devices provide an alternative to intrusive cardiac operations; performed while the heart is beating, these procedures require surgeons and cardiologists to learn new image-guidance-based techniques. Developing these visual aids and protocols is a challenging task that benefits from sophisticated simulators. Existing models lack features needed to simulate off-pump valvular procedures: functional, dynamic valves; apical and vascular access; and user flexibility for different activation patterns such as variable heart rates and rapid pacing. We present a left ventricle phantom with these characteristics. The phantom can be used to simulate valvular repair and replacement procedures under magnetic tracking, augmented reality, fluoroscopy, and ultrasound guidance. This tool serves as a platform to develop the image-guidance and image-processing techniques required for a range of minimally invasive cardiac interventions. The phantom mimics in vivo mitral and aortic valve motion, permitting realistic ultrasound images of these components to be acquired, and has a physiologically realistic left ventricular ejection fraction of 50%. Given its realistic imaging properties and non-biodegradable composition (silicone for tissue, water for blood), the system promises to reduce the number of animal trials required to develop image-guidance applications for valvular repair and replacement. The phantom has been used in validation studies for both TAVI image-guidance techniques [1] and image-based mitral valve tracking algorithms [2].


Computerized Medical Imaging and Graphics | 2016

Phantom study of an ultrasound guidance system for transcatheter aortic valve implantation

A. Jonathan McLeod; Maria E. Currie; John Moore; Daniel Bainbridge; Bob Kiaii; Michael W.A. Chu; Terry M. Peters

A guidance system using transesophageal echocardiography and magnetic tracking is presented that avoids the nephrotoxic contrast agents and ionizing radiation required for traditional fluoroscopically guided procedures. The aortic valve is identified in tracked biplane transesophageal echocardiography and used to guide stent deployment in a mixed reality environment. Additionally, a transapical delivery tool with intracardiac echocardiography capable of monitoring stent deployment was created. The system resulted in a deployment depth error of 3.4 mm in a phantom, which improved to 2.3 mm with the custom-made delivery tool. In comparison, the variability in deployment depth for traditional fluoroscopic guidance was estimated at 3.4 mm.

Collaboration


Dive into A. Jonathan McLeod's collaborations.

Top Co-Authors

Terry M. Peters (University of Western Ontario)
John S. H. Baxter (University of Western Ontario)
Elvis C. S. Chen (Robarts Research Institute)
Uditha L. Jayarathne (University of Western Ontario)
John Moore (Robarts Research Institute)
Golafsoun Ameri (Robarts Research Institute)
Jing Yuan (University of Western Ontario)
Stephen E. Pautler (University of Western Ontario)