Publication


Featured research published by Richard M. Leahy.


IEEE Signal Processing Magazine | 2001

Electromagnetic brain mapping

Sylvain Baillet; John C. Mosher; Richard M. Leahy

There have been tremendous advances in our ability to produce images of human brain function. Applications of functional brain imaging extend from improving our understanding of the basic mechanisms of cognitive processes to better characterization of pathologies that impair normal function. Magnetoencephalography (MEG) and electroencephalography (EEG) (MEG/EEG) localize neural electrical activity using noninvasive measurements of external electromagnetic signals. Among the available functional imaging techniques, MEG and EEG uniquely have temporal resolutions below 100 ms. This temporal precision allows us to explore the timing of basic neural processes at the level of cell assemblies. MEG/EEG source localization draws on a wide range of signal processing techniques including digital filtering, three-dimensional image analysis, array signal processing, image modeling and reconstruction, blind source separation, and phase synchrony estimation. We describe the underlying models currently used in MEG/EEG source estimation and the various signal processing steps required to compute these sources. In particular, we describe methods for computing the forward fields for known source distributions and parametric and imaging-based approaches to the inverse problem.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1993

An optimal graph theoretic approach to data clustering: theory and its application to image segmentation

Zhenyu Wu; Richard M. Leahy

A novel graph theoretic approach for data clustering is presented and its application to the image segmentation problem is demonstrated. The data to be clustered are represented by an undirected adjacency graph G with arc capacities assigned to reflect the similarity between the linked vertices. Clustering is achieved by removing arcs of G to form mutually exclusive subgraphs such that the largest inter-subgraph maximum flow is minimized. For graphs of moderate size (approximately 2000 vertices), the optimal solution is obtained through partitioning a flow and cut equivalent tree of G, which can be efficiently constructed using the Gomory-Hu algorithm (1961). However, for larger graphs this approach is impractical. New theorems for subgraph condensation are derived and are then used to develop a fast algorithm which hierarchically constructs and partitions a partially equivalent tree of much reduced size. This algorithm results in an optimal solution equivalent to that obtained by partitioning the complete equivalent tree and is able to handle very large graphs with several hundred thousand vertices. The new clustering algorithm is applied to the image segmentation problem. The segmentation is achieved by effectively searching for closed contours of edge elements (equivalent to minimum cuts in G), which consist mostly of strong edges, while rejecting contours containing isolated strong edges. This method is able to accurately locate region boundaries and at the same time guarantees the formation of closed edge contours.
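The minimum-cut idea behind the clustering can be illustrated on a toy similarity graph. The sketch below finds the global minimum cut by plain Edmonds-Karp max-flow over candidate sinks, rather than the Gomory-Hu equivalent-tree construction the paper uses; the graph and capacities are invented for illustration.

```python
from collections import deque

def min_cut_partition(n, capacity):
    """Split vertices {0..n-1} into two clusters by the global minimum cut.

    `capacity[u][v]` holds the similarity (arc capacity) between u and v.
    Edmonds-Karp max-flow is run between vertex 0 and every other sink,
    keeping the cheapest cut (one split of the equivalent-tree method).
    """
    def max_flow(s, t):
        res = [row[:] for row in capacity]  # residual capacities
        flow = 0
        while True:
            # BFS for an augmenting path
            parent = [-1] * n
            parent[s] = s
            q = deque([s])
            while q:
                u = q.popleft()
                for v in range(n):
                    if parent[v] == -1 and res[u][v] > 0:
                        parent[v] = u
                        q.append(v)
            if parent[t] == -1:
                break
            # push the bottleneck capacity along the path
            bott, v = float("inf"), t
            while v != s:
                bott = min(bott, res[parent[v]][v])
                v = parent[v]
            v = t
            while v != s:
                res[parent[v]][v] -= bott
                res[v][parent[v]] += bott
                v = parent[v]
            flow += bott
        # source side of the min cut = vertices reachable in the residual graph
        seen, q = {s}, deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if v not in seen and res[u][v] > 0:
                    seen.add(v)
                    q.append(v)
        return flow, seen

    return min((max_flow(0, t) for t in range(1, n)), key=lambda p: p[0])

# Two tight triangles {0,1,2} and {3,4,5} joined by one weak arc (2-3).
n = 6
cap = [[0] * n for _ in range(n)]
for u, v, w in [(0,1,5),(1,2,5),(0,2,5),(3,4,5),(4,5,5),(3,5,5),(2,3,1)]:
    cap[u][v] = cap[v][u] = w
flow, side = min_cut_partition(n, cap)
print(flow, sorted(side))  # cut value 1, cluster {0, 1, 2}
```

Removing the single weak arc yields the two natural clusters, mirroring how weak contours are cut away during segmentation.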


IEEE Transactions on Biomedical Engineering | 1992

Multiple dipole modeling and localization from spatio-temporal MEG data

John C. Mosher; Paul S. Lewis; Richard M. Leahy

The authors present general descriptive models for spatiotemporal MEG (magnetoencephalogram) data and show the separability of the linear moment parameters and nonlinear location parameters in the MEG problem. A forward model with current dipoles in a spherically symmetric conductor is used as an example; however, other more advanced MEG models, as well as many EEG (electroencephalogram) models, can also be formulated in a similar linear algebra framework. A subspace methodology and computational approach to solving the conventional least-squares problem is presented. A new scanning approach, equivalent to the statistical MUSIC method, is also developed. This subspace method scans three-dimensional space with a one-dipole model, making it computationally feasible to scan the complete head volume.
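The scanning step can be sketched in a few lines: take the dominant left singular direction of the spatio-temporal data matrix as the (rank-one) signal subspace and score each candidate location by its subspace correlation. The toy below uses power iteration and invented gain vectors, not a real MEG forward model.

```python
import math
import random

def music_scan(B, gains, iters=200):
    """One-dipole MUSIC-style scan (pure-Python sketch).

    B     : sensors x time data matrix (list of rows).
    gains : candidate forward (lead-field) vectors, one per scan location.
    Returns each candidate's subspace correlation with the dominant left
    singular vector of B, found by power iteration on B B^T.
    """
    m = len(B)
    # sensor covariance C = B B^T (up to scale)
    C = [[sum(B[i][k] * B[j][k] for k in range(len(B[0]))) for j in range(m)]
         for i in range(m)]
    # power iteration for the dominant eigenvector (rank-one signal subspace)
    random.seed(0)
    u = [random.random() for _ in range(m)]
    for _ in range(iters):
        v = [sum(C[i][j] * u[j] for j in range(m)) for i in range(m)]
        nv = math.sqrt(sum(x * x for x in v))
        u = [x / nv for x in v]
    # subspace correlation |u . g| / |g| for each candidate gain vector
    return [abs(sum(ui * gi for ui, gi in zip(u, g)))
            / math.sqrt(sum(x * x for x in g)) for g in gains]

# Toy data: 4 sensors, true gain (1, 2, 0, 1), sinusoidal time course.
true_g = [1.0, 2.0, 0.0, 1.0]
times = [math.sin(0.3 * t) for t in range(50)]
B = [[gi * s for s in times] for gi in true_g]
cands = [true_g, [1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]]
scores = music_scan(B, cands)
print([round(s, 3) for s in scores])  # the true location scores ~1.0
```

Scanning every voxel with this one-dipole metric and picking the peaks is what makes whole-head search tractable.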


NeuroImage | 2001

Magnetic Resonance Image Tissue Classification Using a Partial Volume Model

David W. Shattuck; Stephanie R. Sandor-Leahy; Kirt A. Schaper; David A. Rottenberg; Richard M. Leahy

We describe a sequence of low-level operations to isolate and classify brain tissue within T1-weighted magnetic resonance images (MRI). Our method first removes nonbrain tissue using a combination of anisotropic diffusion filtering, edge detection, and mathematical morphology. We compensate for image nonuniformities due to magnetic field inhomogeneities by fitting a tricubic B-spline gain field to local estimates of the image nonuniformity spaced throughout the MRI volume. The local estimates are computed by fitting a partial volume tissue measurement model to histograms of neighborhoods about each estimate point. The measurement model uses mean tissue intensity and noise variance values computed from the global image and a multiplicative bias parameter that is estimated for each region during the histogram fit. Voxels in the intensity-normalized image are then classified into six tissue types using a maximum a posteriori classifier. This classifier combines the partial volume tissue measurement model with a Gibbs prior that models the spatial properties of the brain. We validate each stage of our algorithm on real and phantom data. Using data from the 20 normal MRI brain data sets of the Internet Brain Segmentation Repository, our method achieved average kappa indices of kappa = 0.746 +/- 0.114 for gray matter (GM) and kappa = 0.798 +/- 0.089 for white matter (WM) compared to expert labeled data. Our method achieved average kappa indices of kappa = 0.893 +/- 0.041 for GM and kappa = 0.928 +/- 0.039 for WM compared to the ground truth labeling on 12 volumes from the Montreal Neurological Institute's BrainWeb phantom.
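The interplay between an intensity model and a spatial Gibbs prior can be illustrated on a 1-D toy. Below, a maximum-likelihood labelling is refined by iterated conditional modes; ICM is a simplification of the paper's MAP machinery, and the class means, noise level, and prior weight are invented.

```python
def map_classify(img, means, sigma, beta, sweeps=5):
    """MAP tissue labelling on a 1-D "image" (toy sketch of the idea).

    Each voxel intensity is modelled as Gaussian around its class mean;
    a Gibbs prior charges `beta` for each disagreeing neighbour pair.
    Labels are updated by iterated conditional modes (ICM).
    """
    # start from the maximum-likelihood (closest-mean) label
    labels = [min(range(len(means)), key=lambda c: (x - means[c]) ** 2)
              for x in img]
    for _ in range(sweeps):
        for i, x in enumerate(img):
            def energy(c):
                data = (x - means[c]) ** 2 / (2 * sigma ** 2)
                nbrs = [labels[j] for j in (i - 1, i + 1) if 0 <= j < len(img)]
                return data + beta * sum(c != nb for nb in nbrs)
            labels[i] = min(range(len(means)), key=energy)
    return labels

# Two classes (e.g. GM mean 60, WM mean 100); the ambiguous voxel at
# index 2 (value 82) is pulled to the GM class by its neighbours.
img = [58, 61, 82, 59, 62, 98, 101, 99]
labels = map_classify(img, means=[60, 100], sigma=10, beta=2.0)
print(labels)  # [0, 0, 0, 0, 0, 1, 1, 1]
```

Without the prior term, the voxel at index 2 would be labelled WM by intensity alone; the neighbour penalty flips it, which is exactly the spatial regularization the Gibbs prior provides.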


Computational Intelligence and Neuroscience | 2011

Brainstorm: a user-friendly application for MEG/EEG analysis

François Tadel; Sylvain Baillet; John C. Mosher; Dimitrios Pantazis; Richard M. Leahy

Brainstorm is a collaborative open-source application dedicated to magnetoencephalography (MEG) and electroencephalography (EEG) data visualization and processing, with an emphasis on cortical source estimation techniques and their integration with anatomical magnetic resonance imaging (MRI) data. The primary objective of the software is to connect MEG/EEG neuroscience investigators with both the best-established and cutting-edge methods through a simple and intuitive graphical user interface (GUI).


Physics in Medicine and Biology | 1998

High-resolution 3D Bayesian image reconstruction using the microPET small-animal scanner

Jinyi Qi; Richard M. Leahy; Simon R. Cherry; Arion F. Chatziioannou; Thomas H. Farquhar

A Bayesian method is described for reconstruction of high-resolution 3D images from the microPET small-animal scanner. Resolution recovery is achieved by explicitly modelling the depth dependent geometric sensitivity for each voxel in combination with an accurate detector response model that includes factors due to photon pair non-collinearity and inter-crystal scatter and penetration. To reduce storage and computational costs we use a factored matrix in which the detector response is modelled using a sinogram blurring kernel. Maximum a posteriori (MAP) images are reconstructed using this model in combination with a Poisson likelihood function and a Gibbs prior on the image. Reconstructions obtained from point source data using the accurate system model demonstrate a potential for near-isotropic FWHM resolution of approximately 1.2 mm at the center of the field of view compared with approximately 2 mm when using an analytic 3D reprojection (3DRP) method with a ramp filter. These results also show the ability of the accurate system model to compensate for resolution loss due to crystal penetration producing nearly constant radial FWHM resolution of 1 mm out to a 4 mm radius. Studies with a point source in a uniform cylinder indicate that as the resolution of the image is reduced to control noise propagation the resolution obtained using the accurate system model is superior to that obtained using 3DRP at matched background noise levels. Additional studies using pie phantoms with hot and cold cylinders of diameter 1-2.5 mm and 18FDG animal studies appear to confirm this observation.
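The FWHM figures quoted above are conventionally read off point-source profiles by locating the half-maximum crossings on either side of the peak. A small sketch of that calculation, using a made-up triangular profile rather than real scanner data:

```python
def fwhm(xs, ys):
    """Full width at half maximum of a sampled point-spread profile.

    Linear interpolation locates the half-maximum crossings on either
    side of the peak, as is typically done with point-source scans.
    """
    half = max(ys) / 2.0
    peak = ys.index(max(ys))

    def crossing(i, j):
        # interpolate between samples i and j straddling the half level
        t = (half - ys[i]) / (ys[j] - ys[i])
        return xs[i] + t * (xs[j] - xs[i])

    left = next(crossing(i, i + 1) for i in range(peak)
                if ys[i] <= half < ys[i + 1])
    right = next(crossing(i, i + 1) for i in range(peak, len(ys) - 1)
                 if ys[i] >= half > ys[i + 1])
    return right - left

# Triangular profile peaking at x = 0 mm with base half-width 1.2 mm:
xs = [-1.2, -0.8, -0.4, 0.0, 0.4, 0.8, 1.2]
ys = [0.0, 1.0 / 3, 2.0 / 3, 1.0, 2.0 / 3, 1.0 / 3, 0.0]
print(round(fwhm(xs, ys), 3))  # 1.2 (mm)
```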


IEEE Transactions on Medical Imaging | 1989

A generalized EM algorithm for 3-D Bayesian reconstruction from Poisson data using Gibbs priors

T. Hebert; Richard M. Leahy

A generalized expectation-maximization (GEM) algorithm is developed for Bayesian reconstruction, based on locally correlated Markov random-field priors in the form of Gibbs functions and on the Poisson data model. For the M-step of the algorithm, a form of coordinate gradient ascent is derived. The algorithm reduces to the EM maximum-likelihood algorithm as the Markov random-field prior tends towards a uniform distribution. Three different Gibbs function priors are examined. Reconstructions of 3-D images obtained from the Poisson model of single-photon-emission computed tomography are presented.
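The EM maximum-likelihood limit mentioned above is compact enough to sketch: with a uniform prior, the update reduces to the familiar multiplicative MLEM iteration for Poisson data. The system matrix and counts below are invented for illustration; the paper's GEM algorithm would add the Gibbs-prior term to the M-step.

```python
def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM for Poisson emission data (sketch).

    A is the system matrix (detectors x voxels), y the measured counts.
    The EM update is x <- x * A^T(y / Ax) / A^T 1, applied elementwise.
    """
    nd, nv = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(nd)) for j in range(nv)]  # A^T 1
    x = [1.0] * nv
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(nv)) for i in range(nd)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(nd)]
        back = [sum(A[i][j] * ratio[i] for i in range(nd)) for j in range(nv)]
        x = [x[j] * back[j] / sens[j] for j in range(nv)]
    return x

# Tiny 3-detector / 2-voxel system; y generated noise-free from x = (2, 5).
A = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
x_true = [2.0, 5.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(3)]
x = mlem(A, y)
print([round(v, 2) for v in x])  # converges toward [2.0, 5.0]
```

The multiplicative form keeps the estimate nonnegative at every iteration, one reason EM-style updates are attractive for emission tomography.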


Medical Image Analysis | 2002

BrainSuite: an automated cortical surface identification tool.

David W. Shattuck; Richard M. Leahy

We describe a new magnetic resonance (MR) image analysis tool that produces cortical surface representations with spherical topology from MR images of the human brain. The tool provides a sequence of low-level operations in a single package that can produce accurate brain segmentations in clinical time. The tools include skull and scalp removal, image nonuniformity compensation, voxel-based tissue classification, topological correction, rendering, and editing functions. The collection of tools is designed to require minimal user interaction to produce cortical representations. In this paper we describe the theory of each stage of the cortical surface identification process. We then present classification validation results using real and phantom data. We also present a study of interoperator variability.
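The skull and scalp removal stage relies on the classic erode, keep-largest-component, dilate pattern to break thin connections between brain and non-brain tissue. A 1-D toy sketch of that pattern (not BrainSuite's actual implementation, and with an invented mask):

```python
def strip_background(mask, r=1):
    """Erode, keep the largest connected run, then dilate (1-D sketch).

    Mimics the morphological step that detaches and discards small
    structures linked to the brain by thin bridges.
    """
    n = len(mask)
    # erosion: a voxel survives only if its whole r-neighbourhood is set
    erode = [int(all(mask[j] for j in range(max(0, i - r), min(n, i + r + 1))))
             for i in range(n)]
    # find runs of 1s and keep only the longest
    runs, start = [], None
    for i, v in enumerate(erode + [0]):  # sentinel closes a trailing run
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append((i - start, start, i))
            start = None
    _, s, e = max(runs)
    kept = [int(s <= i < e) for i in range(n)]
    # dilation: grow the surviving component back by r voxels
    return [int(any(kept[j] for j in range(max(0, i - r), min(n, i + r + 1))))
            for i in range(n)]

# A large "brain" run plus a small detached "scalp" blob (indices 8-9):
mask = [0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0]
print(strip_background(mask))  # [0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
```

In 2-D or 3-D the same logic runs with spherical structuring elements and a connected-component labelling, but the erode/select/dilate skeleton is identical.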


Physics in Medicine and Biology | 2007

Digimouse: a 3D whole body mouse atlas from CT and cryosection data

Belma Dogdas; David Stout; Arion F. Chatziioannou; Richard M. Leahy

We have constructed a three-dimensional (3D) whole body mouse atlas from coregistered x-ray CT and cryosection data of a normal nude male mouse. High quality PET, x-ray CT and cryosection images were acquired post mortem from a single mouse placed in a stereotactic frame with fiducial markers visible in all three modalities. The image data were coregistered to a common coordinate system using the fiducials and resampled to an isotropic 0.1 mm voxel size. Using interactive editing tools we segmented and labelled whole brain, cerebrum, cerebellum, olfactory bulbs, striatum, medulla, masseter muscles, eyes, lachrymal glands, heart, lungs, liver, stomach, spleen, pancreas, adrenal glands, kidneys, testes, bladder, skeleton and skin surface. The final atlas consists of the 3D volume, in which the voxels are labelled to define the anatomical structures listed above, with coregistered PET, x-ray CT and cryosection images. To illustrate use of the atlas we include simulations of 3D bioluminescence and PET image reconstruction. Optical scatter and absorption values are assigned to each organ to simulate realistic photon transport within the animal for bioluminescence imaging. Similarly, 511 keV photon attenuation values are assigned to each structure in the atlas to simulate realistic photon attenuation in PET. The Digimouse atlas and data are available at http://neuroimage.usc.edu/Digimouse.html.
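Assigning a 511 keV attenuation coefficient to each labelled structure lets one compute a photon's survival probability along any ray as exp(-sum of mu times path length). A minimal sketch with hypothetical labels and illustrative coefficients, not the atlas's actual assignments:

```python
import math

def attenuation_factor(labels, mu_table, dx):
    """Survival probability of a 511 keV photon along one ray.

    `labels` lists the organ label of each voxel the ray crosses, and
    `mu_table` maps labels to linear attenuation coefficients (1/cm);
    the factor is exp(-sum(mu * dx)) for voxel length dx in cm.
    """
    return math.exp(-sum(mu_table[v] * dx for v in labels))

# hypothetical labels: 0 = background, 1 = soft tissue, 2 = bone
mu = {0: 0.0, 1: 0.096, 2: 0.17}
ray = [0, 1, 1, 2, 1, 0]  # voxel labels crossed by the ray
print(round(attenuation_factor(ray, mu, dx=0.1), 4))
```

Summing such factors over all lines of response yields the attenuation model used when simulating realistic PET data from the labelled volume.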


Physics in Medicine and Biology | 1999

A sensor-weighted overlapping-sphere head model and exhaustive head model comparison for MEG.

Mingxiong Huang; John C. Mosher; Richard M. Leahy

The spherical head model has been used in magnetoencephalography (MEG) as a simple forward model for calculating the external magnetic fields resulting from neural activity. For more realistic head shapes, the boundary element method (BEM) or similar numerical methods are used, but at greatly increased computational cost. We introduce a sensor-weighted overlapping-sphere (OS) head model for rapid calculation of more realistic head shapes. The volume currents associated with primary neural activity are used to fit spherical head models for each individual MEG sensor such that the head is more realistically modelled as a set of overlapping spheres, rather than a single sphere. To assist in the evaluation of this OS model with BEM and other head models, we also introduce a novel comparison technique that is based on a generalized eigenvalue decomposition and accounts for the presence of noise in the MEG data. With this technique we can examine the worst possible errors for thousands of dipole locations in a realistic brain volume. We test the traditional single-sphere model, three-shell and single-shell BEM, and the new OS model. The results show that the OS model has accuracy similar to the BEM but is orders of magnitude faster to compute.
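Fitting a local sphere to a patch of head surface reduces, in the standard algebraic formulation, to a linear least-squares problem. The sketch below uses invented noise-free points rather than real scalp digitizations, and shows only the generic sphere fit, not the paper's sensor weighting:

```python
def fit_sphere(points):
    """Least-squares sphere fit (algebraic form), pure-Python sketch.

    Solves 2 p.c + d = |p|^2 with d = r^2 - |c|^2 via the normal
    equations; one such fit per sensor places the local sphere.
    """
    # build normal equations M a = b for a = (cx, cy, cz, d)
    rows = [[2 * x, 2 * y, 2 * z, 1.0] for x, y, z in points]
    rhs = [x * x + y * y + z * z for x, y, z in points]
    M = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    b = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(4)]
    # Gaussian elimination with partial pivoting
    for k in range(4):
        p = max(range(k, 4), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, 4):
            f = M[i][k] / M[k][k]
            for j in range(k, 4):
                M[i][j] -= f * M[k][j]
            b[i] -= f * b[k]
    a = [0.0] * 4
    for k in range(3, -1, -1):
        a[k] = (b[k] - sum(M[k][j] * a[j] for j in range(k + 1, 4))) / M[k][k]
    cx, cy, cz, d = a
    r = (d + cx * cx + cy * cy + cz * cz) ** 0.5
    return (cx, cy, cz), r

# noise-free points on a sphere of radius 2 centred at (1, 0, -1)
pts = [(3, 0, -1), (-1, 0, -1), (1, 2, -1), (1, -2, -1), (1, 0, 1), (1, 0, -3)]
c, r = fit_sphere(pts)
print(c, r)  # centre ~ (1, 0, -1), radius ~ 2
```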

Collaboration


Dive into Richard M. Leahy's collaborations.

Top Co-Authors

Anand A. Joshi (University of Southern California)

Dimitrios Pantazis (McGovern Institute for Brain Research)

Sangtae Ahn (University of Southern California)

Justin P. Haldar (University of Southern California)

Sylvain Baillet (Montreal Neurological Institute and Hospital)

Jinyi Qi (University of California)