J. Michael Brady
University of Oxford
Publications
Featured research published by J. Michael Brady.
International Journal of Computer Vision | 1997
Stephen M. Smith; J. Michael Brady
This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new feature detectors are based on the minimization of this local image region, and the noise reduction method uses this region as the smoothing neighbourhood. The resulting methods are accurate, noise resistant and fast. Details of the new feature detectors and of the new noise reduction method are described, along with test results.
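As a rough illustration of the principle behind these detectors: each pixel's "similar brightness" region (the USAN) is found by comparing mask pixels to the central pixel, and corners are flagged where that region is small. The sketch below is a minimal, hypothetical rendering of this idea, not the paper's detectors; it omits non-maximum suppression and the other refinements, and the function name and thresholds are illustrative.

```python
# Minimal sketch of the "small similar-brightness region" idea: for each pixel
# (the nucleus), measure how many pixels in a circular mask have similar
# brightness; a small count suggests a corner. Thresholds are illustrative.
import numpy as np

def usan_corner_response(image, radius=3, t=27.0):
    """Return a response map that is large where the similar-brightness area is small."""
    img = image.astype(np.float64)
    h, w = img.shape
    # Circular mask offsets, excluding the nucleus itself.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (ys**2 + xs**2 <= radius**2) & ~((ys == 0) & (xs == 0))
    offsets = list(zip(ys[mask], xs[mask]))

    pad = np.pad(img, radius, mode="edge")
    usan = np.zeros_like(img)
    for dy, dx in offsets:
        shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
        # Smooth brightness-similarity weight: near 1 for similar pixels, near 0 otherwise.
        usan += np.exp(-((shifted - img) / t) ** 6)

    g = 0.5 * len(offsets)            # geometric threshold: half the maximum area
    return np.maximum(g - usan, 0.0)

if __name__ == "__main__":
    # Synthetic test: a bright square on a dark background has four corners.
    im = np.zeros((64, 64)); im[20:44, 20:44] = 200.0
    r = usan_corner_response(im)
    print("strongest responses near:", np.argwhere(r > 0.9 * r.max())[:4])
```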
Medical Image Computing and Computer-Assisted Intervention | 2011
Mattias P. Heinrich; Mark Jenkinson; Manav Bhushan; Tahreema N. Matin; Fergus V. Gleeson; J. Michael Brady; Julia A. Schnabel
Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this problem and proposes a new similarity metric for multi-modal registration, the non-local shape descriptor. It aims to extract the shape of anatomical features in a non-local region. By utilizing the dense evaluation of shape descriptors, this new measure bridges the gap between intensity-based and geometric feature-based similarity criteria. Our new metric allows for accurate and reliable registration of clinical multi-modal datasets and is robust to considerable differences between modalities, such as non-functional intensity relations, different amounts of noise and non-uniform bias fields. The measure has been implemented in a non-rigid diffusion-regularized registration framework. It has been applied to synthetic test images and challenging clinical MRI and CT chest scans. Experimental results demonstrate its advantages over the most commonly used similarity metric, mutual information, and show improved alignment of anatomical landmarks.
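The descriptor idea can be sketched in a few lines: rather than comparing intensities across modalities directly, each voxel is described by how similar its patch is to patches at a set of surrounding offsets, and the registration metric compares these descriptors. The sketch below is a simplified, hypothetical stand-in for the paper's non-local shape descriptor (it simplifies the weighting and normalisation); all function names and parameters are illustrative.

```python
# Simplified patch-based self-similarity descriptor and a descriptor-to-descriptor
# distance between two images of the same scene in different "modalities".
import numpy as np
from scipy.ndimage import uniform_filter

def patch_ssd(img, offset, patch=1):
    """Patch-wise SSD between each location and the location shifted by `offset`."""
    shifted = np.roll(img, shift=offset, axis=(0, 1))
    diff2 = (img - shifted) ** 2
    # Aggregate the squared difference over (2*patch+1)^2 windows.
    return uniform_filter(diff2, size=2 * patch + 1, mode="nearest")

def self_similarity_descriptor(img, offsets, sigma=0.5):
    """Per-pixel descriptor: normalised exp(-patch SSD / sigma) over the offsets."""
    maps = np.stack([np.exp(-patch_ssd(img, o) / sigma) for o in offsets], axis=-1)
    return maps / (maps.sum(axis=-1, keepdims=True) + 1e-12)

def descriptor_distance(img_a, img_b, offsets):
    """Pixel-wise L1 distance between the two descriptor fields."""
    da = self_similarity_descriptor(img_a, offsets)
    db = self_similarity_descriptor(img_b, offsets)
    return np.abs(da - db).sum(axis=-1)

if __name__ == "__main__":
    offsets = [(-2, 0), (2, 0), (0, -2), (0, 2)]
    a = np.zeros((32, 32)); a[10:22, 10:22] = 1.0      # bright square ("modality A")
    b = 5.0 - 4.0 * a                                  # same shape, inverted contrast
    print("mean descriptor distance:", float(descriptor_distance(a, b, offsets).mean()))
```

The point of the toy demo is that the two images disagree badly intensity-to-intensity but agree well descriptor-to-descriptor, which is the property a multi-modal metric needs.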
British Machine Vision Conference | 1992
Larry S. Shapiro; Han Wang; J. Michael Brady
We present a robust and inherently parallel strategy for tracking “corner” features on independently moving (and possibly non-rigid) objects. The system operates over long, monocular image sequences and comprises two main parts. A matcher performs two-frame correspondence based on spatial proximity and similarity in local image structure, while a tracker maintains an image trajectory (and predictor) for every feature. The use of low-level features ensures an opportunistic and widely applicable algorithm. Moreover, the system copes with noisy data, predictor failure, and occlusion and disocclusion of scene structure. Motion and scene analysis modules can then be built onto this framework. The algorithm is aimed at applications with small inter-frame motion, such as videoconferencing.
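A hedged sketch of the two-frame matching step described above: candidate matches are restricted to a spatial search radius around each corner, and ties are broken by similarity of the local image patch. The SSD patch measure, the constant-velocity predictor, and all names below are illustrative choices consistent with the description, not the authors' implementation.

```python
# Two-frame corner matching by spatial proximity plus local image-structure
# similarity, with a simple constant-velocity predictor for the tracker.
import numpy as np

def match_corners(img1, img2, corners1, corners2, radius=10, patch=3):
    """Greedy nearest-patch matching within a spatial search radius."""
    def patch_at(img, y, x):
        return img[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)

    matches = []
    for i, (y1, x1) in enumerate(corners1):
        best_j, best_cost = None, np.inf
        for j, (y2, x2) in enumerate(corners2):
            if (y1 - y2) ** 2 + (x1 - x2) ** 2 > radius ** 2:
                continue                      # outside the spatial search window
            p1, p2 = patch_at(img1, y1, x1), patch_at(img2, y2, x2)
            if p1.shape != p2.shape:
                continue                      # patch falls off the image border
            cost = np.sum((p1 - p2) ** 2)     # local image-structure similarity
            if cost < best_cost:
                best_j, best_cost = j, cost
        if best_j is not None:
            matches.append((i, best_j))
    return matches

def predict(track):
    """Constant-velocity prediction from the last two positions of a trajectory."""
    track = np.asarray(track, dtype=float)
    if len(track) < 2:
        return track[-1]
    return track[-1] + (track[-1] - track[-2])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img1 = rng.random((64, 64))
    img2 = np.roll(img1, shift=(2, 1), axis=(0, 1))           # scene shifted by (2, 1)
    corners1 = [(20, 20), (40, 30), (30, 45)]
    corners2 = [(y + 2, x + 1) for y, x in corners1]
    print(match_corners(img1, img2, corners1, corners2))      # expect [(0, 0), (1, 1), (2, 2)]
```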
Digital Mammography / IWDM | 1998
J. Michael Brady; Ralph Highnam
We present an approach to detecting masses in temporal and bilateral (left-right) pairs of mammograms. Masses typically appear in a mammogram as well-defined, relatively bright regions. The converse is not true; by no means all relatively bright, circumscribed regions correspond to masses. Common to our approach to matching temporal and bilateral mammogram pairs is a representation of the nested structure of “salient” bright regions in a mammogram; the representation is described in Sections 2 and 3. In the case of temporal pairs, “salient” regions are extracted independently in two mammograms of the same breast, nominally the same view at two different times, and we seek to highlight those that either appear in the later mammogram but not in the earlier one, or that have changed significantly between the two. Similarly, in the case of bilateral pairs, we seek to highlight salient regions that appear in only one of the same-view left and right mammograms of the same patient, taken at approximately the same time. A fuller version of this article appears as Chapter 13 of [4].
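One simple way to realise a nested representation of salient bright regions is to threshold the image at a series of decreasing levels and record which coarser region contains each finer one. The sketch below is a hypothetical stand-in for the representation described in Sections 2 and 3, not the paper's construction; the thresholds and names are illustrative.

```python
# Build a containment (nesting) relation between bright regions found at a
# series of decreasing thresholds.
import numpy as np
from scipy.ndimage import label

def nested_bright_regions(image, levels):
    """Return per-level component labels and child -> parent containment links."""
    levels = sorted(levels, reverse=True)            # brightest (innermost) regions first
    labelled = []
    for t in levels:
        lab, n = label(image >= t)
        labelled.append((t, lab, n))

    containment = {}                                 # (level, component) -> (coarser level, component)
    for i in range(len(labelled) - 1):
        t_hi, lab_hi, n_hi = labelled[i]
        t_lo, lab_lo, _ = labelled[i + 1]
        for comp in range(1, n_hi + 1):
            ys, xs = np.nonzero(lab_hi == comp)
            parent = int(lab_lo[ys[0], xs[0]])       # the lower-threshold region containing it
            containment[(t_hi, comp)] = (t_lo, parent)
    return labelled, containment

if __name__ == "__main__":
    im = np.zeros((40, 40))
    im[5:35, 5:35] = 0.4                             # broad bright area (e.g. dense tissue)
    im[10:18, 10:18] = 0.8                           # brighter nested blob ("mass-like")
    _, tree = nested_bright_regions(im, levels=[0.3, 0.6])
    print(tree)                                      # the nested blob points to the broad area
```

Matching two mammograms then amounts to comparing these nesting structures and flagging salient regions present in one structure but absent (or markedly changed) in the other.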
Medical Image Computing and Computer-Assisted Intervention | 2011
Manav Bhushan; Julia A. Schnabel; Laurent Risser; Mattias P. Heinrich; J. Michael Brady; Mark Jenkinson
We present a novel Bayesian framework for non-rigid motion correction and pharmacokinetic parameter estimation in dynamic contrast-enhanced MRI (dceMRI) sequences, which incorporates a physiological image formation model into the similarity measure used for motion correction. The similarity measure is based on the maximization of the joint posterior probability of the transformations which need to be applied to each image in the dataset to bring all images into alignment, and the physiological parameters which best explain the data. The deformation framework used to deform each image is based on the diffeomorphic logDemons algorithm. We then use this method to co-register images from simulated and real dceMRI datasets and show that the method leads to an improvement in the estimation of physiological parameters as well as improved alignment of the images.
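The alternation at the heart of such model-based motion correction can be caricatured as: fit the enhancement model given the current alignment, then re-align each frame against the model's prediction. The toy sketch below uses a linear "uptake" model and integer shifts purely for illustration; the paper itself fits pharmacokinetic parameters within a Bayesian formulation and deforms images with diffeomorphic logDemons updates. All names are hypothetical.

```python
# Toy alternation between (1) fitting a per-voxel enhancement model to the time
# series and (2) re-aligning each frame to the model's prediction.
import numpy as np

def fit_linear_uptake(frames, times):
    """Least-squares fit of I(x, t) ~ a(x) + b(x) * t for every voxel."""
    T = np.stack([np.ones_like(times), times], axis=1)         # (n_frames, 2)
    data = frames.reshape(len(times), -1)                      # (n_frames, n_voxels)
    coeffs, *_ = np.linalg.lstsq(T, data, rcond=None)
    return coeffs.reshape(2, *frames.shape[1:])                # a-map and b-map

def best_shift(frame, prediction, max_shift=3):
    """Integer translation that best matches the model prediction (SSD)."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cost = np.sum((np.roll(frame, (dy, dx), axis=(0, 1)) - prediction) ** 2)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

def motion_correct(frames, times, n_iters=3):
    frames = frames.copy()
    for _ in range(n_iters):
        a, b = fit_linear_uptake(frames, times)                # model step
        for k, t in enumerate(times):                          # alignment step
            dy, dx = best_shift(frames[k], a + b * t)
            frames[k] = np.roll(frames[k], (dy, dx), axis=(0, 1))
    return frames
```

In the paper the "model step" estimates pharmacokinetic parameters under a Bayesian prior and the "alignment step" is a diffeomorphic registration update, but the alternating structure is the analogous part.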
British Machine Vision Conference | 1991
Larry S. Shapiro; J. Michael Brady
We propose a novel method for performing point-feature correspondence based on a modal shape description. Introducing shape information into a low-level matching process allows our algorithm to cope easily with rotations and translations in the image plane, yet still give a dense correspondence. We also show positive results for scale changes and small skews, and demonstrate how reflectional symmetries can be detected.
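The modal matching idea can be sketched as follows (a simplified, hypothetical rendering, not the authors' exact algorithm): build a Gaussian proximity matrix for each point set, take the rows of its leading eigenvectors as per-point shape features, and pair points whose features are closest. The sign resolution below is deliberately crude and assumes the two point lists are in roughly corresponding order, which holds for the demo.

```python
# Eigenvector ("modal") point matching sketch: proximity matrix -> eigenvectors
# -> per-point feature vectors -> nearest-feature pairing.
import numpy as np

def modal_features(points, sigma=10.0, k=None):
    """Rows of the leading eigenvectors of the Gaussian proximity matrix."""
    pts = np.asarray(points, dtype=float)
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    H = np.exp(-d2 / (2.0 * sigma ** 2))
    vals, vecs = np.linalg.eigh(H)                  # symmetric, so real spectrum
    order = np.argsort(vals)[::-1]                  # strongest modes first
    if k is None:
        k = len(pts)
    return vecs[:, order[:k]]

def modal_match(points_a, points_b, sigma=10.0):
    k = min(len(points_a), len(points_b))
    fa = modal_features(points_a, sigma, k)
    fb = modal_features(points_b, sigma, k).copy()
    # Crude per-mode sign fix (eigenvectors are defined only up to sign).
    for j in range(k):
        if np.linalg.norm(fa[:, j] - fb[:, j]) > np.linalg.norm(fa[:, j] + fb[:, j]):
            fb[:, j] = -fb[:, j]
    cost = np.linalg.norm(fa[:, None, :] - fb[None, :, :], axis=-1)
    return [(i, int(np.argmin(cost[i]))) for i in range(len(points_a))]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.random((8, 2)) * 50
    theta = np.deg2rad(30)                           # in-plane rotation plus translation
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    b = a @ R.T + np.array([5.0, -3.0])
    print(modal_match(a, b))                         # typically the identity pairing
```

Because the proximity matrix depends only on inter-point distances, in-plane rotations and translations leave the modal features unchanged, which is the property the abstract highlights.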
Medical Imaging 2000: Image Processing, Pts 1 and 2 | 2000
Yongyue Zhang; J. Michael Brady; Stephen M. Smith
The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain MR images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM model has an intrinsic limitation: no spatial information is taken into account. This causes the FM model to work only on well-defined images with low noise levels. In this paper, we propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by a Markov random field whose state sequence cannot be observed directly but can be inferred through a field of observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. To fit the HMRF model, an expectation-maximization (EM) algorithm is used. We show that by incorporating both the HMRF model and the EM algorithm into an HMRF-EM framework, an accurate and robust segmentation can be achieved, which is demonstrated by comparison experiments with the FM model-based segmentation.
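A compact sketch of the HMRF-EM loop, purely for illustration (the paper's formulation treats the E step and the MRF energy more carefully): class parameters are re-estimated from the current labelling, and labels are updated by combining a Gaussian data term with a neighbourhood smoothness term. Parameter values and the synchronous ICM-style update are my simplifications.

```python
# Simplified HMRF-EM style segmentation: alternate Gaussian parameter estimation
# with a label update that penalises disagreement with the 4-neighbourhood.
import numpy as np

def hmrf_em_segment(image, n_classes=3, beta=1.0, n_iters=10):
    img = image.astype(float)
    # Initialise labels by splitting the intensity range at histogram quantiles.
    edges = np.quantile(img, np.linspace(0, 1, n_classes + 1)[1:-1])
    labels = np.digitize(img, edges)

    for _ in range(n_iters):
        # "M step": per-class Gaussian parameters from the current labelling.
        means, varis = [], []
        for c in range(n_classes):
            vals = img[labels == c]
            if vals.size == 0:                       # guard against an emptied class
                vals = img.ravel()
            means.append(vals.mean())
            varis.append(vals.var() + 1e-6)

        # Label update: Gaussian data term plus an MRF term rewarding agreement
        # with the current labels of the four neighbours.
        energy = np.empty((n_classes,) + img.shape)
        for c in range(n_classes):
            data = 0.5 * np.log(2 * np.pi * varis[c]) + (img - means[c]) ** 2 / (2 * varis[c])
            agree = np.zeros_like(img)
            for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
                agree += (np.roll(labels, shift, axis=axis) == c)
            energy[c] = data + beta * (4 - agree)
        labels = np.argmin(energy, axis=0)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64), dtype=int)
    truth[:, 20:44] = 1
    truth[20:44, 20:44] = 2
    noisy = truth * 50.0 + rng.normal(0, 15.0, truth.shape)
    seg = hmrf_em_segment(noisy, n_classes=3, beta=1.5)
    print("agreement with ground truth:", float((seg == truth).mean()))
```

Setting beta to zero removes the neighbourhood term and reduces the update to a plain mixture-model classification, which mirrors the abstract's point that the FM model is a degenerate case of the HMRF model.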
Medical Image Computing and Computer-Assisted Intervention | 2007
Weiwei Zhang; J. Alison Noble; J. Michael Brady
We extend our static multimodal nonrigid registration to a spatio-temporal (2D+T) co-registration of a real-time 3D ultrasound sequence and a cardiovascular MR sequence. The motivation for our research is to assist clinicians by automatically fusing the information from multiple imaging modalities for the early diagnosis and therapy of cardiac disease. The deformation field between the two sequences is decoupled into spatial and temporal components. Temporal alignment is first performed to re-slice both sequences using a differential registration method. Spatial alignment is then carried out between the frames corresponding to the same temporal position. The spatial deformation is modeled by the polyaffine transformation, whose anchor points (or control points) are automatically detected and refined by calculating a local mismatch measure based on phase mutual information. The spatial alignment is built in an adaptive multi-scale framework to maximize the phase-based similarity measure by optimizing the parameters of the polyaffine transformation. Results demonstrate that this novel method can yield an accurate registration of particular cardiac regions.
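The polyaffine fusion step can be sketched as a weighted blend of local affine displacements around the anchor points. The full polyaffine framework fuses velocity fields rather than displacements, so the direct averaging below is a simplification for illustration, and all names and parameters are hypothetical.

```python
# Fuse several locally anchored affine transforms into one smooth 2D deformation
# by Gaussian-weighted averaging of their displacements.
import numpy as np

def polyaffine_displacement(points, anchors, affines, translations, sigma=20.0):
    """Weighted fusion of local affine displacements at the given 2D points."""
    pts = np.asarray(points, dtype=float)                       # (n, 2)
    disp = np.zeros_like(pts)
    weights = np.zeros(len(pts))
    for a, A, t in zip(anchors, affines, translations):
        w = np.exp(-np.sum((pts - a) ** 2, axis=1) / (2 * sigma ** 2))
        local = pts @ A.T + t - pts                             # displacement under this affine
        disp += w[:, None] * local
        weights += w
    return disp / (weights[:, None] + 1e-12)

if __name__ == "__main__":
    anchors = [np.array([10.0, 10.0]), np.array([50.0, 50.0])]
    affines = [np.eye(2), 1.1 * np.eye(2)]                      # identity vs. slight expansion
    translations = [np.array([2.0, 0.0]), np.array([0.0, 0.0])]
    grid = np.array([[10.0, 10.0], [30.0, 30.0], [50.0, 50.0]])
    print(polyaffine_displacement(grid, anchors, affines, translations))
```

Each Gaussian weight plays the role of an anchor point's region of influence; in the paper these anchors are placed and refined automatically using the phase-based mismatch measure.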
Image and Vision Computing | 1994
Roger Fawcett; Andrew Zisserman; J. Michael Brady
We demonstrate that the structure of a 3D point set with a single bilateral symmetry can be reconstructed from an uncalibrated affine image, modulo a Euclidean transformation, up to a four-parameter family of symmetric objects that could have given rise to the image. If the object has two orthogonal bilateral symmetries, its shape can be reconstructed, modulo a Euclidean transformation, up to a three-parameter family of symmetric shapes that could have given rise to the image. Furthermore, if the camera aspect ratio is known, the three-parameter family reduces to a single scale and the orientation of the object can be determined. These results are demonstrated using real images with uncalibrated cameras.
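A quick numerical check (not the paper's algorithm) of the geometric fact such reconstructions rest on: under an affine camera, the image difference vectors of bilaterally symmetric point pairs are all parallel, because X_i - S X_i always lies along the symmetry-plane normal n and therefore images to a multiple of M n. Variable names below are illustrative.

```python
# Verify numerically that symmetric point pairs project to parallel difference
# vectors under an arbitrary affine camera.
import numpy as np

rng = np.random.default_rng(3)
n = np.array([1.0, 0.0, 0.0])                    # unit normal of the symmetry plane x = 0
S = np.eye(3) - 2.0 * np.outer(n, n)             # reflection in that plane
M = rng.normal(size=(2, 3))                      # arbitrary 2x3 affine camera
t = rng.normal(size=2)

X = rng.normal(size=(6, 3)) + np.array([2.0, 0.0, 0.0])    # points off the plane
x, x_sym = X @ M.T + t, X @ S.T @ M.T + t                  # images of each symmetric pair
d = x - x_sym                                              # pair difference vectors
d_unit = d / np.linalg.norm(d, axis=1, keepdims=True)
print(np.round(np.abs(d_unit @ d_unit[0]), 6))             # all 1.0, i.e. all parallel
```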