
Publications


Featured research published by Matthew Toews.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Detection, Localization, and Sex Classification of Faces from Arbitrary Viewpoints and under Occlusion

Matthew Toews; Tal Arbel

This paper presents a novel framework for detecting, localizing, and classifying faces in terms of visual traits, e.g., sex or age, from arbitrary viewpoints and in the presence of occlusion. All three tasks are embedded in a general viewpoint-invariant model of object class appearance derived from local scale-invariant features, where features are probabilistically quantified in terms of their occurrence, appearance, geometry, and association with visual traits of interest. An appearance model is first learned for the object class, after which a Bayesian classifier is trained to identify the model features indicative of visual traits. The framework can be applied in realistic scenarios in the presence of viewpoint changes and partial occlusion, unlike other techniques assuming data that are single viewpoint, upright, prealigned, and cropped from background distraction. Experimentation establishes the first result for sex classification from arbitrary viewpoints, an equal error rate of 16.3 percent, based on the color FERET database. The method is also shown to work robustly on faces in cluttered imagery from the CMU profile database. A comparison with the geometry-free bag-of-words model shows that geometrical information provided by our framework improves classification. A comparison with support vector machines demonstrates that Bayesian classification results in superior performance.
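The Bayesian classification over model features described above can be sketched as a naive-Bayes log-odds accumulation: each model feature contributes evidence for or against a trait depending on whether it was matched in the image. This is a simplified, hypothetical illustration (the paper's full model also conditions on feature appearance and geometry, not just occurrence):

```python
import math

def trait_log_odds(observed, p_feat_given_trait, p_feat_given_other,
                   prior_trait=0.5):
    """Naive-Bayes log-odds of a visual trait (e.g. sex) given which
    model features were matched in the image.  `observed` maps feature
    id -> bool; the two dicts hold per-feature occurrence likelihoods
    under the trait class and its complement."""
    log_odds = math.log(prior_trait / (1.0 - prior_trait))
    for f, seen in observed.items():
        pt, po = p_feat_given_trait[f], p_feat_given_other[f]
        if seen:
            log_odds += math.log(pt / po)
        else:
            log_odds += math.log((1.0 - pt) / (1.0 - po))
    return log_odds
```

A positive result favors the trait class; thresholding at zero gives the maximum a posteriori decision under equal priors.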


NeuroImage | 2010

Feature-Based Morphometry: Discovering Group-related Anatomical Patterns

Matthew Toews; William M. Wells; D. Louis Collins; Tal Arbel

This paper presents feature-based morphometry (FBM), a new fully data-driven technique for discovering patterns of group-related anatomical structure in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between subjects, FBM explicitly aims to identify distinctive anatomical patterns that may only be present in subsets of subjects, due to disease or anatomical variability. The image is modeled as a collage of generic, localized image features that need not be present in all subjects. Scale-space theory is applied to analyze image features at the characteristic scale of underlying anatomical structures, instead of at arbitrary scales such as global or voxel-level. A probabilistic model describes features in terms of their appearance, geometry, and relationship to subject groups, and is automatically learned from a set of subject images and group labels. Features resulting from learning correspond to group-related anatomical structures that can potentially be used as image biomarkers of disease or as a basis for computer-aided diagnosis. The relationship between features and groups is quantified by the likelihood of feature occurrence within a specific group vs. the rest of the population, and feature significance is quantified in terms of the false discovery rate. Experiments validate FBM clinically in the analysis of normal control (NC) and Alzheimer's disease (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and an equal error classification rate of 0.80 is achieved for subjects aged 60-80 years exhibiting mild AD (CDR=1).
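The group-relatedness measure described above, the likelihood of feature occurrence within a group versus the rest of the population, can be sketched as a smoothed occurrence-rate ratio. This is an illustrative sketch only; the smoothing constant `alpha` is an assumption, not a parameter from the paper:

```python
def feature_group_likelihood_ratio(n_group, N_group, n_rest, N_rest,
                                   alpha=1.0):
    """Ratio of feature occurrence rate in a subject group vs. the rest
    of the population.  n_* = number of subjects in which the feature
    occurs, N_* = group sizes.  Add-alpha (Laplace) smoothing keeps the
    ratio well-defined when counts are zero."""
    p_group = (n_group + alpha) / (N_group + 2 * alpha)
    p_rest = (n_rest + alpha) / (N_rest + 2 * alpha)
    return p_group / p_rest
```

Features with a ratio far from 1 are candidate group-related structures; in FBM their significance is then assessed via the false discovery rate.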


Medical Image Analysis | 2013

Efficient and robust model-to-image alignment using 3D scale-invariant features

Matthew Toews; William M. Wells

This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down.


Computer Vision and Pattern Recognition | 2009

SIFT-Rank: Ordinal description for invariant feature correspondence

Matthew Toews; William M. Wells

This paper investigates ordinal image description for invariant feature correspondence. Ordinal description is a meta-technique which considers image measurements in terms of their ranks in a sorted array, instead of the measurement values themselves. Rank-ordering normalizes descriptors in a manner invariant under monotonic deformations of the underlying image measurements, and therefore serves as a simple, non-parametric substitute for ad hoc scaling and thresholding techniques currently used. Ordinal description is particularly well-suited for invariant features, as the high dimensionality of state-of-the-art descriptors permits a large number of unique rank-orderings, and the computationally complex step of sorting is only required once after geometrical normalization. Correspondence trials based on a benchmark data set show that in general, rank-ordered SIFT (SIFT-rank) descriptors outperform other state-of-the-art descriptors in terms of precision-recall, including standard SIFT and GLOH.
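The rank-ordering operation at the core of SIFT-Rank is simple to state in code: each descriptor value is replaced by its rank in the sorted array. A minimal, illustrative sketch (plain Python, not the authors' implementation):

```python
def rank_order(descriptor):
    """Replace each descriptor value by its rank in the sorted array.
    The result is invariant under any monotonic transform of the
    underlying measurements, which is the idea behind SIFT-Rank."""
    order = sorted(range(len(descriptor)), key=lambda i: descriptor[i])
    ranks = [0] * len(descriptor)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks
```

Because any monotonic rescaling of the measurements leaves the sort order unchanged, the rank-ordered descriptor is identical before and after such a deformation, which is what replaces ad hoc scaling and thresholding.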


Computer Vision and Pattern Recognition | 2010

Gender classification from unconstrained video sequences

Meltem Demirkus; Matthew Toews; James J. Clark; Tal Arbel

This paper presents the first investigation into the classification of faces from unconstrained video sequences in natural scenes, i.e., with arbitrary poses, facial expressions, occlusions, illumination conditions and motion blur. To overcome difficulties from individual frames, a novel Bayesian formulation is proposed to estimate the posterior probability of a face trait at a specific time, conditional on features identified in previous frames of a video sequence. A Markov model is used to represent temporal dependencies, and classification involves determining the maximum a posteriori class at a given time. Showing the robustness of the proposed system, the Bayesian framework is first trained on a database collected under controlled conditions, and then applied to the previously unseen faces obtained from an unconstrained video database. The Markovian temporal model results in a gender classification rate of 90% by the last video frame, and is shown to outperform alternative approaches previously introduced in the literature.
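The Markov temporal model described above amounts to a recursive Bayes filter: at each frame, the previous posterior is propagated through the transition matrix and reweighted by the frame's observation likelihoods. A hypothetical two-class sketch (the transition and likelihood values are placeholders, not the paper's learned parameters):

```python
def update_posterior(prior, transition, likelihood):
    """One step of a recursive Bayes filter over a binary trait (e.g.
    gender).  prior[i] = P(class i | frames so far); transition[j][i] =
    P(class i at t | class j at t-1); likelihood[i] = P(frame features |
    class i).  Returns the renormalized posterior."""
    predicted = [
        sum(transition[j][i] * prior[j] for j in range(2))
        for i in range(2)
    ]
    post = [likelihood[i] * predicted[i] for i in range(2)]
    z = sum(post)
    return [p / z for p in post]
```

Running this update frame by frame and taking the maximum a posteriori class at the final frame mirrors the classification rule described in the abstract.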


IEEE Transactions on Medical Imaging | 2007

A Statistical Parts-Based Model of Anatomical Variability

Matthew Toews; Tal Arbel

In this paper, we present a statistical parts-based model (PBM) of appearance, applied to the problem of modeling intersubject anatomical variability in magnetic resonance (MR) brain images. In contrast to global image models such as the active appearance model (AAM), the PBM consists of a collection of localized image regions, referred to as parts, whose appearance, geometry and occurrence frequency are quantified statistically. The parts-based approach explicitly addresses the case where one-to-one correspondence does not exist between all subjects in a population due to anatomical differences, as model parts are not required to appear in all subjects. The model is constructed through a fully automatic machine learning algorithm, identifying image patterns that appear with statistical regularity in a large collection of subject images. Parts are represented by generic scale-invariant features, and the model can, therefore, be applied to a wide variety of image domains. Experimentation based on 2-D MR slices shows that a PBM learned from a set of 102 subjects can be robustly fit to 50 new subjects with accuracy comparable to 3 human raters. Additionally, it is shown that unlike global models such as the AAM, PBM fitting is stable in the presence of unexpected, local perturbation.


International Journal of Biomedical Imaging | 2014

Robust initialization of active shape models for lung segmentation in CT scans: a feature-based atlas approach

Gurman Gill; Matthew Toews; Reinhard Beichel

Model-based segmentation methods have the advantage of incorporating a priori shape information into the segmentation process but suffer from the drawback that the model must be initialized sufficiently close to the target. We propose a novel approach for initializing an active shape model (ASM) and apply it to 3D lung segmentation in CT scans. Our method constructs an atlas consisting of a set of representative lung features and an average lung shape. The ASM pose parameters are found by transforming the average lung shape based on an affine transform computed from matching features between the new image and representative lung features. Our evaluation on a diverse set of 190 images showed an average dice coefficient of 0.746 ± 0.068 for initialization and 0.974 ± 0.017 for subsequent segmentation, based on an independent reference standard. The mean absolute surface distance error was 0.948 ± 1.537 mm. The initialization as well as segmentation results showed a statistically significant improvement compared to four other approaches. The proposed initialization method can be generalized to other applications employing ASM-based segmentation.
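The pose-initialization step described above, computing an affine transform from matched features and applying it to the average shape, can be sketched as a least-squares fit. This is an illustrative sketch under the assumption of point-location matches; the function name and setup are hypothetical:

```python
import numpy as np

def affine_from_matches(src, dst):
    """Least-squares 3D affine transform (A, t) mapping matched feature
    locations src -> dst, i.e. dst_i ~ A @ src_i + t.  Solving in
    homogeneous coordinates via lstsq handles noisy, overdetermined
    match sets.  src, dst: (N, 3) arrays, N >= 4 non-coplanar points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    sol, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (4, 3) solution
    A, t = sol[:3].T, sol[3]
    return A, t
```

The recovered (A, t) would then transform the atlas's average lung shape into the new image to pose the ASM before segmentation proper.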


International Conference on Pattern Recognition | 2006

Detection Over Viewpoint via the Object Class Invariant

Matthew Toews; Tal Arbel

In this article, we present a new model of object class appearance over viewpoint, based on learning a relationship between scale-invariant image features (e.g. SIFT) and a geometric structure that we refer to as an OCI (object class invariant). The OCI is a perspective invariant defined across instances of an object class, and thereby serves as a common reference frame relating features over viewpoint change and object class. A single probabilistic OCI model can be learned to capture the rich multimodal nature of object class appearance in the presence of viewpoint change, providing an efficient alternative to the popular approach of training a battery of detectors at separate viewpoints and/or poses. Experimentation demonstrates that an OCI model of faces can be learned from a small number of natural, cluttered images, and used to detect faces exhibiting a large degree of appearance variation due to viewpoint change and intra-class variability (i.e. (sun)glasses, ethnicity, expression, etc.).


International Conference on Pattern Recognition | 2006

Fundamental Matrix Estimation via TIP - Transfer of Invariant Parameters

Frank Riggi; Matthew Toews; Tal Arbel

The fundamental matrix (FM) represents the perspective transform between two or more uncalibrated images of a stationary scene, and is traditionally estimated based on 2-parameter point-to-point correspondences between image pairs. Recent invariant correspondence techniques, however, provide robust correspondences in terms of 4 to 6-parameter invariant regions. Such correspondences contain important information regarding scene geometry, information which is lost in FM estimation techniques based solely on 2-parameter point translation. In this article, we present a method of incorporating this additional information into point-based FM estimation routines, entitled TIP (transfer of invariant parameters). The TIP method transforms invariant correspondence parameters into additional point correspondences, which can be used with FM estimation routines. Experimentation shows that the TIP method results in more robust FM estimates in the case of sparse correspondence, and allows estimation based on as few as 3 correspondences in the case of affine-invariant features.
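The core TIP idea, turning a region's invariant parameters into extra point correspondences, can be sketched for the affine case: a matched affine-covariant region (center c, 2x2 matrix A) maps canonical offset points through x -> c + A x, yielding multiple point matches per region. An illustrative sketch with hypothetical names:

```python
def points_from_affine_region(center, A, offsets=((1, 0), (0, 1))):
    """Expand one affine-covariant region (center c, 2x2 matrix A) into
    point locations: the center plus the images of canonical offset
    points under x -> c + A @ x.  Applying this to both regions of a
    match turns one region correspondence into several point
    correspondences usable by any point-based FM estimator."""
    cx, cy = center
    pts = [(cx, cy)]
    for ox, oy in offsets:
        pts.append((cx + A[0][0] * ox + A[0][1] * oy,
                    cy + A[1][0] * ox + A[1][1] * oy))
    return pts
```

With two offset points per region, each affine region match contributes three point correspondences, which is why estimation from as few as 3 region matches becomes possible.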


Medical Image Computing and Computer-Assisted Intervention | 2005

Maximum a posteriori local histogram estimation for image registration

Matthew Toews; D. Louis Collins; Tal Arbel

Image similarity measures for registration can be considered within the general context of joint intensity histograms, which consist of bin count parameters estimated from image intensity samples. Many approaches to estimation are ML (maximum likelihood), which tends to be unstable in the presence of sparse data, resulting in registration that is driven by spurious noisy matches instead of valid intensity relationships. We propose instead a method of MAP (maximum a posteriori) estimation, which is well-defined for sparse data, or even in the absence of data. This estimator can incorporate a variety of prior assumptions, such as global histogram characteristics, or use a maximum entropy prior when no such assumptions exist. We apply our estimation method to deformable registration of MR (magnetic resonance) and US (ultrasound) images for an IGNS (image-guided neurosurgery) application, where our MAP estimation method results in more stable and accurate registration than a traditional ML approach.
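The MAP estimator described above can be sketched for the common Dirichlet-prior case: prior pseudo-counts are added to the observed bin counts before normalizing, so the estimate stays well-defined even with no data. An illustrative sketch (the uniform `alpha` default plays the role of a maximum entropy prior; specific values are assumptions):

```python
def map_histogram(counts, prior_counts=None, alpha=1.0):
    """MAP estimate of histogram bin probabilities under a Dirichlet
    prior: add pseudo-counts (uniform alpha unless explicit prior bin
    counts, e.g. from a global histogram, are supplied), then normalize.
    Unlike the ML estimate counts/sum(counts), this is well-defined even
    when `counts` is all zeros."""
    if prior_counts is None:
        prior_counts = [alpha] * len(counts)
    smoothed = [c + p for c, p in zip(counts, prior_counts)]
    total = sum(smoothed)
    return [s / total for s in smoothed]
```

With no samples at all, the estimate falls back to the (normalized) prior, which is what makes registration stable where sparse bins would otherwise produce spurious matches.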

Collaboration


Matthew Toews's top co-authors:

- William M. Wells (Brigham and Women's Hospital)
- Christian Desrosiers (École de technologie supérieure)
- Polina Golland (Massachusetts Institute of Technology)
- D. Louis Collins (Montreal Neurological Institute and Hospital)
- Alexandra J. Golby (Brigham and Women's Hospital)
- Kuldeep Kumar (École de technologie supérieure)
- Bruno Madore (Brigham and Women's Hospital)
- Jie Luo (Massachusetts Institute of Technology)