
Publications


Featured research published by Matthew R. Boutell.


Pattern Recognition | 2004

Learning multi-label scene classification

Matthew R. Boutell; Jiebo Luo; Xipeng Shen; Christopher M. Brown

In classic pattern recognition problems, classes are mutually exclusive by definition. Classification errors occur when the classes overlap in the feature space. We examine a different situation, occurring when the classes are, by definition, not mutually exclusive. Such problems arise in semantic scene and document classification and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a field scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a different treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classification; furthermore, our work appears to generalize to other classification problems of the same nature.
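The example-based evaluation the abstract describes can be sketched in a few lines. This is a minimal illustration only: the function name and the exact metric definitions (per-example precision, recall, and Jaccard-style accuracy, averaged over examples) are assumptions in the spirit of the paper, not its precise formulations.

```python
# Example-based metrics for multi-label classification, sketched after
# Boutell et al. (2004). Definitions here are illustrative; the paper's
# exact formulations may differ in detail.

def example_metrics(true_labels, pred_labels):
    """Average per-example precision, recall, and accuracy (Jaccard)."""
    p = r = a = 0.0
    n = len(true_labels)
    for t, y in zip(true_labels, pred_labels):
        t, y = set(t), set(y)
        inter = len(t & y)
        p += inter / len(y) if y else 0.0
        r += inter / len(t) if t else 0.0
        a += inter / len(t | y) if (t | y) else 1.0
    return p / n, r / n, a / n

# A field scene with a mountain in the background carries two labels.
truth = [{"field", "mountain"}, {"beach"}]
preds = [{"field"}, {"beach", "sunset"}]
precision, recall, accuracy = example_metrics(truth, preds)
```

Averaging per example (rather than per class) is what lets a partially correct label set earn partial credit, which is the point of the new metrics.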


Pattern Recognition | 2005

Beyond pixels: Exploiting camera metadata for photo classification

Matthew R. Boutell; Jiebo Luo

Semantic scene classification based only on low-level vision cues has had limited success on unconstrained image sets. On the other hand, camera metadata related to capture conditions provide cues independent of the captured scene content that can be used to improve classification performance. We consider three problems, indoor-outdoor classification, sunset detection, and manmade-natural classification. Analysis of camera metadata statistics for images of each class revealed that metadata fields, such as exposure time, flash fired, and subject distance, are most discriminative for each problem. A Bayesian network is employed to fuse content-based and metadata cues in the probability domain and degrades gracefully even when specific metadata inputs are missing (a practical concern). Finally, we provide extensive experimental results on the three problems using content-based and metadata cues to demonstrate the efficacy of the proposed integrated scene classification scheme.
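The graceful degradation under missing metadata can be illustrated with a naive-Bayes-flavored sketch: multiply in a likelihood for each metadata cue that is present and simply skip absent ones. The cue names and all probabilities below are made up for illustration; the paper learns a full Bayesian network rather than this simplified product.

```python
import math

# Hypothetical per-cue likelihoods P(cue value | class). In the paper these
# would come from learned Bayesian-network tables; these numbers are invented.
LIKELIHOODS = {
    "flash_fired": {True: {"indoor": 0.7, "outdoor": 0.1},
                    False: {"indoor": 0.3, "outdoor": 0.9}},
    "long_exposure": {True: {"indoor": 0.6, "outdoor": 0.2},
                      False: {"indoor": 0.4, "outdoor": 0.8}},
}

def fuse(content_posterior, metadata):
    """Combine a content-based posterior with whatever metadata is present.

    Missing metadata fields (value None) are skipped, so the fusion
    degrades gracefully toward the content-only answer.
    """
    log_score = {c: math.log(p) for c, p in content_posterior.items()}
    for cue, value in metadata.items():
        if cue in LIKELIHOODS and value is not None:
            for c in log_score:
                log_score[c] += math.log(LIKELIHOODS[cue][value][c])
    z = sum(math.exp(s) for s in log_score.values())
    return {c: math.exp(s) / z for c, s in log_score.items()}

# Content cues alone are ambiguous; the flash cue tips it toward indoor,
# and the missing exposure cue is ignored.
posterior = fuse({"indoor": 0.5, "outdoor": 0.5},
                 {"flash_fired": True, "long_exposure": None})
```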


Computer Vision and Pattern Recognition | 2004

Bayesian fusion of camera metadata cues in semantic scene classification

Matthew R. Boutell; Jiebo Luo

Semantic scene classification based only on low-level vision cues has had limited success on unconstrained image sets. On the other hand, camera metadata related to capture conditions provide cues independent of the captured scene content that can be used to improve classification performance. We consider two problems: indoor-outdoor classification and sunset detection. Analysis of camera metadata statistics for images of each class revealed that metadata fields, such as exposure time, flash fired, and subject distance, are most discriminative for both indoor-outdoor and sunset classification. A Bayesian network is employed to fuse content-based and metadata cues in the probability domain and degrades gracefully, even when specific metadata inputs are missing (a practical concern). Finally, we provide extensive experimental results on the two problems, using content-based and metadata cues to demonstrate the efficacy of the proposed integrated scene classification scheme.


Electronic Imaging | 2003

Multilabel machine learning and its application to semantic scene classification

Xipeng Shen; Matthew R. Boutell; Jiebo Luo; Christopher M. Brown

In classic pattern recognition problems, classes are mutually exclusive by definition. Classification errors occur when the classes overlap in the feature space. We examine a different situation, occurring when the classes are, by definition, not mutually exclusive. Such problems arise in scene and document classification and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a field scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a different treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classification; furthermore, our work appears to generalize to other classification problems of the same nature.


IEEE Signal Processing Magazine | 2006

Pictures are not taken in a vacuum - an overview of exploiting context for semantic scene content understanding

Jiebo Luo; Matthew R. Boutell; Christopher M. Brown

Considerable research has been devoted to the problem of multimedia indexing and retrieval in the past decade. However, limited by the state of the art in image understanding, the majority of existing content-based image retrieval (CBIR) systems have taken a relatively low-level approach and fallen short of higher-level interpretation and knowledge. Recent research has begun to focus on bridging the semantic and conceptual gap that exists between man and computer by integrating knowledge-based techniques, human perception, scene content understanding, psychology, and linguistics. In this article, we provide an overview of exploiting context for semantic scene content understanding.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Automatic image orientation detection via confidence-based integration of low-level and semantic cues

Jiebo Luo; Matthew R. Boutell

Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture. Discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, which is impressive given the findings of a recent psychophysical study. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.


International Conference on Pattern Recognition | 2004

Photo classification by integrating image content and camera metadata

Matthew R. Boutell; Jiebo Luo

Despite years of research, semantic classification of unconstrained photos is still an open problem. Existing systems have only used features derived from the image content. However, Exif metadata recorded by the camera provides cues independent of the scene content that can be exploited to improve classification accuracy. Using the problem of indoor-outdoor classification as an example, analysis of metadata statistics for each class revealed that exposure time, flash use, and subject distance are salient cues. We use a Bayesian network to integrate heterogeneous (content-based and metadata) cues in a robust fashion. Based on extensive experimental results, we make two observations: (1) adding metadata to content-based cues gives the highest accuracy; and (2) metadata cues alone can outperform content-based cues alone for certain applications, leading to a system with high performance, yet requiring very little computational overhead. The benefit of incorporating metadata cues can be expected to generalize to other scene classification problems.
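Turning raw Exif fields into classifier cues can be sketched as simple discretization. The tag names (ExposureTime, Flash, SubjectDistance) are standard Exif tags, and the low bit of the Flash tag indicates whether the flash fired; the thresholds, however, are illustrative assumptions, not the values learned in the paper.

```python
def metadata_cues(exif):
    """Discretize decoded Exif fields (a plain dict, e.g. from an Exif
    reader) into binary cues for indoor-outdoor classification.

    Thresholds are illustrative, not the paper's learned values. Missing
    fields yield None so a downstream fuser can skip them.
    """
    cues = {}
    exposure = exif.get("ExposureTime")          # seconds
    cues["long_exposure"] = (exposure > 1 / 60) if exposure is not None else None
    flash = exif.get("Flash")
    cues["flash_fired"] = bool(flash & 0x1) if flash is not None else None  # bit 0: fired
    dist = exif.get("SubjectDistance")           # meters
    cues["far_subject"] = (dist > 3.0) if dist is not None else None
    return cues

# Slow-ish exposure with flash fired; subject distance not recorded.
cues = metadata_cues({"ExposureTime": 1 / 30, "Flash": 0x19})
```

Because the cues cost only a dictionary lookup per image, this is the "very little computational overhead" path the abstract contrasts with content-based features.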


Systems, Man, and Cybernetics | 2005

Image transform bootstrapping and its applications to semantic scene classification

Jiebo Luo; Matthew R. Boutell; Robert T. Gray; Christopher M. Brown

The performance of an exemplar-based scene classification system depends largely on the size and quality of its set of training exemplars, which can be limited in practice. In addition, in nontrivial data sets, variations in scene content as well as distracting regions may exist in many testing images to prohibit good matches with the exemplars. Various boosting schemes have been proposed in machine learning, focusing on the feature space. We introduce the novel concept of image-transform bootstrapping using transforms in the image space to address such issues. In particular, three major schemes are described for exploiting this concept to augment training, testing, and both. We have successfully applied it to three applications of increasing difficulty: sunset detection, outdoor scene classification, and automatic image orientation detection. It is shown that appropriate transforms and meta-classification methods can be selected to boost performance according to the domain of the problem and the features/classifier used.
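The test-time side of image-transform bootstrapping can be sketched as classifying several transformed variants of an image and combining their scores. The two transforms and the averaging meta-classifier below are illustrative stand-ins; the paper selects transforms and combination rules per problem domain and classifier.

```python
# Sketch of test-time image-transform bootstrapping: classify an image and
# transformed variants of it, then combine the scores. Transforms and the
# averaging rule are illustrative; images are lists of pixel rows.

def mirror(image):
    """Horizontal mirror of the image."""
    return [list(reversed(row)) for row in image]

def crop_center(image):
    """Drop the outer border, simulating removal of distracting regions."""
    return [row[1:-1] for row in image[1:-1]]

def bootstrap_score(image, classifier, transforms=(mirror, crop_center)):
    """Average the classifier's score over the image and its variants."""
    variants = [image] + [t(image) for t in transforms]
    return sum(classifier(v) for v in variants) / len(variants)

# Toy classifier: "sunset confidence" = mean pixel intensity.
toy = lambda img: sum(sum(row) for row in img) / sum(len(row) for row in img)
image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
score = bootstrap_score(image, toy)
```

The same idea applied at training time simply adds the transformed variants to the exemplar set, which is how a limited training set is augmented.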


Computer Vision and Pattern Recognition | 2006

Factor Graphs for Region-based Whole-scene Classification

Matthew R. Boutell; Jiebo Luo; Christopher M. Brown

Semantic scene classification is still a challenging problem in computer vision. In contrast to the common approach of using low-level features computed from the scene, our approach uses explicit semantic object detectors and scene configuration models. To overcome faulty semantic detectors, it is critical to develop a region-based, generative model of outdoor scenes based on characteristic objects in the scene and spatial relationships between them. Since a fully connected scene configuration model is intractable, we chose to model pairwise relationships between regions and estimate scene probabilities using loopy belief propagation on a factor graph. We demonstrate the promise of this approach on a set of over 2000 outdoor photographs, comparing it with existing discriminative approaches and those using low-level features.
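The core of the generative model, unary detector beliefs times pairwise spatial compatibilities, can be shown on a toy two-region graph small enough to score every labeling exactly. All labels, beliefs, and compatibility values below are invented for illustration; real scenes make this enumeration intractable, which is why the paper runs loopy belief propagation on a factor graph instead.

```python
from itertools import product

LABELS = ["sky", "grass", "water"]

# Faulty detector beliefs for two regions; region 0 lies above region 1.
UNARY = [
    {"sky": 0.5, "grass": 0.1, "water": 0.4},
    {"sky": 0.2, "grass": 0.5, "water": 0.3},
]

def above_compat(top, bottom):
    """Pairwise compatibility for the 'above' relation: sky above grass
    is plausible, grass above sky is not. Values are illustrative."""
    table = {("sky", "grass"): 1.0, ("sky", "water"): 0.8,
             ("grass", "water"): 0.3}
    return table.get((top, bottom), 0.1)

def best_labeling():
    """Exhaustively score every joint labeling of the two regions."""
    scores = {
        (a, b): UNARY[0][a] * UNARY[1][b] * above_compat(a, b)
        for a, b in product(LABELS, repeat=2)
    }
    return max(scores, key=scores.get)

labeling = best_labeling()
```

Note how the spatial model overrules the second detector's ambiguity: the pair (sky, grass) wins even though neither unary belief is decisive on its own.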


International Conference on Multimedia and Expo | 2005

Improved semantic region labeling based on scene context

Matthew R. Boutell; Jiebo Luo; Christopher M. Brown

Semantic region labeling in outdoor scenes, e.g., identifying sky, grass, foliage, water, and snow, facilitates content-based image retrieval, organization, and enhancement. A major limitation of current object detectors is the significant number of misclassifications due to the similarities in color and texture characteristics of various object types and the lack of context information. Building on previous work on spatial context-aware object detection, we have developed a further improved system by modeling and enforcing spatial context constraints specific to individual scene types. In particular, the scene context, in the form of factor graphs, is learned and subsequently used via MAP estimation to reduce misclassification by constraining the object detection beliefs to conform to the spatial context models. Experimental results show that the richer spatial context models improve the accuracy of object detection over the individual object detectors and the general outdoor scene model.

Collaboration


Dive into Matthew R. Boutell's collaborations.

Top Co-Authors

Jiebo Luo

Eastman Kodak Company

Xipeng Shen

North Carolina State University

David Fisher

Rose-Hulman Institute of Technology

Ali Almajed

Rose-Hulman Institute of Technology

Brian Ayers

Rose-Hulman Institute of Technology