Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John Stutz is active.

Publication


Featured research published by John Stutz.


international conference on machine learning | 1988

AutoClass: a Bayesian classification system

Peter Cheeseman; James Kelly; Matthew Self; John Stutz; Will Taylor; Don Freeman

This paper describes AutoClass II, a program for automatically discovering (inducing) classes from a database, based on a Bayesian statistical technique which automatically determines the most probable number of classes, their probabilistic descriptions, and the probability that each object is a member of each class. AutoClass has been tested on several large, real databases and has discovered previously unsuspected classes. There is no doubt that these classes represent new phenomena.
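The class-discovery idea behind AutoClass is finite mixture modeling: each class is a probability distribution, and class membership is probabilistic. The following is a minimal illustrative sketch of that idea, a two-class 1-D Gaussian mixture fitted by EM. It is not the AutoClass implementation: AutoClass handles mixed discrete/real attributes, searches over the number of classes, and rates models by approximate marginal posterior probability, none of which this toy shows.

```python
import math
import random

random.seed(0)
# Synthetic 1-D data drawn from two hypothetical classes.
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def em_two_class(xs, iters=50):
    """Fit a two-class, unit-variance Gaussian mixture by EM."""
    pi, mu = 0.5, [min(xs), max(xs)]  # mixing weight and class means
    for _ in range(iters):
        # E-step: probability that each case belongs to class 0.
        resp = []
        for x in xs:
            p0 = pi * math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = (1 - pi) * math.exp(-0.5 * (x - mu[1]) ** 2)
            resp.append(p0 / (p0 + p1))
        # M-step: re-estimate weight and means from the memberships.
        n0 = sum(resp)
        pi = n0 / len(xs)
        mu[0] = sum(r * x for r, x in zip(resp, xs)) / n0
        mu[1] = sum((1 - r) * x for r, x in zip(resp, xs)) / (len(xs) - n0)
    return pi, mu

pi, mu = em_two_class(data)
```

The recovered means land near the true class centers (0 and 5), and each case keeps a soft membership probability rather than a hard class label, matching the paper's "probability that each object is a member of each class."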


Archive | 1996

Super-Resolved Surface Reconstruction from Multiple Images

Peter Cheeseman; Bob Kanefsky; Richard Kraft; John Stutz; Robin Hanson

This paper describes a Bayesian method for constructing a super-resolved surface model by combining information from a set of images of the given surface. We develop the theory and algorithms in detail for the 2-D reconstruction problem, appropriate for the case where all images are taken from roughly the same direction and under similar lighting conditions. We show the results of this 2-D reconstruction on Viking Martian data. These results show dramatic improvements in both spatial and gray-scale resolution. The Bayesian approach uses a neighbor correlation model as well as pixel data from the image set. Some extensions of this method are discussed, including 3-D surface reconstruction and the resolution of diffraction blurred images.


Journal of Artificial Intelligence Research | 1997

When gravity fails: local search topology

Jeremy Frank; Peter Cheeseman; John Stutz

Local search algorithms for combinatorial search problems frequently encounter a sequence of states in which it is impossible to improve the value of the objective function; moves through these regions, called plateau moves, dominate the time spent in local search. We analyze and characterize plateaus for three different classes of randomly generated Boolean Satisfiability problems. We identify several interesting features of plateaus that impact the performance of local search algorithms. We show that local minima tend to be small but occasionally may be very large. We also show that local minima can be escaped without unsatisfying a large number of clauses, but that systematically searching for an escape route may be computationally expensive if the local minimum is large. We show that plateaus with exits, called benches, tend to be much larger than minima, and that some benches have very few exit states which local search can use to escape. We show that the solutions (i.e., global minima) of randomly generated problem instances form clusters, which behave similarly to local minima. We revisit several enhancements of local search algorithms and explain their performance in light of our results. Finally we discuss strategies for creating the next generation of local search algorithms.
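The plateau phenomenon described above can be observed directly with a GSAT-style greedy local search on a random 3-SAT instance: count how many moves leave the number of unsatisfied clauses unchanged. This is a hypothetical instrumentation sketch, not the paper's experimental setup or problem sizes.

```python
import random

random.seed(2)
n_vars, n_clauses = 20, 85  # clause/variable ratio near the hard region
clauses = [tuple(random.choice([v, -v])
                 for v in random.sample(range(1, n_vars + 1), 3))
           for _ in range(n_clauses)]

def unsat_count(assign):
    """Objective function: number of clauses with no true literal."""
    return sum(not any((lit > 0) == assign[abs(lit)] for lit in c)
               for c in clauses)

assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
plateau_moves = total_moves = 0
for _ in range(500):
    cur = unsat_count(assign)
    if cur == 0:
        break  # all clauses satisfied
    # Greedy flip: pick the variable whose flip yields the lowest
    # objective value (ties broken by the random visit order).
    best_v, best_score = None, None
    for v in random.sample(range(1, n_vars + 1), n_vars):
        assign[v] = not assign[v]
        s = unsat_count(assign)
        assign[v] = not assign[v]
        if best_score is None or s < best_score:
            best_v, best_score = v, s
    assign[best_v] = not assign[best_v]
    total_moves += 1
    if best_score == cur:
        plateau_moves += 1  # a sideways (plateau) move
```

On instances near the satisfiability phase transition, the `plateau_moves` counter typically accounts for a large share of `total_moves`, which is the behavior the paper analyzes in detail.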


Archive | 1996

Autoclass — A Bayesian Approach to Classification

John Stutz; Peter Cheeseman

We describe a Bayesian approach to the unsupervised discovery of classes in a set of cases, sometimes called finite mixture separation or clustering. The main difference between clustering and our approach is that we search for the “best” set of class descriptions rather than grouping the cases themselves. We describe our classes in terms of probability distribution or density functions, and the locally maximal posterior probability parameters. We rate our classifications with an approximate posterior probability of the distribution function w.r.t. the data, obtained by marginalizing over all the parameters. Approximation is necessitated by the computational complexity of the joint probability, and our marginalization is w.r.t. a local maxima in the parameter space. This posterior probability rating allows direct comparison of alternate density functions that differ in number of classes and/or individual class density functions.


BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: 24th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering | 2004

On The Relationship between Bayesian and Maximum Entropy Inference

Peter Cheeseman; John Stutz

We investigate Bayesian and Maximum Entropy methods for doing inference under uncertainty. This investigation is primarily through concrete examples that have been previously investigated in the literature. We find that it is possible to do Bayesian and MaxEnt inference using the same information, despite claims to the contrary, and that they lead to different results. We find that these differences are due to the Bayesian inference not assuming anything beyond the given prior probabilities and the data, whereas MaxEnt implicitly makes strong independence assumptions, and assumes that the given constraints are the only ones operating. We also show that maximum likelihood and maximum a posteriori estimators give different and misleading estimates in our examples compared to posterior mean estimates. We generalize the classic method of maximum entropy inference to allow for uncertainty in the constraint values. This generalized MaxEnt (GME) makes MaxEnt inference applicable to a much wider range of problems...
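The gap between point estimators and the posterior mean is easy to see in a small worked example. This is a hypothetical illustration chosen for simplicity, not one of the paper's examples: after observing 0 successes in 3 Bernoulli trials under a uniform Beta(1, 1) prior, the posterior is Beta(1, 4).

```python
# Posterior Beta(alpha, beta) after 0 successes in 3 trials
# with a uniform prior (hypothetical example, not from the paper).
alpha, beta = 1, 4

# MAP estimate = posterior mode = (alpha - 1) / (alpha + beta - 2);
# here it coincides with the maximum likelihood estimate.
map_est = (alpha - 1) / (alpha + beta - 2)   # 0.0

# Posterior mean = alpha / (alpha + beta).
mean_est = alpha / (alpha + beta)            # 0.2
```

The MAP/ML estimate of 0.0 claims the event is impossible after only three trials, while the posterior mean of 0.2 retains the uncertainty, illustrating the kind of misleading behavior of mode-based estimators that the abstract describes.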


computer vision and pattern recognition | 2004

Modeling Images of Natural 3D Surfaces: Overview and Potential Applications

André Jalobeanu; Frank O. Kuehnel; John Stutz

Generative models of natural images have long been used in computer vision. However, since they only describe the statistics of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering, that accounts for the physics of image generation. We focus on the computation of the posterior model parameter densities, and on the critical aspects of the rendering. We also discuss how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.


BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: 25th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering | 2005

Generalized Maximum Entropy

Peter Cheeseman; John Stutz

A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes, is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g. a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explic...
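The propagation step described above can be sketched numerically: treat classic MaxEnt as a function from the constraint value to a distribution, then push a Gaussian density over the constraint value through it. The example below uses a three-outcome variable with a mean constraint; the specific numbers and the Monte Carlo propagation are illustrative assumptions, not the paper's construction.

```python
import math
import random

def maxent_3(mu):
    """Classic MaxEnt distribution on outcomes {0, 1, 2} subject to a
    mean constraint mu: p_k proportional to exp(lam * k), with the
    Lagrange multiplier lam found by bisection."""
    def mean(lam):
        w = [math.exp(lam * k) for k in range(3)]
        return sum(k * wk for k, wk in enumerate(w)) / sum(w)
    lo, hi = -50.0, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if mean(mid) < mu:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * k) for k in range(3)]
    z = sum(w)
    return [wk / z for wk in w]

# Generalized MaxEnt idea: the constraint value is uncertain
# (here Gaussian around 1.2), so the MaxEnt probabilities acquire
# a distribution rather than being single point values.
random.seed(1)
samples = [maxent_3(min(1.9, max(0.1, random.gauss(1.2, 0.1))))
           for _ in range(300)]
p2_mean = sum(s[2] for s in samples) / len(samples)
```

With an exact constraint of 1.0 the classic answer is the uniform distribution; with an uncertain constraint, each probability such as `p2_mean` comes with spread inherited from the constraint density, which is the posterior-density behavior the abstract attributes to GME.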


BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: 25th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering | 2005

Experience With Bayesian Image Based Surface Modeling

John Stutz

Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional visual surface recovery techniques. But it offers the possibility of generating a single surface model that fuses all available information, from any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.


knowledge discovery and data mining | 1996

Bayesian classification (AutoClass): theory and results

Peter Cheeseman; John Stutz


national conference on artificial intelligence | 1988

Bayesian classification

Peter Cheeseman; Matthew Self; Jim Kelly; Will Taylor; Don Freeman; John Stutz

Collaboration


Dive into John Stutz's collaborations.

Top Co-Authors

Robin Hanson
George Mason University

Kevin Volk
Space Telescope Science Institute

Helen J. Walker
Rutherford Appleton Laboratory