Gerhard Winkler
Ludwig Maximilian University of Munich
Publications
Featured research published by Gerhard Winkler.
Archive | 1995
Gerhard Winkler
In this chapter, basic properties of maximum likelihood estimators are derived and a useful generalization is obtained. Only results for general finite spaces X are presented. Parameter estimation for Gibbs fields is discussed in the next chapter.
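The flavour of such estimators on a finite space can be sketched in a few lines of Python. This is a hypothetical illustration, not taken from the chapter: for a categorical model on a finite space, the maximum likelihood estimate of each state probability is simply its empirical relative frequency (all names and numbers below are invented).

```python
import numpy as np

# Hypothetical illustration (X, true_p, p_hat are invented names): for a
# categorical model on a finite space, the maximum likelihood estimate of
# each probability is the empirical relative frequency.
rng = np.random.default_rng(0)
X = np.array(["a", "b", "c"])            # finite state space (assumed)
true_p = np.array([0.5, 0.3, 0.2])       # assumed data-generating law
data = rng.choice(X, size=10_000, p=true_p)

counts = np.array([(data == x).sum() for x in X])
p_hat = counts / counts.sum()            # the MLE: relative frequencies
print(p_hat)
```

With 10,000 samples the estimate lands close to the generating probabilities, which is the consistency property the chapter makes precise.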
Mathematische Nachrichten | 2000
Gerhard Winkler
The paper deals with sets of distributions which are given by moment conditions for the distributions and convex constraints on derivatives of their c.d.f.s. A general albeit simple method for the study of their extremal structure, extremal decomposition, and topological or measure-theoretical properties is developed. Its power is demonstrated by application to bell-shaped distributions. Extreme points of their moment sets are characterized completely (thus filling a gap in the previous theory), and inequalities of Tchebysheff type are derived by means of general integral representation theorems. Key words: moment sets, Tchebysheff inequalities, extremal bell-shaped distributions.
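As a quick numerical sanity check of the classical inequality of this type, the following sketch (assuming a Gaussian, hence bell-shaped, law; all parameters invented) confirms that the empirical tail mass never exceeds the Tchebysheff bound 1/k².

```python
import numpy as np

# Sketch under an assumed Gaussian (hence bell-shaped) law: the empirical
# tail mass P(|X - mu| >= k*sigma) stays below the classical bound 1/k**2.
rng = np.random.default_rng(1)
samples = rng.normal(loc=0.0, scale=2.0, size=100_000)
mu, sigma = samples.mean(), samples.std()

for k in (1.5, 2.0, 3.0):
    tail = np.mean(np.abs(samples - mu) >= k * sigma)
    print(k, tail, 1.0 / k**2)           # empirical tail vs. the bound
    assert tail <= 1.0 / k**2
```

For bell-shaped laws the paper's sharper inequalities improve considerably on this generic bound; the check above only illustrates the generic one.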
Archive | 1990
Heinrich von Weizsäcker; Gerhard Winkler
Stochastic integral processes will be constructed by approximation. One first defines the integral process in a straightforward manner for elementary integrands. Then one shows that the approximation of more general integrands by elementary ones yields a convergent sequence of the corresponding integral processes. The details depend on the choice of the respective spaces and concepts of convergence. In this chapter we discuss the part of the arguments which can be formulated without explicit reference to stochastic integrals. If you have ploughed through this somewhat technical chapter, the way to the stochastic integral is open.
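The first, elementary step of this construction can be sketched numerically. In the toy illustration below (all values invented, assuming a standard Brownian integrator), the integral of a piecewise constant integrand is a finite sum over the partition, and the Itô isometry gives a handle on the second moment used in the convergence arguments.

```python
import numpy as np

# Toy numerical version of the first construction step (values invented):
# the stochastic integral of an elementary, piecewise constant integrand H
# against a Brownian path is a finite sum over the partition.
rng = np.random.default_rng(2)
ts = np.linspace(0.0, 1.0, 6)                 # partition 0 = t_0 < ... < t_5 = 1
h = np.array([1.0, -0.5, 2.0, 0.0, 1.5])      # value of H on each subinterval
dB = rng.normal(0.0, np.sqrt(np.diff(ts)))    # increments B_{t_{i+1}} - B_{t_i}
integral = np.sum(h * dB)                     # sum_i h_i (B_{t_{i+1}} - B_{t_i})

# Ito isometry as a Monte Carlo check: E[I^2] = sum_i h_i^2 (t_{i+1} - t_i)
many_dB = rng.normal(0.0, np.sqrt(np.diff(ts)), size=(200_000, 5))
emp = np.mean((many_dB * h).sum(axis=1) ** 2)
print(emp, np.sum(h**2 * np.diff(ts)))        # empirical vs. exact second moment
```

It is exactly this isometry that lets the approximation of general integrands by elementary ones carry over to the integral processes.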
Archive | 2003
Gerhard Winkler
Relaxation techniques can also be considered in continuous space and/or time. Sampling and annealing are embedded into the framework of continuous-time Markov and diffusion processes. It would take quite a bit of space and time to present such advanced topics. Therefore, we just sketch some basic ideas and indicate some of their implications.
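One of the basic ideas can be made concrete in a few lines. The sketch below (energy U, step size, and run length are all invented, not from the text) is an Euler-Maruyama discretization of the Langevin diffusion dX_t = -U'(X_t) dt + sqrt(2/β) dW_t, whose stationary law is proportional to exp(-βU(x)): the continuous-space analogue of sampling from a Gibbs field.

```python
import numpy as np

# Hedged sketch (U, dt and the run length are invented): Euler-Maruyama
# discretization of the Langevin diffusion
#   dX_t = -U'(X_t) dt + sqrt(2 / beta) dW_t,
# whose stationary law is proportional to exp(-beta * U(x)).
rng = np.random.default_rng(3)

def grad_U(x):
    return x                               # toy energy U(x) = x**2 / 2

beta, dt, n_steps = 1.0, 1e-3, 1_000_000
sd = np.sqrt(2.0 * dt / beta)
noise = rng.normal(size=n_steps)

x, samples = 0.0, []
for step in range(n_steps):
    x += -grad_U(x) * dt + sd * noise[step]
    if step > n_steps // 2:                # discard a burn-in phase
        samples.append(x)

print(np.var(samples))                     # near 1/beta = 1 for this toy U
```

Annealing in continuous time then corresponds to letting β grow slowly along the run, in analogy with the discrete-time schedules of the earlier chapters.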
Archive | 2003
Gerhard Winkler
In this chapter, the Gibbs sampler is established and simulated annealing based on the Gibbs sampler is discussed. This is sufficient for many applications in imaging, like sampling from a Gibbs field or the computation of posterior minimum mean square or maximum posterior mode estimates. The Gibbs sampler is a Markov chain constructed from conditional distributions of the target Gibbs field. Hence the space X of configurations x will be the product of finite state spaces X_s, s ∈ S, with a finite set S of sites.
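A minimal sketch of such a sampler, assuming a toy Ising-type field on a small grid (grid size, inverse temperature, and sweep count are invented): each site is updated in turn from its conditional distribution given its neighbours, which for this model is a logistic function of the local field.

```python
import numpy as np

# Minimal Gibbs sampler sketch for a toy Ising-type Gibbs field on a small
# grid (L, beta and the number of sweeps are invented). Each site is drawn
# from its conditional distribution given the current neighbour values.
rng = np.random.default_rng(4)
L, beta, sweeps = 8, 0.3, 500
x = rng.choice([-1, 1], size=(L, L))

for _ in range(sweeps):
    for i in range(L):
        for j in range(L):
            nb = 0                          # local field: sum of neighbours
            if i > 0: nb += x[i - 1, j]
            if i < L - 1: nb += x[i + 1, j]
            if j > 0: nb += x[i, j - 1]
            if j < L - 1: nb += x[i, j + 1]
            # P(x_ij = +1 | neighbours) for the Ising conditional
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
            x[i, j] = 1 if rng.random() < p_plus else -1

print(x.mean())                             # empirical magnetization
```

Running such sweeps long enough produces (approximate) samples from the Gibbs field; annealing replaces the fixed β by an increasing schedule.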
Archive | 2003
Gerhard Winkler
In this chapter, basic properties of estimators are collected. Gibbs fields are examined in the next chapter. Since the product structure of the sample space does not play any role for these considerations, let X be any finite set.
Archive | 1995
Gerhard Winkler
We focus now on maximum likelihood estimators for Markov random field models. This amounts to the study of exponential families on finite spaces X, as in the last chapter, with the difference that the product structure of these spaces now plays a crucial role.
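For exponential families the likelihood equation is moment matching: the expected sufficient statistic under the fitted parameter must equal its observed mean. The sketch below (a one-parameter family p_θ(x) ∝ exp(θ·t(x)) on an invented finite space, with an invented data mean) solves it by gradient ascent on the concave log-likelihood.

```python
import numpy as np

# Sketch for a one-parameter exponential family p_theta(x) ~ exp(theta*t(x))
# on a finite space (the space, statistic and data mean are invented). The
# likelihood equation is moment matching, E_theta[t] = observed mean of t,
# solved here by gradient ascent on the concave log-likelihood.
X = np.array([0.0, 1.0, 2.0, 3.0])       # finite state space (assumed)
t = X                                     # sufficient statistic t(x) = x
data_mean_t = 1.7                         # observed mean of t (assumed data)

theta = 0.0
for _ in range(2000):
    w = np.exp(theta * t)
    p = w / w.sum()                       # p_theta on X
    grad = data_mean_t - (p * t).sum()    # score: observed minus expected t
    theta += 0.1 * grad

w = np.exp(theta * t)
p = w / w.sum()
print(theta, (p * t).sum())               # fitted mean matches data_mean_t
```

For Markov random fields the expectation E_θ[t] is no longer available in closed form, which is precisely where the product structure, and the sampling methods of the earlier chapters, enter.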
Archive | 1995
Gerhard Winkler
The results from Chapter 5 will be generalized in several respects: (i) single-site visiting schemes are replaced by schemes selecting subsets of sites; (ii) the functions H_n = β(n)H or H_n = H are replaced by more general functions.
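The role of a schedule of the form H_n = β(n)H can be sketched as follows: a toy Metropolis-style annealing on five states, where the energy H and the logarithmic schedule β(n) are invented for illustration. As β(n) grows slowly, the chain settles into minimizers of H.

```python
import numpy as np

# Toy annealing with H_n = beta(n) * H on five states (the energy H and the
# logarithmic schedule beta(n) are invented). A slowly growing beta(n)
# drives the chain towards the minimizers of H.
rng = np.random.default_rng(5)
H = np.array([3.0, 1.0, 0.0, 2.0, 1.0])   # energy; the minimum sits at state 2
state = 0

for n in range(1, 5000):
    beta_n = np.log(1.0 + n)               # slowly increasing inverse temperature
    prop = rng.integers(len(H))            # uniform random proposal
    # accept with probability exp(-beta_n * max(H[prop] - H[state], 0))
    if rng.random() < np.exp(-beta_n * max(H[prop] - H[state], 0.0)):
        state = prop

print(state, H[state])                     # typically ends at the minimizer
```

Logarithmic schedules of this kind are the ones for which the convergence theorems hold; faster cooling can trap the chain in local minima.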
Archive | 1995
Gerhard Winkler
The aim of the present chapter is to illustrate and discuss the previously introduced concepts. We continue the discussion of noise reduction, or image restoration, begun in the introduction. This specific example is chosen because it is easy to describe and requires no further theory.
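A toy version of this restoration example can be coded directly. The sketch below (all parameters invented) degrades a binary image by flipping each pixel with probability eps and restores it by ICM (iterated conditional modes) under a simple smoothness prior; ICM is used here as a greedy stand-in for the sampling-based methods, not as the text's own algorithm.

```python
import numpy as np

# Toy restoration sketch (all parameters invented): a binary image is
# degraded by a channel flipping each pixel with probability eps, then
# restored by ICM (iterated conditional modes) under a smoothness prior.
rng = np.random.default_rng(6)
L, eps, lam = 16, 0.1, 1.0
truth = np.zeros((L, L), dtype=int)
truth[:, L // 2:] = 1                       # half-dark, half-bright image
noisy = np.where(rng.random((L, L)) < eps, 1 - truth, truth)

x = noisy.copy()

def local_cost(v, i, j):
    # data fidelity plus disagreement with the four nearest neighbours
    cost = 2.0 * (v != noisy[i, j])
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        a, b = i + di, j + dj
        if 0 <= a < L and 0 <= b < L:
            cost += lam * (v != x[a, b])
    return cost

for _ in range(5):                          # a few ICM sweeps
    for i in range(L):
        for j in range(L):
            x[i, j] = min((0, 1), key=lambda v: local_cost(v, i, j))

print((noisy != truth).mean(), (x != truth).mean())  # noise level vs. residual error
```

Isolated flipped pixels are corrected because agreement with four neighbours outweighs the data term; clustered errors may survive, which is one motivation for the stochastic relaxation methods treated earlier.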
Archive | 1995
Gerhard Winkler
The aim of statistical inference is the explanation of data and the extraction of information. If nothing is known about the data, they can only be stored (and will probably be lost as time goes by). Fortunately, there usually is some prior information. It may be fitted into a model, which henceforth serves as a basis for statistical inference.