Olaf Bunke
Humboldt University of Berlin
Publications
Featured research published by Olaf Bunke.
Technometrics | 1984
Olaf Bunke; Bernd Droge
If a linear regression model is used for prediction, the mean squared error of prediction (MSEP) measures the performance of the model. The MSEP is a function of unknown parameters, and good estimates of it are of interest. This article derives a best unbiased estimator and a minimum MSE estimator under the assumption of a normal distribution. It compares the bias and the MSE of these estimators and of some others. Similar results are presented for the case in which the model is used to estimate values of the response function.
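As a rough illustration of the quantity being estimated (not the article's best unbiased or minimum-MSE estimators, whose exact forms are not reproduced here), the sketch below fits an ordinary least-squares model to simulated data and plugs the unbiased residual-variance estimate into the usual normal-theory MSEP formula for a new design point; all data and names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: plug-in estimate of the mean squared error of prediction (MSEP)
# for a normal linear model y = X b + e, evaluated at a new design point x0.
rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta, sigma = np.array([1.0, 2.0, -0.5]), 1.5
y = X @ beta + rng.normal(scale=sigma, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)      # residual sum of squares
sigma2_hat = rss / (n - p)                 # unbiased estimate of sigma^2

# MSEP for predicting a fresh response at a new point x0 (illustrative point):
x0 = np.array([1.0, 0.3, -1.2])
msep_hat = sigma2_hat * (1.0 + x0 @ np.linalg.inv(X.T @ X) @ x0)
print(f"estimated sigma^2 = {sigma2_hat:.3f}, estimated MSEP at x0 = {msep_hat:.3f}")
```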
Statistics | 1974
H. Bunke; Olaf Bunke
A general theory of parameter identifiability, unbiased decision functions and estimable optimal decision sets is developed, covering the usual concepts of identifiability, unbiasedness and estimability. For the estimation of linear parameters in multivariate linear models, the concepts of linear estimability and identifiability coincide, and with a suitable choice of the loss function every linear parameter can be viewed as estimable and identifiable. It is shown that the condition of reducibility used by H. Bunke to construct a solution of the approgression problem is the identifiability of the projection of the unknown regression function onto the space of approximating functions.
Statistics | 1999
Olaf Bunke; Bernd Droge; Jörg Polzehl
The results of analyzing experimental data using a parametric model may depend heavily on the chosen models for the regression and variance functions, and moreover on a possibly underlying preliminary transformation of the variables. In this paper we propose and discuss a complex procedure which consists in a simultaneous selection of parametric regression and variance models from a relatively rich model class, together with Box-Cox variable transformations, by minimization of a cross-validation criterion. For this it is essential to introduce modifications of the standard cross-validation criterion adapted to each of the following objectives: (1) estimation of the unknown regression function, (2) prediction of future values of the response variable, (3) calibration, or (4) estimation of some parameter with a certain meaning in the corresponding field of application. Our idea of a criterion-oriented combination of procedures (which, if applied at all, are usually applied independently or sequentially) is expected to lead to more ac...
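A minimal sketch of this kind of criterion-driven simultaneous choice, assuming a one-dimensional covariate, a small grid of Box-Cox transformations and polynomial regression models, and a leave-one-out cross-validation criterion evaluated on the original response scale; the grids, data and evaluation scale are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

# Minimal sketch: jointly pick a Box-Cox transformation lambda and a polynomial
# regression degree by minimising a leave-one-out cross-validation criterion
# computed on the original response scale.
def boxcox(y, lam):
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.5, 3.0, 60))
y = np.exp(0.8 * x) * rng.lognormal(sigma=0.2, size=x.size)   # positive responses

best = None
for lam in (0.0, 0.5, 1.0):                 # candidate Box-Cox transformations
    for degree in (1, 2, 3):                # candidate regression models
        errs = []
        for i in range(x.size):             # leave-one-out cross-validation
            keep = np.arange(x.size) != i
            coef = np.polyfit(x[keep], boxcox(y[keep], lam), degree)
            pred = inv_boxcox(np.polyval(coef, x[i]), lam)
            errs.append((y[i] - pred) ** 2)
        cv = float(np.mean(errs))
        if best is None or cv < best[0]:
            best = (cv, lam, degree)

print(f"selected lambda = {best[1]}, degree = {best[2]}, CV criterion = {best[0]:.4f}")
```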
Statistics | 1973
Olaf Bunke
A theory of model choice in regression analysis is developed, covering the problems of approximation of regression functions with unknown functional form, of optimal prediction for the realization of some dependent variables, and of polynomial and multiple regression. In this (decision-theoretical) frame a survey of the known model choice procedures, including stepwise procedures and their variants, is given. Moreover, several new procedures and variants are described, e.g. Bayesian, minimax and empirical model choice, and global and robust backward elimination or stepwise regression. The procedures of “maximal multiple correlation” and “optimal regression” and the C_p criterion of Mallows are obtained as special cases of ε-Bayes, Bayes and empirical model choice, respectively. Some comparisons of procedures considering the risk function are reported; e.g., under some assumptions the lower global variant of forward selection is better than the choice of the largest model. The parameters of this procedure can be chosen in su...
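To make one of the classical special cases mentioned above concrete, the sketch below computes Mallows' C_p over all intercept-containing subsets of a simulated linear model and picks the subset with the smallest value; the data and the exhaustive search are illustrative assumptions, not the survey's procedures.

```python
import numpy as np
from itertools import combinations

# Minimal sketch: Mallows' C_p for subset selection in a linear model.
rng = np.random.default_rng(2)
n, k = 80, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
beta = np.array([1.0, 2.0, 0.0, 0.0, -1.5, 0.0])     # only some columns matter
y = X @ beta + rng.normal(size=n)

def rss(cols):
    b, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return np.sum((y - X[:, cols] @ b) ** 2)

sigma2_full = rss(list(range(k + 1))) / (n - k - 1)   # variance from the full model

best = None
for size in range(1, k + 2):
    for cols in combinations(range(k + 1), size):
        if 0 not in cols:                 # always keep the intercept column
            continue
        cp = rss(list(cols)) / sigma2_full - n + 2 * len(cols)
        if best is None or cp < best[0]:
            best = (cp, cols)

print(f"C_p-optimal columns: {best[1]}, C_p = {best[0]:.2f}")
```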
Statistics | 1985
Olaf Bunke
We present an estimator for the vector p of cell probabilities in a k-dimensional contingency table, which is based on smoothing an estimate of the vector q describing the interdependences, while estimating the one-dimensional marginal distributions by the observed frequencies. The background for this estimator is the realistic situation in which no reliable prior information is available on the structure of the contingency table or the corresponding probabilities, and there is at most the hope that the q values at neighbouring.
Statistics | 2005
Olaf Bunke
Bayes estimates are derived in multivariate linear models with unknown distribution. The prior distribution is defined using a Dirichlet prior for the unknown error distribution and a normal-Wishart distribution for the parameters. The posterior distribution is determined and explicit expressions are given in the special cases of location-scale and two-sample models. The calculation of self-informative limits of Bayes estimates yields standard estimates.
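A heavily simplified, related illustration (not the article's explicit normal-Wishart or Dirichlet-prior expressions): with a flat Dirichlet-type prior on the unknown data distribution, posterior draws of a location parameter can be generated as Dirichlet-weighted sample means in the spirit of the Bayesian bootstrap, and averaging those draws gives a Bayes-type location estimate. Everything below is an illustrative assumption.

```python
import numpy as np

# Minimal sketch: Dirichlet-weighted (Bayesian-bootstrap style) posterior draws
# of a location parameter for a one-sample model; their average is a natural
# Bayes-type estimate of the location.
rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=1.0, size=40)           # observed one-sample data

draws = []
for _ in range(2000):
    w = rng.dirichlet(np.ones(x.size))                # Dirichlet(1, ..., 1) weights
    draws.append(np.sum(w * x))                       # weighted mean = posterior draw
draws = np.array(draws)

print(f"Bayes-type estimate of the location: {draws.mean():.3f} "
      f"(sample mean {x.mean():.3f})")
```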
Archive | 2001
Steffen Brenner; Olaf Bunke; Bernd Droge; Joachim Schwalbach
We examine the impact of performance groups on the estimation of the relative importance of firm, industry and other effects on corporate performance. Performance groups comprise firms from the same industry with similar performance over a longer period of time. We present a statistical method which improves the procedure of variance decomposition by allowing firm effects and the interacting effects of firms and time to be unified into the group effects. Applied to a German data set of 219 companies observed over a period of eleven years (1987–1997), it appears that the majority of the firms can be ascribed to performance groups. The variance proportion of the group effects is about one half of that of the non-grouped firm effects. They explain about 17.9 percent of the total variance of the returns.
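A minimal numerical sketch of a sums-of-squares variance decomposition on synthetic, balanced panel data, splitting performance into industry, firm and residual components; it only illustrates the kind of decomposition that the group effects refine, and none of the numbers relate to the German data set mentioned above.

```python
import numpy as np

# Minimal sketch with synthetic data: decompose the total variation of returns
# into industry, firm-within-industry and residual (firm-year) sums of squares.
rng = np.random.default_rng(4)
n_ind, n_firm, n_year = 5, 8, 11
industry_eff = rng.normal(scale=1.0, size=n_ind)
firm_eff = rng.normal(scale=2.0, size=(n_ind, n_firm))
returns = (industry_eff[:, None, None] + firm_eff[:, :, None]
           + rng.normal(scale=1.5, size=(n_ind, n_firm, n_year)))

grand = returns.mean()
ind_means = returns.mean(axis=(1, 2))                  # industry averages
firm_means = returns.mean(axis=2)                      # firm averages

ss_total = np.sum((returns - grand) ** 2)
ss_industry = n_firm * n_year * np.sum((ind_means - grand) ** 2)
ss_firm = n_year * np.sum((firm_means - ind_means[:, None]) ** 2)
ss_resid = ss_total - ss_industry - ss_firm

for name, ss in [("industry", ss_industry), ("firm", ss_firm), ("residual", ss_resid)]:
    print(f"{name:9s} share of variance: {ss / ss_total:6.1%}")
```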
Statistics | 2005
Olaf Bunke; Jan Johannes
A definition of selfinformative Bayes carriers or limits is given as a description of an approach to non-informative Bayes estimation in non- and semiparametric models. It takes the posterior w.r.t. a prior as a new prior and repeats this procedure again and again. A main objective of this article is to clarify the relation between selfinformative carriers or limits and maximum likelihood estimates (MLEs). For a model with dominated probability distributions, we state sufficient conditions under which the set of MLEs is a selfinformative carrier or, in the case of a unique MLE, under which it has the selfinformative limit property. Mixture models are covered. The result on carriers is extended to more general models without a dominating measure. Selfinformative limits, in the case of estimation of hazard functions based on censored observations and in the case of normal linear models with possibly non-identifiable parameters, are shown to be identical to the generalized MLEs in the sense of Gill [Gill, R.D., 1989, Non- and semi-parametric maximum likelihood estimators and the von Mises method. I. Scandinavian Journal of Statistics, 16(2), 97–128.] and Kiefer and Wolfowitz [Kiefer, J. and Wolfowitz, J., 1956, Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. Annals of Mathematical Statistics, 27, 887–906.]. Selfinformative limits are given for semiparametric linear models. For a location model, they are identical to generalized MLEs, while this is not true in general.
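The core iteration is easy to demonstrate in a toy conjugate setting: take the posterior for a normal mean with known variance, reuse it as the prior, and update with the same data again and again. In this sketch (an illustrative assumption, not one of the article's non- or semiparametric models) the iterated posterior mean converges to the sample mean, i.e. the selfinformative limit here coincides with the MLE.

```python
import numpy as np

# Minimal sketch: the "posterior becomes the new prior" iteration for a normal
# mean with known variance; the iterated posterior mean converges to the MLE.
rng = np.random.default_rng(5)
sigma2 = 1.0
x = rng.normal(loc=3.0, scale=np.sqrt(sigma2), size=20)
n, xbar = x.size, x.mean()

m, tau2 = 0.0, 10.0                       # initial prior N(m, tau2)
for step in range(1, 6):
    prec = 1.0 / tau2 + n / sigma2        # posterior precision given the data
    m = (m / tau2 + n * xbar / sigma2) / prec
    tau2 = 1.0 / prec                     # posterior becomes the next prior
    print(f"iteration {step}: posterior mean = {m:.4f}")

print(f"sample mean (MLE) = {xbar:.4f}")
```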
Statistics | 1999
Olaf Bunke
Multivariate linear models with ellipsoidal restrictions are introduced for the modelling of semiparametric regression situations with smooth regression functions. Nonparametric and generalized additive models are covered as special cases. Minimax linear estimators for linear parameters in these multivariate linear models are presented under different forms of the risk function. Quadratic minimax bias estimators for the covariance matrix are derived and are illustrated in special examples. In the univariate case a simple complete class of quadratic estimators is presented and it is used for the derivation of a minimax quadratic conditionally unbiased estimator for the covariance. Nonparametric special cases and adaptive modifications of both estimators are discussed.
Statistics | 1993
Olaf Bunke
Ordinary or weighted jackknife variance or bias estimates may be very inefficient. We show this in the k-sample model, where their risks are k times larger than those of the estimates from asymptotic theory. We propose “extended jackknife estimates” intended to overcome this possible inefficiency. Indeed, in the k-sample model they are identical to the “asymptotic” estimates, which are also the best unbiased and bootstrap estimators. We show this even for general linear models. Under a nonlinear regression model we obtain a high-order asymptotic equivalence between extended jackknife and asymptotic estimates. A considerable small-sample improvement over the ordinary or weighted jackknife may be expected, at least for models with a structure near to that of the k-sample problem. The estimation of the mean and the median of the absolute error of a one-dimensional estimator is briefly discussed from the small- and the large-sample points of view.
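For reference, the ordinary delete-one jackknife variance estimate looks as follows; in this toy one-sample case with the mean as the estimator it coincides exactly with the asymptotic estimate s²/n, which mirrors the kind of equivalence discussed above. The data and the choice of estimator are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: ordinary delete-one jackknife estimate of the variance of an
# estimator (here the sample mean), next to the usual asymptotic estimate s^2/n.
rng = np.random.default_rng(6)
x = rng.exponential(scale=2.0, size=30)
n = x.size

theta_loo = np.array([np.delete(x, i).mean() for i in range(n)])   # leave-one-out
jack_var = (n - 1) / n * np.sum((theta_loo - theta_loo.mean()) ** 2)

asympt_var = x.var(ddof=1) / n
print(f"jackknife variance: {jack_var:.4f}, asymptotic s^2/n: {asympt_var:.4f}")
```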