Featured Research

Bayesian Analysis

Bayes linear adjustment for variance matrices

We examine the problem of covariance belief revision using a geometric approach. We exhibit an inner-product space where covariance matrices live naturally: a space of random real symmetric matrices. The inner product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability specifications.
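
The mechanics of a Bayes linear adjustment can be sketched in a few lines. The rule below is the standard second-order update (adjusted expectation and variance); the moment specifications are made-up numbers for illustration, not taken from the paper:

```python
import numpy as np

def bayes_linear_adjust(E_X, var_X, E_D, var_D, cov_XD, d):
    """Standard Bayes linear update of beliefs about X given data D = d:
       E_d(X)   = E(X) + Cov(X,D) Var(D)^-1 (d - E(D))
       Var_d(X) = Var(X) - Cov(X,D) Var(D)^-1 Cov(D,X)
    """
    K = cov_XD @ np.linalg.inv(var_D)          # resolution operator
    return E_X + K @ (d - E_D), var_X - K @ cov_XD.T

# Illustrative second-order prior specification (made-up numbers).
E_X, var_X = np.array([1.0]), np.array([[4.0]])
E_D = np.array([1.0, 1.0])
var_D = np.array([[5.0, 1.0], [1.0, 5.0]])
cov_XD = np.array([[2.0, 2.0]])

d = np.array([3.0, 0.5])                       # observed data
E_adj, var_adj = bayes_linear_adjust(E_X, var_X, E_D, var_D, cov_XD, d)
print(E_adj, var_adj)
```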

Bayes linear covariance matrix adjustment

In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner product for spaces of random matrices is motivated and constructed. The inner product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations that allow analysis. Adjustment is associated with orthogonal projection, and is illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.
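
For concreteness, one natural inner product on a space of random real symmetric matrices is ⟨A, B⟩ = E[tr(AB)]. The sketch below estimates it by Monte Carlo; this choice of inner product and the random-matrix model are illustrative assumptions, not the thesis's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_symmetric(n, draws):
    """Draws from a toy model of a random real symmetric matrix."""
    A = rng.normal(size=(draws, n, n))
    return (A + A.transpose(0, 2, 1)) / 2

def inner(A, B):
    """Monte Carlo estimate of <A, B> = E[tr(A B)]."""
    return np.einsum('kij,kji->k', A, B).mean()

A = random_symmetric(3, 200_000)
B = A + 0.5 * random_symmetric(3, 200_000)     # a matrix correlated with A
print(inner(A, B), np.sqrt(inner(A, A)))       # inner product, induced norm
```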

Bayes linear covariance matrix adjustment for multivariate dynamic linear models

A methodology is developed for the adjustment of the covariance matrices underlying a multivariate constant time series dynamic linear model. The covariance matrices are embedded in a distribution-free inner-product space of matrix objects which facilitates such adjustment. This approach helps to make the analysis simple, tractable and robust. To illustrate the methods, a simple model is developed for a time series representing sales of certain brands of a product from a cash-and-carry depot. The covariance structure underlying the model is revised, and the benefits of this revision for first-order inferences are then examined.
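
As background, a constant DLM pairs an observation equation with a system equation whose covariance matrices V and W do not change over time; these are the matrices the paper's methodology revises. A minimal filtering recursion, with illustrative names and data, and no claim to the paper's notation:

```python
import numpy as np

def constant_dlm_filter(ys, F, G, V, W, m, C):
    """One-step forecasts for a constant multivariate DLM:
       Y_t = F theta_t + nu_t,          nu_t ~ (0, V)
       theta_t = G theta_{t-1} + w_t,   w_t  ~ (0, W)
    V and W are held fixed here; the paper's contribution is a Bayes
    linear route to revising beliefs about them from data.
    """
    for y in ys:
        a = G @ m                              # prior mean for theta_t
        R = G @ C @ G.T + W                    # prior variance for theta_t
        f, Q = F @ a, F @ R @ F.T + V          # one-step forecast moments
        A = R @ F.T @ np.linalg.inv(Q)         # adaptive coefficient
        m, C = a + A @ (y - f), R - A @ Q @ A.T
        yield f, Q

# Illustrative run on noise; a real example would use the sales series.
rng = np.random.default_rng(1)
ys = rng.normal(size=(20, 2))
filt = constant_dlm_filter(ys, np.eye(2), np.eye(2),
                           np.eye(2), 0.1 * np.eye(2),
                           np.zeros(2), np.eye(2))
for f, Q in filt:
    pass                                       # f, Q feed first-order inferences
```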

Bayes linear variance adjustment for time series

This paper exhibits quadratic products of linear combinations of observables which identify the covariance structure underlying the univariate locally linear time series dynamic linear model. The first- and second-order moments for the joint distribution over these observables are given, allowing Bayes linear learning about the underlying covariance structure of the time series model. An example is given which illustrates the methodology and highlights the practical implications of the theory.
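
To see what such quadratic products can look like, note that second differences of the observations in a locally linear model are linear combinations free of the level and slope, so their squares and lagged products have expectations that are linear in the underlying variances. The particular combinations below are an illustrative reconstruction, not necessarily the ones exhibited in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
s_nu, s_w1, s_w2 = 1.0, 0.5, 0.2       # observation, level and slope std devs
T = 500_000

# Locally linear DLM: Y_t = mu_t + nu_t,
#   mu_t = mu_{t-1} + beta_{t-1} + w1_t,   beta_t = beta_{t-1} + w2_t.
beta = np.cumsum(rng.normal(0, s_w2, T))
beta_lag = np.concatenate(([0.0], beta[:-1]))
mu = np.cumsum(beta_lag + rng.normal(0, s_w1, T))
y = mu + rng.normal(0, s_nu, T)

# Second differences drop mu and beta; their quadratic products have
# expectations linear in the three variances (simulated vs theoretical):
d = np.diff(y, n=2)
print((d * d).mean(),          s_w2**2 + 2 * s_w1**2 + 6 * s_nu**2)
print((d[:-1] * d[1:]).mean(), -s_w1**2 - 4 * s_nu**2)
print((d[:-2] * d[2:]).mean(),  s_nu**2)
```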

Bayesian Method of Moments (BMOM) Analysis of Mean and Regression Models

A Bayesian method of moments/instrumental variable (BMOM/IV) approach is developed and applied to the analysis of mean and multiple regression models. Given a single set of data, it is shown how to obtain posterior and predictive moments without the use of likelihood functions, prior densities and Bayes' Theorem. The posterior and predictive moments, based on a few relatively weak assumptions, are then used to obtain maximum entropy densities for parameters, realized error terms and future values of variables. Posterior means for parameters and realized error terms are shown to be equal to certain well-known estimates, and are rationalized in terms of quadratic loss functions. Conditional maxent posterior densities for means and regression coefficients given scale parameters are in the normal form, while the maxent densities of scale parameters are in the exponential form. Marginal densities for individual regression coefficients, realized error terms and future values are in the Laplace or double-exponential form, with heavier tails than normal densities with the same means and variances. It is concluded that these results will be very useful, particularly when there is difficulty in formulating appropriate likelihood functions and prior densities needed in traditional maximum likelihood and Bayesian approaches.
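
Two of the stated facts are easy to illustrate numerically: the BMOM posterior mean for regression coefficients coincides with the familiar least squares estimate, and a Laplace density has heavier tails than a normal density with the same mean and variance. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)

# The BMOM posterior mean of the regression coefficients equals the
# familiar least squares estimate, obtained here without a likelihood,
# prior density, or Bayes' Theorem.
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=100)
print(np.linalg.lstsq(X, y, rcond=None)[0])    # posterior mean = OLS

# Laplace vs normal with equal mean (0) and variance v: the density
# ratio grows without bound in the tails, so the Laplace form is
# heavier-tailed, as the abstract notes for the marginal densities.
v = 1.0
b = np.sqrt(v / 2)                             # Laplace scale, Var = 2 b^2
for x in (2.0, 3.0, 4.0):
    normal = np.exp(-x**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    laplace = np.exp(-abs(x) / b) / (2 * b)
    print(x, laplace / normal)
```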

Bayesian Variable Selection with Related Predictors

In data sets with many predictors, algorithms for identifying a good subset of predictors are often used. Most such algorithms do not account for any relationships between predictors. For example, stepwise regression might select a model containing an interaction AB but neither main effect A nor B. This paper develops mathematical representations of this and other relations between predictors, which may then be incorporated in a model selection procedure. A Bayesian approach that goes beyond the standard independence prior for variable selection is adopted, and preference for certain models is interpreted as prior information. Priors relevant to arbitrary interactions and polynomials, dummy variables for categorical factors, competing predictors, and restrictions on the size of the models are developed. Since the relations developed are for priors, they may be incorporated in any Bayesian variable selection algorithm for any type of linear model. The application of the methods is illustrated via the Stochastic Search Variable Selection algorithm of George and McCulloch (1993), which is modified to utilize the new priors. The performance of the approach is illustrated with two constructed examples and a computer performance dataset.

Keywords: Model Selection, Prior Distributions, Interaction, Dummy Variable
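
As a rough sketch of how such a relation can be encoded, the prior below puts zero mass on any model that includes the interaction AB without both main effects (a strong-heredity rule), while otherwise letting terms enter independently. This is an illustrative construction, not the paper's exact prior; any prior of this kind can be dropped into an SSVS-style sampler in place of the independence prior:

```python
from itertools import product

p = 0.5                                    # marginal inclusion probability

def prior(model):
    """Unnormalized strong-heredity prior over (A, B, AB) inclusion flags."""
    a, b, ab = model
    if ab and not (a and b):
        return 0.0                         # AB without A and B is ruled out
    k = sum(model)
    return p**k * (1 - p)**(3 - k)

models = list(product([0, 1], repeat=3))
Z = sum(prior(m) for m in models)          # normalizing constant
for m in models:
    print(m, prior(m) / Z)                 # e.g. (0, 1, 1) gets probability 0
```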

Confidence Intervals from One Observation

Robert Machol's surprising result, that from a single observation it is possible to have finite-length confidence intervals for the parameters of location-scale models, is reproduced and extended. Two previously unpublished modifications are included. First, Herbert Robbins's nonparametric confidence interval is obtained. Second, I introduce a technique for obtaining confidence intervals for the scale parameter that have finite length in the logarithmic metric.

Keywords: Theory/Foundations, Estimation, Prior Distributions, Non-parametrics & Semi-parametrics, Geometry of Inference, Confidence Intervals, Location-Scale Models
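
The flavor of the result is easy to check by simulation. The interval below, [x - |x|, x + |x|] for a single draw from N(theta, sigma^2), is a textbook illustration of the one-observation phenomenon rather than Machol's or Robbins's exact construction; its coverage is at least 1/2 for every theta and sigma:

```python
import numpy as np

rng = np.random.default_rng(4)

# A single X ~ N(theta, sigma^2), both parameters unknown.  The
# finite-length interval [X - |X|, X + |X|] covers theta with
# probability at least 1/2 whatever theta and sigma (equal to
# Phi(|theta| / (2 sigma)) for theta != 0).
for theta, sigma in [(0.0, 1.0), (1.0, 1.0), (5.0, 2.0), (-3.0, 10.0)]:
    x = rng.normal(theta, sigma, 1_000_000)
    coverage = (np.abs(x - theta) <= np.abs(x)).mean()
    print(theta, sigma, round(coverage, 3))
```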

Local computation of influence propagation through Bayes linear belief networks

In recent years there has been interest in the theory of local computation over probabilistic Bayesian graphical models. In this paper, local computation over Bayes linear belief networks is shown to be amenable to a similar approach. However, the linear structure offers many simplifications and advantages relative to more complex models, and these are examined with reference to some illustrative examples.
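
The essence of local computation in this setting can be shown with a three-node chain X, Y, Z (scalar quantities, zero prior means, made-up moments): when Y separates X from Z, the adjustment of X by an observation of Z can be propagated through Y in two local steps and agrees with the direct global adjustment:

```python
# Chain X - Y - Z with zero prior means (illustrative moments).  Bayes
# linear separation means Cov(X,Z) = Cov(X,Y) Var(Y)^-1 Cov(Y,Z), so the
# influence of observing Z = z on X can be computed locally through Y.
var_X, var_Y, var_Z = 4.0, 2.0, 3.0
cov_XY, cov_YZ = 1.5, 1.0
cov_XZ = cov_XY * cov_YZ / var_Y           # implied by separation

z = 2.5                                    # observed value of Z

E_X_direct = cov_XZ / var_Z * z            # global adjustment of X by Z

E_Y_adj = cov_YZ / var_Z * z               # local step 1: adjust Y by Z
E_X_local = cov_XY / var_Y * E_Y_adj       # local step 2: push on to X

print(E_X_direct, E_X_local)               # identical by construction
```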

Minimal information in velocity space

Jaynes' transformation group principle is used to derive the objective prior for the velocity of a non-zero rest-mass particle. In the case of classical mechanics, invariance under the classical law of addition of velocities leads to an improper constant prior over the unbounded velocity space. The application of the relativistic law of addition of velocities leads to a less simple prior. It can, however, be rewritten as a uniform volumetric distribution if the relativistic velocity space is given a non-trivial metric.
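
In one dimension the invariance argument can be checked directly: rapidity arctanh(v/c) is additive under the relativistic composition law, so a prior uniform in rapidity, proportional to dv/(1 - v^2/c^2) on velocity itself, is the invariant one. A quick numerical check of the additivity that drives the argument:

```python
import numpy as np

c = 1.0
v, u = 0.6, 0.7
w = (v + u) / (1 + v * u / c**2)              # relativistic velocity addition

# Rapidity is additive, so a prior uniform in rapidity is invariant under
# velocity composition; on v it has density ~ 1/(1 - v^2/c^2).
print(np.arctanh(w / c))                      # 1.5604...
print(np.arctanh(v / c) + np.arctanh(u / c))  # the same value
```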

Suppressing Random Walks in Markov Chain Monte Carlo Using Ordered Overrelaxation

Markov chain Monte Carlo methods such as Gibbs sampling and simple forms of the Metropolis algorithm typically move about the distribution being sampled via a random walk. For the complex, high-dimensional distributions commonly encountered in Bayesian inference and statistical physics, the distance moved in each iteration of these algorithms will usually be small, because it is difficult or impossible to transform the problem to eliminate dependencies between variables. The inefficiency inherent in taking such small steps is greatly exacerbated when the algorithm operates via a random walk, as in such a case moving to a point n steps away will typically take around n^2 iterations. Such random walks can sometimes be suppressed using "overrelaxed" variants of Gibbs sampling (a.k.a. the heatbath algorithm), but such methods have hitherto been largely restricted to problems where all the full conditional distributions are Gaussian. I present an overrelaxed Markov chain Monte Carlo algorithm based on order statistics that is more widely applicable. In particular, the algorithm can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed. The method is demonstrated on an inference problem for a simple hierarchical Bayesian model.
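
A compact sketch of the order-statistics scheme the abstract describes, with illustrative defaults: draw K values from the current full conditional, place the old value among them in sorted order, and return the value whose rank mirrors it. The Gaussian demonstration at the end shows the strong negative lag-one correlation that suppresses random-walk behaviour:

```python
import numpy as np

rng = np.random.default_rng(5)

def ordered_overrelax(x, sample_conditional, K=15):
    """One ordered-overrelaxation update of a single variable: the old
    value x gets rank r among K fresh draws from its full conditional,
    and the update returns the value of mirrored rank K - r."""
    draws = np.sort(sample_conditional(K))
    r = np.searchsorted(draws, x)          # rank of x among the K+1 points
    m = K - r
    if m == r:
        return x                           # x already sits at the centre
    return draws[m] if m < r else draws[m - 1]

# Demonstration: repeatedly updating against a fixed N(0, 1) conditional
# leaves the target invariant while making successive states strongly
# negatively correlated, which is what suppresses the random walk.
xs = [0.0]
for _ in range(20_000):
    xs.append(ordered_overrelax(xs[-1], lambda k: rng.normal(size=k)))
xs = np.asarray(xs)
print(xs.mean(), xs.var())                 # approx 0 and 1
print(np.corrcoef(xs[:-1], xs[1:])[0, 1])  # markedly negative
```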
