D. M. Titterington
University of Glasgow
Publications
Featured researches published by D. M. Titterington.
Bayesian Analysis | 2006
Gilles Celeux; F. Forbes; Christian P. Robert; D. M. Titterington
The deviance information criterion (DIC) introduced by Spiegelhalter et al. (2002) is directly inspired by linear and generalised linear models, but it is not so naturally defined for missing data models. In this paper, we reassess the criterion for such models, testing the behaviour of various extensions in the cases of mixture and random effect models.
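For reference, the criterion under reassessment is the standard DIC of Spiegelhalter et al. (2002); writing the deviance as $D(\theta) = -2\log f(y \mid \theta)$, it can be stated as follows.

```latex
\bar{D} = \mathbb{E}_{\theta \mid y}\!\left[ D(\theta) \right], \qquad
p_D = \bar{D} - D(\bar{\theta}), \qquad
\mathrm{DIC} = \bar{D} + p_D = D(\bar{\theta}) + 2\,p_D ,
```

where $\bar{\theta}$ is a plug-in estimate such as the posterior mean. With missing data, both the choice of $\bar{\theta}$ and the choice of likelihood on which to base the deviance (observed-data or complete-data) become ambiguous, which is one source of the multiple extensions compared in the paper.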
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1991
Alan M. Thompson; John C. Brown; Jim Kay; D. M. Titterington
The method of regularization is portrayed as providing a compromise between fidelity to the data and smoothness, with the tradeoff being determined by a scalar smoothing parameter. Various ways of choosing this parameter are discussed in the case of quadratic regularization criteria. They are compared algebraically, and their statistical properties are comparatively assessed from the results of an extensive simulation study based on simple images.
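As a concrete illustration of one data-driven selector of the kind compared in such studies (generalized cross-validation; this sketch is not the paper's code), consider ridge-type quadratic regularization with an identity penalty. The design matrix and data below are purely synthetic:

```python
import numpy as np

def gcv_ridge(A, y, lambdas):
    """Choose the smoothing parameter of ||y - A x||^2 + lam * ||x||^2
    by generalized cross-validation (GCV). Illustrative sketch only."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uty = U.T @ y
    n = len(y)
    best = (np.inf, None, None)
    for lam in lambdas:
        shrink = s**2 / (s**2 + lam)       # diagonal of the hat matrix in the SVD basis
        resid = y - U @ (shrink * Uty)     # (I - H(lam)) y
        score = n * resid @ resid / (n - shrink.sum())**2   # GCV(lam)
        if score < best[0]:
            x_hat = Vt.T @ (s / (s**2 + lam) * Uty)
            best = (score, lam, x_hat)
    return best[1], best[2]

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 30))
x_true = np.zeros(30); x_true[:5] = 2.0
y = A @ x_true + rng.normal(scale=0.5, size=80)
lam, x_hat = gcv_ridge(A, y, np.logspace(-3, 3, 25))
print("GCV-chosen lambda:", lam)
```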
Computational Statistics & Data Analysis | 2007
Clare A. McGrory; D. M. Titterington
Variational methods, which have become popular in the neural computing/machine learning literature, are applied to the Bayesian analysis of mixtures of Gaussian distributions. It is also shown how the deviance information criterion (DIC) can be extended to these types of model by exploiting the use of variational approximations. The use of variational methods for model selection and the calculation of a DIC are illustrated with real and simulated data. The variational approach allows the simultaneous estimation of the component parameters and the model complexity. It is found that initial selection of a large number of components results in superfluous components being eliminated as the method converges to a solution. This corresponds to an automatic choice of model complexity. The appropriateness of this is reflected in the DIC values.
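The automatic elimination of superfluous components can be reproduced with a stock variational implementation; the sketch below uses scikit-learn's BayesianGaussianMixture (not the authors' software, and without the DIC computation), deliberately started with far too many components:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# Two well-separated Gaussian clusters in 2-D.
X = np.vstack([rng.normal(loc=-3, size=(200, 2)),
               rng.normal(loc=+3, size=(200, 2))])

# Start with many components; the variational posterior drives the
# weights of superfluous components toward zero as fitting converges.
vb = BayesianGaussianMixture(n_components=10,
                             weight_concentration_prior=0.01,
                             max_iter=500, random_state=0).fit(X)
print(np.round(vb.weights_, 3))   # all but ~2 entries should be near 0
```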
Philosophical Transactions of the Royal Society A | 1991
W. Qian; D. M. Titterington
Parameter estimation from noisy versions of realizations of Markov models is extremely difficult in all but very simple examples. The paper identifies these difficulties, reviews ways of coping with them in practice, and discusses in detail a class of methods with a Monte Carlo flavour. Their performance on simple examples suggests that they should be valuable, practically feasible procedures in the context of a range of otherwise intractable problems. An illustration is provided based on satellite data.
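To give a flavour of the Monte Carlo methods in question, here is a generic Gibbs sampler for restoring a noisy binary Markov chain; it is not the paper's specific procedure, and the interaction and noise parameters are assumed known, whereas estimating them is the hard problem the paper addresses:

```python
import numpy as np

def gibbs_restore(y, beta=1.0, sigma=0.5, n_sweeps=200, rng=None):
    """Gibbs sampling of a hidden +/-1 Markov chain x observed as
    y = x + N(0, sigma^2) noise. beta is the (assumed known)
    interaction parameter. Illustrative sketch only."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    x = np.where(y > 0, 1, -1)              # crude initialization
    for _ in range(n_sweeps):
        for i in range(n):
            nb = (x[i - 1] if i > 0 else 0) + (x[i + 1] if i < n - 1 else 0)
            # log-odds of x_i = +1 vs -1 given neighbours and datum y_i
            logit = 2 * beta * nb + 2 * y[i] / sigma**2
            x[i] = 1 if rng.random() < 1 / (1 + np.exp(-logit)) else -1
    return x

rng = np.random.default_rng(2)
x_true = np.repeat([1, -1, 1], [30, 40, 30])
y = x_true + rng.normal(scale=0.5, size=100)
print(np.mean(gibbs_restore(y, rng=rng) == x_true))  # restoration accuracy
```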
Philosophical Transactions of the Royal Society A | 2009
I. M. Johnstone; D. M. Titterington
Modern applications of statistical theory and methods can involve extremely large datasets, often with huge numbers of measurements on each of a comparatively small number of experimental units. New methodology and accompanying theory have emerged in response: the goal of this Theme Issue is to illustrate a number of these recent developments. This overview article introduces the difficulties that arise with high-dimensional data in the context of the very familiar linear statistical model: we give a taste of what can nevertheless be achieved when the parameter vector of interest is sparse, that is, contains many zero elements. We describe other ways of identifying low-dimensional subspaces of the data space that contain all useful information. The topic of classification is then reviewed along with the problem of identifying, from within a very large set, the variables that help to classify observations. Brief mention is made of the visualization of high-dimensional data, and ways to handle computational problems in Bayesian analysis are described. At appropriate points, reference is made to the other papers in the issue.
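As a small illustration of what sparsity buys in the linear model, the lasso (used here purely as a familiar example of the methodology surveyed) can recover a handful of relevant coefficients even when the number of variables far exceeds the number of observations; all data below are simulated:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n, p = 100, 500                      # many more variables than observations
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 3.0   # sparse truth: 5 nonzero coefficients
y = X @ beta + rng.normal(size=n)

fit = LassoCV(cv=5).fit(X, y)        # penalty chosen by cross-validation
print("nonzero coefficients found:", np.sum(fit.coef_ != 0))
```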
Technometrics | 1992
Peter Hall; D. M. Titterington
An alternative procedure to the smoothed linear fitting method of McDonald and Owen is developed. The procedure is based on the detection of discontinuities by comparing, at any given position, three smooth fits. Diagnostics are used to detect discontinuities in the regression function itself (edge detection) or in its first derivative (peak detection). An application in electron microscopy is discussed.
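A stripped-down variant of the idea, comparing only the two one-sided local linear fits at each position (the full procedure also uses a central fit and principled diagnostics rather than the fixed threshold assumed here), might look like this:

```python
import numpy as np

def detect_edges(x, y, h=10, thresh=1.0):
    """At each interior point, fit a line on a left window and on a right
    window; a large disagreement between the two fitted values at that
    point suggests a jump in the regression function. Sketch only."""
    jumps = []
    for i in range(h, len(x) - h):
        left  = np.polyval(np.polyfit(x[i-h:i], y[i-h:i], 1), x[i])
        right = np.polyval(np.polyfit(x[i:i+h], y[i:i+h], 1), x[i])
        if abs(left - right) > thresh:
            jumps.append(x[i])
    return jumps

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 200)
y = (x > 0.5).astype(float) * 2 + rng.normal(scale=0.2, size=200)
print(detect_edges(x, y)[:3])   # positions flagged near the jump at x = 0.5
```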
Journal of Multivariate Analysis | 1988
Peter Hall; D. M. Titterington
We describe a unified approach to the construction of confidence bands in nonparametric density estimation and regression. Our techniques are based on interpolation formulae in numerical differentiation, and our arguments generate a variety of bands depending on the assumptions one is prepared to make about derivatives of the unknown function. The bands are simultaneous, in the sense that they contain the entire function with probability at least a prescribed amount. The order of magnitude of the minimum width of any confidence band is described, and our bands are shown to achieve that order. Examples illustrate applications of the technique.
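For contrast, a much cruder route to simultaneity, not the interpolation-based construction of the paper, is a Bonferroni correction of pointwise intervals over a grid; the sketch below does this for a kernel density estimate and simply ignores smoothing bias, which is precisely what the paper's derivative assumptions are designed to control:

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(5)
data = rng.normal(size=500)
grid = np.linspace(-3, 3, 50)

kde = gaussian_kde(data)
fhat = kde(grid)
h = kde.factor * data.std(ddof=1)          # bandwidth used by gaussian_kde
RK = 1 / (2 * np.sqrt(np.pi))              # roughness of the Gaussian kernel
se = np.sqrt(fhat * RK / (len(data) * h))  # asymptotic pointwise std. error

alpha = 0.05
z = norm.ppf(1 - alpha / (2 * len(grid)))  # Bonferroni over the grid points
lower, upper = fhat - z * se, fhat + z * se
print(float(lower.min()), float(upper.max()))
```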
Statistics and Computing | 2005
Jian Qing Shi; Roderick Murray-Smith; D. M. Titterington
As a result of their good performance in practice and their desirable analytical properties, Gaussian process regression models are becoming increasingly of interest in statistics, engineering and other fields. However, two major problems arise when the model is applied to a large data-set with repeated measurements. One stems from the systematic heterogeneity among the different replications, and the other is the requirement to invert a covariance matrix which is involved in the implementation of the model. The dimension of this matrix equals the sample size of the training data-set. In this paper, a Gaussian process mixture model for regression is proposed for dealing with the above two problems, and a hybrid Markov chain Monte Carlo (MCMC) algorithm is used for its implementation. Application to a real data-set is reported.
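The covariance-inversion bottleneck is easy to see in plain Gaussian process regression: in the sketch below, forming and solving with the n x n matrix K is the O(n^3) step whose cost grows with the training sample, motivating the mixture construction (the RBF kernel and hyperparameter values are illustrative assumptions):

```python
import numpy as np

def gp_predict(X, y, X_new, length=1.0, noise=0.1):
    """Plain GP regression with an RBF kernel. Solving with the n x n
    covariance K is the expensive step. Illustrative sketch only."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(X, X) + noise**2 * np.eye(len(X))   # n x n covariance matrix
    alpha = np.linalg.solve(K, y)             # O(n^3) solve
    return k(X_new, X) @ alpha                # posterior predictive mean

rng = np.random.default_rng(6)
X = rng.uniform(0, 5, 200)
y = np.sin(X) + rng.normal(scale=0.1, size=200)
X_new = np.linspace(0, 5, 5)
print(np.round(gp_predict(X, y, X_new), 2))   # close to sin(X_new)
```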
IEEE Transactions on Image Processing | 1995
G. Archer; D. M. Titterington
Methods are reviewed for choosing regularized restorations in image processing. In particular, a method developed by Galatsanos and Katsaggelos (see ibid., vol. 1, pp. 322-336, 1992) is given a Bayesian interpretation and is compared with other Bayesian and non-Bayesian alternatives. A small illustrative example is provided, and the discussion of noise variance estimation by Galatsanos et al. is complemented.
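One generic Bayesian device for choosing the regularization parameter, in the spirit of the interpretations reviewed though not a reproduction of any specific method from the paper, is to maximize the marginal likelihood (evidence) over that parameter; a minimal sketch for a quadratic model with Gaussian noise and prior:

```python
import numpy as np

def log_evidence(y, U, s, lam, sigma=1.0):
    """Log marginal likelihood of y under y = A x + N(0, sigma^2 I) with
    prior x ~ N(0, (sigma^2 / lam) I); U, s come from the SVD of A.
    Computed up to a constant in lam. Illustrative sketch only."""
    Uty = U.T @ y
    c = s**2 / lam + 1.0                  # scaled marginal-covariance eigenvalues
    quad = (y @ y - np.sum((1 - 1 / c) * Uty**2)) / sigma**2
    return -0.5 * (quad + np.sum(np.log(c)))

rng = np.random.default_rng(7)
A = rng.normal(size=(120, 40))            # stand-in for a blurring operator
x = rng.normal(size=40)
y = A @ x + rng.normal(size=120)
U, s, _ = np.linalg.svd(A, full_matrices=False)
lams = np.logspace(-2, 2, 40)
best = lams[np.argmax([log_evidence(y, U, s, l) for l in lams])]
print("evidence-maximizing lambda:", best)
```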
Machine Learning | 2003
Ernest Fokoué; D. M. Titterington
Factor Analysis (FA) is a well established probabilistic approach to unsupervised learning for complex systems involving correlated variables in high-dimensional spaces. FA aims principally to reduce the dimensionality of the data by projecting high-dimensional vectors on to lower-dimensional spaces. However, because of its inherent linearity, the generic FA model is essentially unable to capture data complexity when the input space is nonhomogeneous. A finite Mixture of Factor Analysers (MFA) is a globally nonlinear and therefore more flexible extension of the basic FA model that overcomes the above limitation by combining the local factor analysers of each cluster of the heterogeneous input space. The structure of the MFA model offers the potential to model the density of high-dimensional observations adequately while also allowing both clustering and local dimensionality reduction. Many aspects of the MFA model have recently come under close scrutiny, from both the likelihood-based and the Bayesian perspectives. In this paper, we adopt a Bayesian approach, and more specifically a treatment that bases estimation and inference on the stochastic simulation of the posterior distributions of interest. We first treat the case where the number of mixture components and the number of common factors are known and fixed, and we derive an efficient Markov Chain Monte Carlo (MCMC) algorithm based on Data Augmentation to perform inference and estimation. We also consider the more general setting where there is uncertainty about the dimensionalities of the latent spaces (number of mixture components and number of common factors unknown), and we estimate the complexity of the model by using the sample paths of an ergodic Markov chain obtained through the simulation of a continuous-time stochastic birth-and-death point process. The main strengths of our algorithms are that they are both efficient (our algorithms are all based on familiar and standard distributions that are easy to sample from, and many characteristics of interest are by-products of the same process) and easy to interpret. Moreover, they are straightforward to implement and offer the possibility of assessing the goodness of the results obtained. Experimental results on both artificial and real data reveal that our approach performs well, and can therefore be envisaged as an alternative to the other approaches used for this model.
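For concreteness, the generative structure of the MFA model that these samplers target can be simulated directly; all parameter values below are illustrative only:

```python
import numpy as np

def sample_mfa(n, pi, mu, Lam, psi, rng):
    """Draw n observations from a mixture of factor analysers: pick a
    component k ~ pi, draw a latent factor z ~ N(0, I_q), then set
    x = mu_k + Lam_k z + eps with eps ~ N(0, diag(psi_k)). Sketch of
    the generative model only."""
    K, (p, q) = len(pi), Lam[0].shape
    X = np.empty((n, p))
    for i in range(n):
        k = rng.choice(K, p=pi)
        z = rng.normal(size=q)
        X[i] = mu[k] + Lam[k] @ z + rng.normal(scale=np.sqrt(psi[k]))
    return X

rng = np.random.default_rng(8)
p, q = 10, 2                              # 10-D data, 2 common factors per cluster
pi = [0.5, 0.5]
mu = [np.zeros(p), 5 * np.ones(p)]
Lam = [rng.normal(size=(p, q)) for _ in range(2)]
psi = [0.1 * np.ones(p), 0.1 * np.ones(p)]
X = sample_mfa(500, pi, mu, Lam, psi, rng)
print(X.shape)
```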