Publication


Featured research published by Peter M. Williams.


Neural Computation | 1995

Bayesian regularization and pruning using a Laplace prior

Peter M. Williams

Standard techniques for improved generalization from neural networks include weight decay and pruning. Weight decay has a Bayesian interpretation with the decay function corresponding to a prior over weights. The method of transformation groups and maximum entropy suggests a Laplace rather than a Gaussian prior. After training, the weights then arrange themselves into two classes: (1) those with a common sensitivity to the data error and (2) those failing to achieve this sensitivity and that therefore vanish. Since the critical value is determined adaptively during training, pruning, in the sense of setting weights to exact zeros, becomes an automatic consequence of regularization alone. The count of free parameters is also reduced automatically as weights are pruned. A comparison is made with results of MacKay using the evidence framework and a Gaussian regularizer.
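
A Laplace prior over weights is equivalent to an L1 penalty on the training objective, so the pruning effect can be seen in a few lines. Below is a minimal sketch using a linear model and proximal-gradient (soft-thresholding) updates rather than the paper's neural network setup; the data, penalty strength alpha, and step size are illustrative assumptions.

```python
import numpy as np

# A sketch of the Laplace-prior effect: the prior is equivalent to an L1
# penalty, and proximal-gradient training drives irrelevant weights to
# exactly zero, so pruning falls out of regularization alone. A linear
# model stands in for the network; data, alpha and lr are illustrative.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.0, 0.5]          # only 3 of 10 inputs are relevant
y = X @ true_w + 0.1 * rng.normal(size=200)

alpha, lr = 0.1, 0.01                  # L1 strength and step size (assumed)
w = np.zeros(10)
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y)  # gradient of the data error
    w -= lr * grad
    # Soft-thresholding: the proximal step for the L1 (Laplace) penalty.
    w = np.sign(w) * np.maximum(np.abs(w) - lr * alpha, 0.0)

print("surviving weights:", np.flatnonzero(w))  # the rest vanish exactly
```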


Neural Computation | 1996

Using neural networks to model conditional multivariate densities

Peter M. Williams

Neural network outputs are interpreted as parameters of statistical distributions. This allows us to fit conditional distributions in which the parameters depend on the inputs to the network. We exploit this in modeling multivariate data, including the univariate case, in which there may be input-dependent (e.g., time-dependent) correlations between output components. This provides a novel way of modeling conditional correlation that extends existing techniques for determining input-dependent (local) error bars.
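
A minimal sketch of the core idea: read two network outputs as the mean and log standard deviation of a conditional Gaussian and fit them by minimizing the negative log-likelihood. The affine "network" and the synthetic heteroscedastic data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Network outputs interpreted as distribution parameters: mu(x) and
# s(x) = log sigma(x), each affine in x, trained on the Gaussian negative
# log-likelihood. Data and learning rate are illustrative assumptions.

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=500)
y = 2 * x + 0.5 * np.exp(0.5 * x) * rng.normal(size=500)  # input-dependent noise

w = np.zeros(4)  # [w_mu, b_mu, w_s, b_s]
lr = 0.05
for _ in range(3000):
    mu = w[0] * x + w[1]
    s = w[2] * x + w[3]
    inv_var = np.exp(-2 * s)
    r = y - mu
    # Gradients of mean NLL = mean(s + 0.5 * r**2 * exp(-2s)) + const:
    d_mu = -r * inv_var          # dNLL/dmu
    d_s = 1.0 - r**2 * inv_var   # dNLL/ds
    grad = np.array([(d_mu * x).mean(), d_mu.mean(),
                     (d_s * x).mean(), d_s.mean()])
    w -= lr * grad

print("sigma at x=-1 and x=+1:",                 # input-dependent error bars
      np.exp(-w[2] + w[3]).round(3), np.exp(w[2] + w[3]).round(3))
```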


IEEE Transactions on Neural Networks | 2007

A Geometrical Method to Improve Performance of the Support Vector Machine

Peter M. Williams; Sheng Li; Jianfeng Feng; Si Wu

The performance of a support vector machine (SVM) largely depends on the kernel function used. This letter investigates a geometrical method to optimize the kernel function. The method is a modification of the one proposed by S. Amari and S. Wu. It uses prior knowledge obtained in a primary training step to conformally rescale the kernel function, so that the separation between the two classes of data is enlarged. The result is that the new algorithm works efficiently and overcomes the susceptibility of the original method.
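
A sketch in the spirit of Amari and Wu's conformal rescaling: train a primary SVM, build a factor c(x) that is large near its support vectors (and hence near the decision boundary), and retrain with the rescaled kernel K~(x, x') = c(x) c(x') K(x, x'). The data, the width tau, and the exact form of c(x) are illustrative assumptions, not the paper's modification.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # circular boundary

primary = SVC(kernel="rbf", gamma=1.0).fit(X, y)      # primary training step
sv = primary.support_vectors_
tau = 0.5                       # width of the conformal factor (assumed)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def c(A):
    # Conformal factor: a sum of bumps centred on the primary support
    # vectors, so it is largest near the learned boundary.
    return rbf(A, sv, gamma=1.0 / (2 * tau ** 2)).sum(-1)

def rescaled_kernel(A, B):
    # K~(a, b) = c(a) c(b) K(a, b), enlarging separation near the boundary.
    return c(A)[:, None] * c(B)[None, :] * rbf(A, B)

secondary = SVC(kernel=rescaled_kernel).fit(X, y)     # retrain, rescaled
print("primary/secondary training accuracy:",
      primary.score(X, y), secondary.score(X, y))
```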


IEEE Transactions on Neural Networks | 2001

The generalization error of the symmetric and scaled support vector machines

Jianfeng Feng; Peter M. Williams

It is generally believed that the support vector machine (SVM) optimizes the generalization error and outperforms other learning machines. We show analytically, by concrete examples in the one-dimensional case, that the SVM does improve the mean and standard deviation of the generalization error by a constant factor, compared to the worst learning machine. Our approach is in terms of extreme value theory, and both the mean and variance of the generalization error are calculated exactly for all the cases considered. We propose a new version of the SVM, called the scaled SVM, which can further reduce the mean of the generalization error of the SVM.


Neural Computing and Applications | 1993

Aeromagnetic compensation using neural networks

Peter M. Williams

Airborne magnetic surveys in geophysical exploration can be subject to interference effects from the aircraft. Principal sources are the permanent magnetism of various parts of the aircraft, induction effects created by the earth's magnetic field and eddy-current fields produced by the aircraft's manoeuvres. Neural networks can model these effects as functions of roll, pitch, heading and their time derivatives, together with vertical acceleration, charging currents to the generator, etc., without assuming an explicit physical model. Separation of interference effects from background regional and diurnal fields can also be achieved in a satisfactory way.
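
A minimal sketch of the compensation idea: regress the measured interference on the manoeuvre signals and their time derivatives, then subtract the network's prediction. The synthetic interference model and the network size are illustrative assumptions, and the separation from background regional and diurnal fields described in the abstract is omitted here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 3000)                     # 60 s of flight at 50 Hz
roll = 0.2 * np.sin(0.8 * t)
pitch = 0.1 * np.sin(0.5 * t + 1.0)
heading = 0.3 * np.sin(0.2 * t)
d_roll, d_pitch, d_head = (np.gradient(s, t) for s in (roll, pitch, heading))

# Synthetic interference: permanent-, induced- and eddy-current-like terms.
interference = 30 * roll + 20 * np.cos(heading) + 100 * d_pitch
measured = interference + 0.5 * rng.normal(size=t.size)  # background removed

X = np.column_stack([roll, pitch, heading, d_roll, d_pitch, d_head])
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
net.fit(X, measured)                             # learn manoeuvre effects

compensated = measured - net.predict(X)          # subtract the model
print("interference std before/after compensation:",
      measured.std().round(2), compensated.std().round(2))
```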


Journal of Physics A | 2003

Neuronal discrimination capacity

Yingchun Deng; Peter M. Williams; Feng Liu; Jianfeng Feng

We explore neuronal mechanisms of discriminating between masked signals. It is found that when the correlation between input signals is zero, the output signals are separable if and only if the input signals are separable. With positively (negatively) correlated signals, the output signals are separable (mixed) even when the input signals are mixed (separable). Exact values of discrimination capacity are obtained for the two most interesting cases: the exactly balanced inhibitory and excitatory input case and the uncorrelated input case. Interestingly, the discrimination capacity obtained in these cases is independent of model parameters and input distribution, and is universal. Our results also suggest a functional role of inhibitory inputs and correlated inputs or, more generally, of the large variability of efferent spike trains observed in in vivo experiments: the larger the variability of efferent spike trains, the easier it is to discriminate between masked input signals.


Neural Networks | 1999

Matrix logarithm parametrizations for neural network covariance models

Peter M. Williams

Neural networks are commonly used to model conditional probability distributions. The idea is to represent distributional parameters as functions of conditioning events, where the function is determined by the architecture and weights of the network. An issue to be resolved is the link between distributional parameters and network outputs. The latter are unconstrained real numbers whereas distributional parameters may be required to lie in proper subsets, or be mutually constrained, e.g. by the positive definiteness requirement for a covariance matrix. The paper explores the matrix-logarithm parametrization of covariance matrices for multivariate normal distributions. From a Bayesian point of view the choice of parametrization is linked to the choice of prior. This is treated by investigating the invariance of predictive distributions, for the chosen parametrization, with respect to an important class of priors.
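
A minimal sketch of the matrix-logarithm parametrization: pack unconstrained network outputs into a symmetric matrix S and set Sigma = expm(S), which is symmetric positive definite for any choice of outputs, so no explicit constraint is needed. The 3x3 size and the random stand-in for network outputs are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# n(n+1)/2 unconstrained outputs parametrize an n x n covariance matrix
# via the matrix exponential of a symmetric matrix.

def outputs_to_covariance(theta, n):
    S = np.zeros((n, n))
    iu = np.triu_indices(n)
    S[iu] = theta                        # fill the upper triangle
    S = S + S.T - np.diag(np.diag(S))    # symmetrize, keeping the diagonal
    return expm(S)                       # eigenvalues become exp(lambda) > 0

theta = np.random.default_rng(4).normal(size=6)  # stand-in for net outputs
Sigma = outputs_to_covariance(theta, 3)
print("symmetric:", np.allclose(Sigma, Sigma.T))
print("eigenvalues:", np.linalg.eigvalsh(Sigma))  # all strictly positive
```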


International Journal of Approximate Reasoning | 1990

An interpretation of Shenoy and Shafer's axioms for local computation

Peter M. Williams

It is shown that unrenormalized plausibility functions, interpreted as measures of the impact of contrary evidence, satisfy the axioms for local computation proposed by Shenoy and Shafer. It is argued that in cases where the underlying domain is generated as the union of singletons representing most specific descriptions, plausibility functions rather than belief functions are the natural measures of uncertainty. In those cases, decisions should be made on the basis of the ratios of plausibility values. The latter are unaffected by renormalization, which is superfluous.
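
The renormalization point can be checked directly: renormalizing a mass function rescales every plausibility value by the same constant, leaving plausibility ratios unchanged. A minimal sketch with an illustrative mass function:

```python
# Pl(A) is the total mass of focal sets intersecting A. Renormalization
# divides all masses on non-empty sets by 1 - m(empty), so every Pl value
# scales by the same factor and ratios of plausibilities are preserved.
# The frame and mass assignments below are illustrative.

frame = frozenset({"a", "b", "c"})

m = {frozenset(): 0.2,                 # mass on the empty set (conflict)
     frozenset({"a"}): 0.3,
     frozenset({"b", "c"}): 0.4,
     frame: 0.1}

def plausibility(m, A):
    return sum(v for B, v in m.items() if B & A)

def renormalize(m):
    k = 1.0 - m.get(frozenset(), 0.0)
    return {B: v / k for B, v in m.items() if B}

A, B = frozenset({"a"}), frozenset({"b"})
for mass in (m, renormalize(m)):
    pa, pb = plausibility(mass, A), plausibility(mass, B)
    print(f"Pl(a) = {pa:.3f}, Pl(b) = {pb:.3f}, ratio = {pa / pb:.3f}")
```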


Archive | 2001

Probabilistic Learning Models

Peter M. Williams

The purpose of this review is to provide a brief outline of some uses of Bayesian methods in artificial intelligence, specifically in the area of neural computation.


Energy | 1992

An application of Dempster-Shafer theory to the assessment of biogas technology

Aurora A. Kawahara; Peter M. Williams

We apply the Dempster-Shafer theory of belief functions to the assessment of biogas technology in rural areas of Brazil. Two case studies are discussed in detail and the results compared with a more conventional method of project appraisal. On the computational side, it is shown how local computation and dimensionality reduction, in cases where certain relations hold between variables, can increase efficiency.
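
For concreteness, here is a minimal sketch of Dempster's rule of combination, the basic operation behind such belief-function assessments. The frame and the two mass functions are illustrative, not taken from the case studies.

```python
from collections import defaultdict

# Dempster's rule: intersect focal sets, multiply masses, discard the
# conflict assigned to the empty set, and optionally renormalize.

def combine(m1, m2, renormalize=True):
    out = defaultdict(float)
    for A, v in m1.items():
        for B, w in m2.items():
            out[A & B] += v * w          # intersect focal sets
    conflict = out.pop(frozenset(), 0.0)
    if renormalize and conflict < 1.0:
        out = {A: v / (1.0 - conflict) for A, v in out.items()}
    return dict(out)

viable, not_viable = frozenset({"viable"}), frozenset({"not_viable"})
frame = viable | not_viable

expert1 = {viable: 0.6, frame: 0.4}      # partial support for viability
expert2 = {viable: 0.3, not_viable: 0.4, frame: 0.3}

print(combine(expert1, expert2))
```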

Collaboration


Peter M. Williams's top co-authors:

Si Wu (RIKEN Brain Science Institute)
Sheng Li (University of Sussex)
Feng Liu (University of Sussex)