Publications


Featured research published by Faming Liang.


Journal of the American Statistical Association | 2000

The Multiple-Try Method and Local Optimization in Metropolis Sampling

Jun S. Liu; Faming Liang; Wing Hung Wong

This article describes a new Metropolis-like transition rule, the multiple-try Metropolis, for Markov chain Monte Carlo (MCMC) simulations. By using this transition rule together with adaptive direction sampling, we propose a novel method for incorporating local optimization steps into an MCMC sampler in a continuous state space. Numerical studies show that the new method performs significantly better than the traditional Metropolis-Hastings (M-H) sampler. With minor tailoring, the multiple-try method can also be used to achieve the effect of a griddy Gibbs sampler without its grid approximations, and the effect of a hit-and-run algorithm without having to derive the required conditional distribution along a random direction.
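
The generalized acceptance ratio is the subtle part of the method. Below is a minimal sketch, not the authors' code, of one multiple-try Metropolis step, assuming a symmetric Gaussian proposal and the common choice lambda(x, y) = 1, so the selection weights reduce to the target density itself; the bimodal toy target is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy bimodal target: equal mixture of N(3, 1) and N(-3, 1).
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def mtm_step(x, k=5, sigma=2.0):
    """One multiple-try Metropolis step with a symmetric proposal.

    With a symmetric proposal T and lambda = 1, the selection weight
    w(y, x) reduces to pi(y), which is what is used below.
    """
    ys = x + sigma * rng.standard_normal(k)           # k trial proposals
    wy = np.exp(log_target(ys))
    y = rng.choice(ys, p=wy / wy.sum())               # select one trial
    xs = y + sigma * rng.standard_normal(k - 1)       # reference set
    wx = np.exp(log_target(np.append(xs, x)))         # current x included
    if rng.random() < min(1.0, wy.sum() / wx.sum()):  # generalized ratio
        return y
    return x

x, chain = 0.0, []
for _ in range(5000):
    x = mtm_step(x)
    chain.append(x)
print(np.mean(chain), np.std(chain))  # mean near 0 if both modes visited
```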


Journal of Chemical Physics | 2001

Evolutionary Monte Carlo for protein folding simulations

Faming Liang; Wing Hung Wong

We demonstrate that evolutionary Monte Carlo (EMC) can be applied successfully to simulations of protein folding on simple lattice models and to finding the ground state of a protein. In all cases, EMC is faster than the genetic algorithm and conventional Metropolis Monte Carlo, and in several cases it finds new lower-energy states. We also propose a method for using secondary structures in protein folding simulations. The numerical results show that it is drastically superior to the other methods in finding the ground state of a protein.
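
For concreteness, the sketch below, a hypothetical illustration rather than the paper's setup, evaluates the energy of a conformation in the 2-D HP lattice model, the kind of simple lattice model the abstract refers to: a conformation is a self-avoiding walk, and the energy is minus the number of non-bonded H-H contacts. The population-based EMC moves themselves are sketched after the next abstract.

```python
import numpy as np

# Directions on the 2-D square lattice, encoded 0..3.
MOVES = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}

def fold_energy(sequence, directions):
    """Energy of a 2-D HP lattice conformation.

    sequence   : string of 'H'/'P' residues
    directions : one lattice move per bond (len(sequence) - 1 entries)
    Returns +inf for self-intersecting chains, otherwise minus the
    number of non-bonded H-H contacts (lower is better).
    """
    coords = [(0, 0)]
    for d in directions:
        dx, dy = MOVES[d]
        x, y = coords[-1]
        coords.append((x + dx, y + dy))
    if len(set(coords)) < len(coords):        # self-avoidance violated
        return float("inf")
    pos = dict(zip(coords, range(len(coords))))
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for dx, dy in MOVES.values():
            j = pos.get((x + dx, y + dy))
            if j is not None and j > i + 1 and sequence[j] == 'H':
                energy -= 1                   # non-bonded H-H contact
    return energy

# A rectangular fold of HHPPHH with two H-H contacts: prints -2.
print(fold_energy("HHPPHH", [0, 0, 2, 1, 1]))
```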


Journal of the American Statistical Association | 2001

Real-Parameter Evolutionary Monte Carlo With Applications to Bayesian Mixture Models

Faming Liang; Wing Hung Wong

We propose an evolutionary Monte Carlo algorithm to sample from a target distribution with real-valued parameters. The attractive features of the algorithm include the ability to learn from the samples obtained in previous steps and the ability to improve the mixing of a system by sampling along a temperature ladder. The effectiveness of the algorithm is examined through three multimodal examples and Bayesian neural networks. The numerical results confirm that the real-coded evolutionary algorithm is a promising general approach for simulation and optimization.
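
A minimal sketch of the temperature-ladder machinery on a toy Gaussian-mixture target: each member of the population evolves by a Metropolis mutation at its own temperature, and exchange moves swap states between adjacent temperature levels. The paper's real-valued crossover operator, through which chains learn from samples in the rest of the population, is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Toy multimodal target: two well-separated Gaussian modes.
    return np.logaddexp(-0.5 * np.sum((x - 4.0) ** 2),
                        -0.5 * np.sum((x + 4.0) ** 2))

# Temperature ladder: temps[0] = 1 is the distribution of interest.
temps = np.array([1.0, 2.0, 4.0, 8.0])
pop = [rng.standard_normal(2) for _ in temps]   # one chain per level

def mutation(i, sigma=0.5):
    """Metropolis move for chain i at its own temperature."""
    x = pop[i]
    y = x + sigma * rng.standard_normal(x.shape)
    if np.log(rng.random()) < (log_target(y) - log_target(x)) / temps[i]:
        pop[i] = y

def exchange():
    """Propose swapping the states of two adjacent temperature levels."""
    i = rng.integers(len(temps) - 1)
    delta = ((log_target(pop[i]) - log_target(pop[i + 1]))
             * (1.0 / temps[i + 1] - 1.0 / temps[i]))
    if np.log(rng.random()) < delta:
        pop[i], pop[i + 1] = pop[i + 1], pop[i]

samples = []
for step in range(20000):
    for i in range(len(temps)):
        mutation(i)
    exchange()
    samples.append(pop[0].copy())
# With mixing across both modes the cold-chain mean should be near 0.
print(np.mean(samples, axis=0))
```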


Journal of the American Statistical Association | 2007

Stochastic Approximation in Monte Carlo Computation

Faming Liang; Chuanhai Liu; Raymond J. Carroll

The Wang–Landau (WL) algorithm is an adaptive Markov chain Monte Carlo algorithm used to calculate the spectral density for a physical system. A remarkable feature of the WL algorithm is that it is not trapped by local energy minima, which is very important for systems with rugged energy landscapes. This feature has led to many successful applications of the algorithm in statistical physics and biophysics; however, no rigorous theory exists to support its convergence, and the estimates produced by the algorithm can reach only a limited statistical accuracy. In this article we propose the stochastic approximation Monte Carlo (SAMC) algorithm, which overcomes the shortcomings of the WL algorithm. We establish a theorem concerning its convergence. The estimates produced by SAMC can be improved continuously as the simulation proceeds. SAMC also extends applications of the WL algorithm to continuum systems. The potential uses of SAMC in statistics are discussed through two classes of applications, importance sampling and model selection. The results show that SAMC can work as a general importance sampling algorithm and a model selection sampler when the model space is complex.
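
A minimal one-dimensional sketch of the SAMC recursion, with an illustrative energy function and a uniform desired sampling distribution over the energy subregions; the gain factor gamma_t = t0 / max(t0, t) decays as the convergence theory requires.

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(x):
    # Toy rugged 1-D energy with several local minima.
    return 0.1 * x ** 2 + np.sin(3.0 * x) ** 2

bins = np.linspace(0.0, 4.0, 21)          # energy partition E_1..E_m
m = len(bins)                             # last bin catches the overflow
theta = np.zeros(m)                       # log weights, one per subregion
pi = np.full(m, 1.0 / m)                  # desired visiting frequencies
t0 = 1000.0

def region(e):
    return min(np.searchsorted(bins, e), m - 1)

x = 0.0
e_x = energy(x)
J = region(e_x)
for t in range(1, 200001):
    y = x + 0.5 * rng.standard_normal()
    e_y = energy(y)
    Jy = region(e_y)
    # MH ratio for the weighted density psi(x) * exp(-theta_J(x)),
    # with psi(x) = exp(-energy(x)).
    log_r = (e_x - e_y) + (theta[J] - theta[Jy])
    if np.log(rng.random()) < log_r:
        x, e_x, J = y, e_y, Jy
    gamma = t0 / max(t0, t)               # decreasing gain factor
    theta += gamma * ((np.arange(m) == J) - pi)

# theta now estimates the log integral of psi over each subregion
# (up to an additive constant), the analogue of the density of states.
print(theta[:5] - theta[0])
```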


Journal of Statistical Computation and Simulation | 2010

A double Metropolis–Hastings sampler for spatial models with intractable normalizing constants

Faming Liang

The problem of simulating from distributions with intractable normalizing constants has received much attention in recent literature. In this article, we propose an asymptotic algorithm, the so-called double Metropolis–Hastings (MH) sampler, for tackling this problem. Unlike other auxiliary variable algorithms, the double MH sampler removes the need for exact sampling, the auxiliary variables being generated using MH kernels, and thus can be applied to a wide range of problems for which exact sampling is not available. For the problems for which exact sampling is available, it can typically produce the same accurate results as the exchange algorithm, but using much less CPU time. The new method is illustrated by various spatial models.
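
A sketch of the double MH update for a tiny Ising model with a flat prior on the inverse temperature beta, all sizes and tuning constants illustrative: the auxiliary lattice y is generated by a short MH run started at the observed data x, instead of the exact sampling the exchange algorithm would require, and the intractable normalizing constants cancel in the acceptance ratio.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 8                                      # small Ising lattice

def neighbours_sum(s):
    return (np.roll(s, 1, 0) + np.roll(s, -1, 0)
            + np.roll(s, 1, 1) + np.roll(s, -1, 1))

def suff_stat(s):
    # Sum of nearest-neighbour products (each pair counted once).
    return np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

def mh_sweeps(s, beta, n_sweeps=5):
    """Inner MH kernel: single-site Metropolis updates at parameter beta."""
    s = s.copy()
    for _ in range(n_sweeps):
        for _ in range(s.size):
            i, j = rng.integers(L), rng.integers(L)
            d_stat = -2 * s[i, j] * neighbours_sum(s)[i, j]
            if np.log(rng.random()) < beta * d_stat:
                s[i, j] *= -1
    return s

# "Observed" data: one lattice simulated at the true beta (illustrative).
beta_true = 0.3
x = mh_sweeps(np.where(rng.random((L, L)) < 0.5, -1, 1), beta_true, 200)

beta, chain = 0.1, []
for _ in range(1000):
    beta_new = abs(beta + 0.05 * rng.standard_normal())  # reflect at 0
    y = mh_sweeps(x, beta_new)            # auxiliary lattice, started at x
    # Double MH ratio: the intractable Z(beta) terms cancel.
    log_r = ((beta_new - beta) * suff_stat(x)
             + (beta - beta_new) * suff_stat(y))
    if np.log(rng.random()) < log_r:
        beta = beta_new
    chain.append(beta)
# Posterior mean of beta; with one small lattice the posterior is wide.
print(np.mean(chain[250:]))
```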


Statistics and Computing | 2005

Bayesian neural networks for nonlinear time series forecasting

Faming Liang

In this article, we apply Bayesian neural networks (BNNs) to time series analysis and propose a Monte Carlo algorithm for BNN training. In addition, we go a step further in BNN model selection by putting a prior on network connections instead of on hidden units, as done by other authors. This allows us to treat the selection of hidden units and the selection of input variables uniformly. The BNN model is compared to a number of competitors, such as the Box-Jenkins model, the bilinear model, the threshold autoregressive model, and the traditional neural network model, on a number of popular and challenging data sets. Numerical results show that the BNN model achieves a consistent improvement over the competitors in forecasting future values. Insights into how to improve the generalization ability of BNNs are revealed in many aspects of our implementation, such as the selection of input variables, the specification of prior distributions, and the treatment of outliers.
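
A minimal sketch of the BNN idea on a synthetic nonlinear autoregressive series, assuming a single hidden layer, a Gaussian prior on all weights, and plain random-walk Metropolis in place of the paper's training algorithm; the connection-level selection prior is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic nonlinear AR(2) series as a stand-in for a real data set.
y = np.zeros(300)
for t in range(2, 300):
    y[t] = (0.5 * y[t - 1] - 0.3 * y[t - 2]
            + np.tanh(y[t - 1]) + 0.1 * rng.standard_normal())
X = np.column_stack([y[1:-1], y[:-2]])   # lagged inputs (y_{t-1}, y_{t-2})
T = y[2:]                                # one-step-ahead targets

H = 3                                    # hidden units

def predict(w, X):
    # Unpack a flat weight vector into a 2-H-1 network and forward-pass.
    W1 = w[:2 * H].reshape(H, 2)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[4 * H]
    return np.tanh(X @ W1.T + b1) @ W2 + b2

def log_post(w, sigma=0.1, tau=1.0):
    # Gaussian likelihood times a Gaussian prior on every weight.
    resid = T - predict(w, X)
    return (-0.5 * np.sum(resid ** 2) / sigma ** 2
            - 0.5 * np.sum(w ** 2) / tau ** 2)

w = 0.1 * rng.standard_normal(4 * H + 1)
lp = log_post(w)
preds = []
for i in range(20000):
    w_new = w + 0.01 * rng.standard_normal(w.size)
    lp_new = log_post(w_new)
    if np.log(rng.random()) < lp_new - lp:
        w, lp = w_new, lp_new
    if i > 10000 and i % 10 == 0:        # posterior predictive draws
        preds.append(float(predict(w, X[-1:])))
print(np.mean(preds), np.std(preds))     # forecast mean and uncertainty
```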


Water Resources Research | 2009

Estimating uncertainty of streamflow simulation using Bayesian neural networks

Xuesong Zhang; Faming Liang; Raghavan Srinivasan; Michael Van Liew

Recent studies have shown that Bayesian neural networks (BNNs) are powerful tools for providing reliable hydrologic prediction and for quantifying prediction uncertainty. A reasonable estimate of prediction uncertainty, a valuable aid to decision making in water resources management and design problems, is influenced by the techniques used to handle the different uncertainty sources. In this study, four types of BNNs with different treatments of the uncertainties related to parameters (the neural network's weights) and model structure were applied to uncertainty estimation of streamflow simulation in two U.S. Department of Agriculture Agricultural Research Service watersheds (Little River Experimental Watershed in Georgia and Reynolds Creek Experimental Watershed in Idaho). An advanced Markov chain Monte Carlo algorithm, evolutionary Monte Carlo, was used to train the BNNs and to estimate the uncertainty limits of the streamflow simulation. The results obtained in these two case-study watersheds show that the 95% uncertainty limits estimated by the different types of BNNs differ from one another. The BNNs that consider only parameter uncertainty with noninformative prior knowledge contain the smallest number of observed streamflow values within their 95% uncertainty bounds. By considering variable model structure and informative prior knowledge, the BNNs can provide a more reasonable quantification of the uncertainty of streamflow simulation. This study stresses the need for better understanding and quantification of the different uncertainty sources for effective estimation of the uncertainty of hydrologic simulation using BNNs.
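
The 95% uncertainty limits and the coverage criterion used to compare the BNN variants can be computed directly from posterior-predictive draws. A short sketch with synthetic stand-in numbers, no real streamflow data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical posterior-predictive draws: n_draws simulations of a
# streamflow series at n_days time points (stand-ins for BNN output).
n_draws, n_days = 2000, 100
draws = np.exp(rng.standard_normal((n_draws, n_days)) * 0.3
               + np.log(10 + 5 * np.sin(np.linspace(0, 6, n_days))))
observed = draws[0]                       # pretend one draw is the truth

lower = np.percentile(draws, 2.5, axis=0)   # 95% uncertainty limits
upper = np.percentile(draws, 97.5, axis=0)
coverage = np.mean((observed >= lower) & (observed <= upper))
print(f"fraction of observations inside the 95% band: {coverage:.2f}")
```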


Journal of Chemical Physics | 2004

Annealing contour Monte Carlo algorithm for structure optimization in an off-lattice protein model.

Faming Liang

We present a space-annealing version of the contour Monte Carlo algorithm and show that it can be applied successfully to finding the ground states of an off-lattice protein model. Comparisons show that the algorithm yields a significant improvement over the pruned-enriched Rosenbluth method and the Metropolis Monte Carlo method in finding the ground states of AB models. For all sequences, the algorithm improved the putative ground-state energy values in the two-dimensional AB model and established the putative ground-state energy values in the three-dimensional AB model.
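
A loose sketch of the space-annealing idea on a toy two-dimensional energy surface, with the AB-model energy and move set not reproduced here: an energy-binned (contour) sampler is pushed toward rarely visited energy levels while the admissible energy ceiling is gradually lowered toward the best value found, concentrating the search near the ground state.

```python
import numpy as np

rng = np.random.default_rng(6)

def energy(x):
    # Toy rugged 2-D energy surface standing in for the AB model.
    return np.sum(x ** 2) / 10 + np.sum(np.sin(3.0 * x) ** 2)

edges = np.linspace(0.0, 6.0, 31)         # energy contours (bins)
m = len(edges)
theta = np.zeros(m)                       # log visiting weights per bin

def region(e):
    return min(np.searchsorted(edges, e), m - 1)

x = rng.standard_normal(2)
e_x = energy(x)
best = e_x
u = edges[-1]                             # annealed energy ceiling
for t in range(1, 100001):
    y = x + 0.3 * rng.standard_normal(2)
    e_y = energy(y)
    if e_y <= u:                          # sample space restricted to U<=u
        if np.log(rng.random()) < theta[region(e_x)] - theta[region(e_y)]:
            x, e_x = y, e_y
    theta[region(e_x)] += 1000.0 / max(1000.0, t)  # penalize visited bin
    if e_x < best:
        best = e_x
        u = min(u, best + 2.0)            # lower the ceiling toward best
print("best energy found:", best)         # global minimum is 0 at origin
```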


Journal of the American Statistical Association | 2005

A Generalized Wang-Landau Algorithm for Monte Carlo Computation

Faming Liang

Inference for a complex system with a rough energy landscape is a central topic in Monte Carlo computation. Motivated by the successes of the Wang–Landau algorithm in discrete systems, we generalize the algorithm to continuous systems. The generalized algorithm has some features that conventional Monte Carlo algorithms do not have. First, it provides a new method for Monte Carlo integration based on stochastic approximation; second, it is an excellent tool for Monte Carlo optimization. In an appropriate setting, the algorithm can lead to a random walk in the energy space, and thus it can sample relevant parts of the sample space, even in the presence of many local energy minima. The generalized algorithm can be conveniently used in many problems of Monte Carlo integration and optimization, for example, normalizing constant estimation, model selection, highest posterior density interval construction, and function optimization. Our numerical results show that the algorithm outperforms simulated annealing and parallel tempering in optimization for the system with a rough energy landscape. Some theoretical results on the convergence of the algorithm are provided.
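
For contrast with SAMC's deterministic gain sequence above, the sketch below implements the textbook Wang-Landau recursion that this paper generalizes to continuous systems, on a toy one-dimensional energy: the modification factor is halved whenever the visiting histogram over energy bins is roughly flat. Bin edges, flatness threshold, and energy function are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def energy(x):
    # Toy 1-D continuous energy with many local minima.
    return 0.5 * x ** 2 + np.sin(5.0 * x) ** 2

edges = np.linspace(0.0, 8.0, 41)          # energy bins ("contours")
m = len(edges)
log_g = np.zeros(m)                        # log density-of-states estimate
hist = np.zeros(m)
log_f = 1.0                                # modification factor, shrinks

def region(e):
    return min(np.searchsorted(edges, e), m - 1)

x, e_x = 0.0, energy(0.0)
for _ in range(500_000):
    if log_f < 1e-3:
        break                              # modification factor small enough
    y = x + 0.5 * rng.standard_normal()
    e_y = energy(y)
    # Flat-histogram rule: accept with probability min(1, g(E_x)/g(E_y)).
    if np.log(rng.random()) < log_g[region(e_x)] - log_g[region(e_y)]:
        x, e_x = y, e_y
    i = region(e_x)
    log_g[i] += log_f                      # penalize the visited bin
    hist[i] += 1
    seen = hist[hist > 0]
    if len(seen) > 5 and seen.min() > 0.8 * seen.mean():
        log_f /= 2.0                       # histogram flat: refine estimate
        hist[:] = 0
print(log_g[:5] - log_g[0])                # relative log density of states
```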


Journal of the American Statistical Association | 2002

Dynamically Weighted Importance Sampling in Monte Carlo Computation

Faming Liang

This article describes a new Monte Carlo algorithm, dynamically weighted importance sampling (DWIS), for simulation and optimization. In DWIS, the state of the Markov chain is augmented to a population. At each iteration, the population is subject to two move steps, dynamic weighting and population control. These steps ensure that DWIS can move across energy barriers like dynamic weighting, but with the weights well controlled and with a finite expectation. The estimates can converge much faster than they can with dynamic weighting. A generalized theory for importance sampling is introduced to justify the new algorithm. Numerical examples are given to show that dynamically weighted importance sampling can perform significantly better than the Metropolis–Hastings algorithm and dynamic weighting in some situations.
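
A sketch of the R-type dynamic weighting move that DWIS builds on, written in log-weight form for numerical stability and with the population-control step omitted: a move can be accepted even when the MH ratio is small, with the importance weight absorbing the bias so that weighted averages remain consistent. The heavy-tailed weights this produces are exactly what the population-control step is there to tame.

```python
import numpy as np

rng = np.random.default_rng(8)

def log_target(x):
    # Two well-separated modes: hard for a plain Metropolis-Hastings walk.
    return np.logaddexp(-0.5 * (x - 6.0) ** 2, -0.5 * (x + 6.0) ** 2)

def r_move(x, logw, theta=1.0, sigma=1.0):
    """One R-type dynamic weighting move (population control omitted)."""
    y = x + sigma * rng.standard_normal()
    log_wr = logw + log_target(y) - log_target(x)   # log(w * r)
    log_d = np.logaddexp(log_wr, np.log(theta))     # log(w * r + theta)
    if np.log(rng.random()) < log_wr - log_d:       # accept w.p. wr/(wr+th)
        return y, log_d                             # new weight w*r + theta
    return x, logw + log_d - np.log(theta)          # w * (w*r + theta)/theta

xs, lws = [], []
x, logw = 0.0, 0.0
for _ in range(100_000):
    x, logw = r_move(x, logw)
    xs.append(x)
    lws.append(logw)
xs, lws = np.array(xs), np.array(lws)
wts = np.exp(lws - lws.max())              # normalize to avoid overflow
print(np.sum(wts * xs) / np.sum(wts))      # weighted mean; ~0 by symmetry
```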

Collaboration


Dive into Faming Liang's collaborations.

Top Co-Authors

Guanghua Xiao
University of Texas Southwestern Medical Center

Kai Yu
National Institutes of Health

Ick Hoon Jin
University of Texas MD Anderson Cancer Center