Publication


Featured research published by Jin Wang.


Winter Simulation Conference | 2001

Generating daily changes in market variables using a multivariate mixture of normal distributions

Jin Wang

The mixture of normal distributions provides a useful extension of the normal distribution for modeling daily changes in market variables with fatter-than-normal tails and skewness. An efficient analytical Monte Carlo method is proposed for generating daily changes using a multivariate mixture of normal distributions with an arbitrary covariance matrix. The main idea of the method is to transform (linearly) a multivariate normal with an input covariance matrix into the desired multivariate mixture of normal distributions; this input covariance matrix can be derived analytically. Any linear combination of mixtures of normal distributions can be shown to be a mixture of normal distributions.
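
As a sketch of what sampling from such a mixture involves, the snippet below draws from a two-component bivariate mixture by the standard component-indicator approach (pick a component, then sample that normal), not the paper's linear-transformation method; all weights, means, and covariances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-component bivariate mixture: weights, means, covariances.
weights = np.array([0.7, 0.3])
means = [np.array([0.0, 0.0]), np.array([0.0, 0.0])]
covs = [np.array([[1.0, 0.3], [0.3, 1.0]]),
        np.array([[4.0, 1.0], [1.0, 4.0]])]  # fatter-tailed second component

def sample_mixture(n):
    """Draw n vectors: pick a component, then sample that multivariate normal."""
    comp = rng.choice(len(weights), size=n, p=weights)
    out = np.empty((n, 2))
    for k in range(len(weights)):
        idx = comp == k
        out[idx] = rng.multivariate_normal(means[k], covs[k], size=idx.sum())
    return out

x = sample_mixture(100_000)
# Since both component means coincide here, the mixture covariance is simply
# the weighted sum of the component covariances: 0.7*cov1 + 0.3*cov2.
print(np.cov(x.T))
```

The sample covariance matching the weighted sum of component covariances illustrates why the required input covariance can be derived analytically.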


IIE Transactions | 2001

Biased control-variate estimation

Bruce W. Schmeiser; Michael R. Taaffe; Jin Wang

We study Biased Control Variates (BCVs), whose purpose is to improve the efficiency of stochastic simulation experiments. BCVs replace the control-simulation mean with an approximation; the resulting control-variate estimator is biased. This bias may not be a significant issue for finite sample sizes, however, because our estimator minimizes the more general mean-squared-error (mse), i.e., the sum of the estimator variance plus the bias squared. After discussing an example, we review BCVs, including the mse optimal control-variate weight and associated mse performance. We then consider the relationships among bias, induced correlation, relative mse reduction, computing effort and generalized mse (gmse), assuming the use of the mse-optimal control weight, both for cases with and without bias. We define and study two estimators for the optimal control-variate weight: the Natural Estimator and the Classical Estimator. The Classical Estimator, which simply ignores the bias, can yield substantial mse reduction when the error in the approximation is small compared to the sampling error of the control-simulation.
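
As a toy illustration of a BCV with the Classical Estimator (which simply ignores the bias), the sketch below estimates E[exp(X)] for X ~ N(0, 1) using X itself as the control, but supplies a deliberately inexact value for the control mean; the problem and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: estimate E[exp(X)], X ~ N(0, 1) (true value e^0.5 ≈ 1.6487),
# with X as the control. E[X] = 0 exactly, but we feed the estimator a
# slightly wrong approximation to mimic a biased control mean.
n = 50_000
x = rng.standard_normal(n)
y = np.exp(x)

approx_control_mean = 0.005   # deliberately inexact approximation of E[X] = 0

# Classical control-variate weight, estimated from the data (ignores the bias).
beta = np.cov(y, x)[0, 1] / np.var(x)

naive = y.mean()
bcv = y.mean() - beta * (x.mean() - approx_control_mean)
print(naive, bcv)
```

Here the induced bias (roughly beta times the approximation error) is comparable to the post-control sampling error, the regime where the mse trade-off discussed above matters.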


IIE Transactions | 2012

Control-variate estimation using estimated control means

Raghu Pasupathy; Bruce W. Schmeiser; Michael R. Taaffe; Jin Wang

This article studies control-variate estimation where the control mean itself is estimated. Control-variate estimation in simulation experiments can significantly increase sampling efficiency and has traditionally been restricted to cases where the control has a known mean. In a previous paper the current authors generalized the idea of control variate estimation to the case where the control mean is only approximated. The result is a biased but possibly useful estimator. For that case, a mean square error optimal estimator was provided and its properties were discussed. This article generalizes classical control variate estimation to the case of Control Variates using Estimated Means (CVEMs). CVEMs replace the control mean with an estimated value for the control mean obtained from a prior simulation experiment. Although the resulting control-variate estimator is unbiased, it does introduce additional sampling error and so its properties are not the same as those of the standard control-variate estimator. A CVEM estimator is developed that minimizes the overall estimator variance. Both biased control variates and CVEMs can be used to improve the efficiency of stochastic simulation experiments. Their main appeal is that the restriction of having to know (deterministically) the exact value of the control mean is eliminated; thus, the space of possible controls is greatly increased.
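
A minimal CVEM-style sketch on the same kind of invented toy problem (estimate E[exp(X)], X ~ N(0, 1), control X): the control mean is itself estimated in a separate, earlier simulation rather than known exactly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Prior experiment: estimate the control mean E[X] (= 0) from its own sample.
m = 200_000
control_mean_hat = rng.standard_normal(m).mean()

# Main experiment: standard control-variate form, but with the estimated mean.
n = 50_000
x = rng.standard_normal(n)
y = np.exp(x)

beta = np.cov(y, x)[0, 1] / np.var(x)
cvem = y.mean() - beta * (x.mean() - control_mean_hat)
# The estimated control mean adds sampling error of its own, shrinking as the
# prior-experiment budget m grows.
print(cvem)
```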


Operations Research Letters | 1997

Approximation-assisted point estimation

Barry L. Nelson; Bruce W. Schmeiser; Michael R. Taaffe; Jin Wang

We investigate three alternatives for combining a deterministic approximation with a stochastic simulation estimator: (1) binary choice, (2) linear combination, and (3) Bayesian analysis. Making a binary choice, based on compatibility of the simulation estimator with the approximation, provides at best a 20% improvement in simulation efficiency. More effective is taking a linear combination of the approximation and the simulation estimator using weights estimated from the simulation data, which provides at best a 50% improvement in simulation efficiency. The Bayesian analysis yields a linear combination with weights that are a function of the simulation data and the prior distribution on the approximation error; the efficiency depends upon the quality of the prior distribution.
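
Alternative (2), the linear combination, can be sketched as follows on an invented toy problem; the weight formula used here (squared discrepancy over squared discrepancy plus estimator variance) is one simple data-driven choice, not necessarily the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: theta = E[Y] is what we want; 'a' is a deterministic
# approximation of theta that is 5% off (both values are illustrative).
theta = 1.0
a = 1.05

n = 400
y = theta + rng.standard_normal(n)         # simulation outputs, E[Y] = theta
ybar, s2 = y.mean(), y.var(ddof=1) / n     # point estimate and its variance

# Estimated weight on the simulation: larger when the approximation looks
# incompatible with the data ((ybar - a)^2 large relative to s2).
w = (ybar - a) ** 2 / ((ybar - a) ** 2 + s2)
combined = w * ybar + (1 - w) * a
print(combined)
```

The Bayesian alternative (3) replaces this plug-in weight with one derived from a prior distribution on the approximation error.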


Winter Simulation Conference | 2006

Generating multivariate mixture of normal distributions using a modified Cholesky decomposition

Jin Wang; Chunlei Liu

A mixture of normals is a more general and flexible distribution for modeling daily changes in market variables with fat tails and skewness. An efficient analytical Monte Carlo method was proposed by Wang and Taaffe for generating daily changes using a multivariate mixture of normal distributions with an arbitrary covariance matrix. However, the usual Cholesky decomposition fails if the covariance matrix is not positive definite. In practice, the covariance matrix is unknown and has to be estimated, and the estimated covariance matrix may not be positive definite. We propose a modified Cholesky decomposition for semi-definite matrices and also suggest an optimal semi-definite approximation for indefinite matrices.
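
The difficulty can be sketched with an eigenvalue-based factorization that handles semi-definite matrices and clips negative eigenvalues of indefinite ones; this is a simple stand-in for the paper's modified Cholesky decomposition and optimal approximation, not its exact algorithm.

```python
import numpy as np

def psd_factor(cov):
    """Return L with L @ L.T ≈ cov for a symmetric positive *semi*definite
    matrix; negative eigenvalues of an indefinite input are clipped to zero
    first (one crude semidefinite approximation, not the paper's optimal one)."""
    w, v = np.linalg.eigh((cov + cov.T) / 2)   # symmetrize, then eigendecompose
    w = np.clip(w, 0.0, None)                  # drop negative eigenvalues
    return v * np.sqrt(w)                      # L = V diag(sqrt(w))

# A rank-deficient (singular, PSD) covariance, where np.linalg.cholesky
# would raise LinAlgError.
cov = np.array([[1.0, 1.0], [1.0, 1.0]])
L = psd_factor(cov)
print(np.allclose(L @ L.T, cov))
```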


Winter Simulation Conference | 1997

Weighted jackknife-after-bootstrap: a heuristic approach

Jin Wang; J.S. Rao; Jun Shao

We investigate the problem of deriving precision estimates for bootstrap quantities. The one major stipulation is that no further bootstrapping will be allowed. In 1992, Efron derived the method of jackknife-after-bootstrap (JAB) and showed how this problem can potentially be solved. However, the applicability of JAB was questioned in situations where the number of bootstrap samples was not large. The JAB estimates were inflated and performed poorly. We provide a simple correction to the JAB method using a weighted form where the weights are derived from the original bootstrap samples. Our Monte Carlo experiments show that the weighted jackknife-after-bootstrap (WJAB) performs very well.
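
For context, the plain (unweighted) JAB can be sketched as below for the standard error of a bootstrap standard-error estimate; the paper's weighted correction (WJAB) is not reproduced here, and the data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

x = rng.standard_normal(30)
n, B = len(x), 2000
idx = rng.integers(0, n, size=(B, n))      # bootstrap resample indices
boot_means = x[idx].mean(axis=1)
se_boot = boot_means.std(ddof=1)           # bootstrap SE of the sample mean

# JAB: for each i, recompute the SE using only the resamples that happen to
# omit x[i] (no new bootstrapping needed), then apply the jackknife formula.
jab = np.empty(n)
for i in range(n):
    keep = ~(idx == i).any(axis=1)
    jab[i] = boot_means[keep].std(ddof=1)
se_of_se = np.sqrt((n - 1) / n * ((jab - jab.mean()) ** 2).sum())
print(se_boot, se_of_se)
```

The inflation problem arises because only about B/e resamples omit any given observation, so each delete-one SE is computed from a much smaller effective sample.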


INFORMS Journal on Computing | 2015

Multivariate Mixtures of Normal Distributions: Properties, Random Vector Generation, Fitting, and as Models of Market Daily Changes

Jin Wang; Michael R. Taaffe

Mixtures of normal distributions provide a useful modeling extension of the normal distribution—both univariate and multivariate. Unlike the normal distribution, mixtures of normals can capture the kurtosis (fat tails) and nonzero skewness often necessary for accurately modeling a variety of real-world variables. An efficient analytical Monte Carlo method is proposed for considering multivariate mixtures of normal distributions having arbitrary covariance matrices. The method consists of a linear transformation of a multivariate normal having a computed covariance matrix into the desired multivariate mixture of normal distributions. The computed covariance matrix is derived analytically. Among the properties of the multivariate mixture of normals that we demonstrate is that any linear combination of mixtures of normal distributions is also a mixture of normal distributions. Methods of fitting mixtures of normal distributions are briefly discussed. A motivating example carried throughout this paper is the use of multivariate mixtures of normals for modeling daily changes in market variables.
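
On the fitting side, a bare-bones EM fit of a two-component univariate normal mixture looks like the following; the data and initial values are illustrative, and this is the standard EM recipe rather than the paper's specific fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data from a known two-component mixture: 0.7*N(0,1) + 0.3*N(3,4).
x = np.concatenate([rng.normal(0, 1, 7000), rng.normal(3, 2, 3000)])

w, mu, sd = np.array([0.5, 0.5]), np.array([0.0, 2.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior component responsibilities for each observation.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted moment updates.
    w = r.mean(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))
print(w, mu, sd)
```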


Winter Simulation Conference | 1993

Monte Carlo estimation of Bayesian robustness

Jin Wang; Bruce W. Schmeiser

Bayesian estimation procedures often require Monte Carlo integration with respect to the posterior distribution. We propose a Monte Carlo estimator of an arbitrary posterior-distribution property, as well as its gradient with respect to prior-distribution hyperparameters and to the observed data. Unlike most Monte Carlo samplers for Bayesian problems, we sample from the prior distribution, which is usually more tractable than the posterior distribution. We discuss sufficient conditions for interchanging expected value and differentiation, so that the gradient can be estimated by averaging observations of the stochastic gradient. In addition to the gradient estimator, we suggest asymptotically valid standard error and confidence-interval estimators. We give two numerical examples.
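
The prior-sampling idea can be sketched as a self-normalized, likelihood-weighted average over draws from the prior; the conjugate normal-normal toy model below is chosen only so the answer can be checked in closed form (gradient estimation is not shown).

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: observations assumed N(theta, 1), prior theta ~ N(0, 1).
data = np.array([1.2, 0.8, 1.5, 1.1])
theta = rng.normal(0.0, 1.0, 200_000)       # sample from the PRIOR, not the posterior

# Weight each prior draw by the likelihood of the observed data at that draw.
loglik = -0.5 * ((data[:, None] - theta) ** 2).sum(axis=0)
w = np.exp(loglik - loglik.max())           # shift for numerical stability
post_mean = (w * theta).sum() / w.sum()     # self-normalized posterior-mean estimate

# Conjugate check: the posterior here is N(n*xbar/(n+1), 1/(n+1)).
print(post_mean, data.sum() / (len(data) + 1))
```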


Winter Simulation Conference | 1995

On the performance of pure adaptive search

Bruce W. Schmeiser; Jin Wang

We study pure adaptive search (PAS), an iterative optimization algorithm whose next solution is chosen uniformly from the set of feasible solutions that are no worse than the current solution. We extend the results of Patel, Smith and Zabinsky (1988) and Zabinsky and Smith (1992). In particular, we (1) show that PAS converges to the optimal solution almost surely, (2) show that each PAS iteration reduces the expected remaining feasible-region volume by 50%, and (3) improve the Patel, Smith and Zabinsky complexity measure for convex problems.
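
PAS is easy to sketch in one dimension: minimizing f(x) = x on [0, 1], the improving region after reaching x is [0, x), so each iteration halves the expected remaining volume (the example problem is mine, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(8)

def pas(iters):
    """Pure adaptive search for min f(x)=x on [0,1]: each new iterate is
    uniform over the strictly-better region [0, current)."""
    x = rng.uniform(0.0, 1.0)
    for _ in range(iters):
        x = rng.uniform(0.0, x)
    return x

# After 20 iterations the remaining volume is a product of 21 uniforms,
# so its expected value is (1/2)**21 — the 50% reduction per iteration.
vols = np.array([pas(20) for _ in range(5000)])
print(vols.mean())
```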


Winter Simulation Conference | 2014

Removing the inherent paradox of the Buffon's needle Monte Carlo simulation using the fixed-point iteration method

Maximilian J. Wang; Jin Wang

In teaching simulation, Buffon's needle is a popular experiment for designing a Monte Carlo simulation to approximate the number π. Simulating Buffon's needle experiment is a perfect example for demonstrating the beauty of Monte Carlo simulation in a classroom. However, there is a common misconception concerning the Buffon's needle simulation: strictly speaking, the needle-drop simulation cannot be used to evaluate π, because the needle's angle must be simulated from a uniform (0, π/2) distribution. The method is therefore self-referential in theory, since it requires the number π as an input value in order to approximate π. In this study, we propose a new method using fixed-point iteration to remove this inherent paradox of the Buffon's needle simulation. A new algorithm with a Python implementation is proposed. The simulation outputs indicate that our new method is as good as using the true value of π as an input.
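
A hedged sketch of the fixed-point idea (with needle length equal to line spacing, so the crossing probability is 2/π): each iteration simulates needle drops using the *current* π estimate to draw the angle, then updates the estimate from the observed crossing rate. The update form and drop counts here are illustrative, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(9)

def buffon_step(pi_k, drops=200_000):
    """One fixed-point iteration: drop needles using the current pi estimate
    for the angle distribution, return the updated estimate."""
    theta = rng.uniform(0.0, pi_k / 2, drops)   # angle uses current estimate
    x = rng.uniform(0.0, 0.5, drops)            # center-to-line distance (spacing d = 1)
    p_hat = (x <= 0.5 * np.sin(theta)).mean()   # observed crossing frequency
    return 2.0 / p_hat                          # next pi estimate (since P = 2/pi)

pi_est = 3.0                                    # any rough starting guess
for _ in range(8):
    pi_est = buffon_step(pi_est)
print(pi_est)
```

The noise-free iteration map g(t) = t / (1 - cos(t/2)) has fixed point t = π with |g'(π)| = |1 - π/2| < 1, which is why the oscillating iterates settle on π.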

Collaboration


Dive into Jin Wang's collaboration.

Top Co-Authors

Chunlei Liu

Valdosta State University

Jun Shao

University of Wisconsin-Madison
