Joshua C. C. Chan
Australian National University
Publications
Featured research published by Joshua C. C. Chan.
International Journal of Mathematical Modelling and Numerical Optimisation | 2009
Joshua C. C. Chan; Ivan Jeliazkov
We consider the problem of implementing simple and efficient Markov chain Monte Carlo (MCMC) estimation algorithms for state space models. A conceptually transparent derivation of the posterior distribution of the states is discussed, which also leads to an efficient simulation algorithm that is modular, scalable and widely applicable. We also discuss a simple approach for evaluating the integrated likelihood, defined as the density of the data given the parameters but marginal of the state vector. We show that this high-dimensional integral can be easily evaluated with minimal computational and conceptual difficulty. Two empirical applications in macroeconomics demonstrate that the methods are versatile and computationally undemanding. In one application, involving a time-varying parameter model, we show that the methods allow for efficient handling of large state vectors. In our second application, involving a dynamic factor model, we introduce a new blocking strategy which results in improved MCMC mixing at little cost. The results demonstrate that the framework is simple, flexible and efficient.
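To make the idea concrete, here is a minimal Python sketch, not the authors' code, of a precision-based state sampler for a local level model: the posterior of the states given the parameters is Gaussian with a banded (here tridiagonal) precision matrix, so a joint draw of all T states costs O(T) operations via a banded Cholesky factorization. The local level specification and all parameter names are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of a precision-based state sampler
# for the local level model y_t = alpha_t + eps_t, alpha_t = alpha_{t-1} + eta_t,
# with eps_t ~ N(0, sig2), eta_t ~ N(0, omega2), and alpha_0 = 0.
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded, solve_banded

def sample_states(y, sig2, omega2, rng):
    """Draw all T states jointly from their Gaussian posterior N(alpha_hat, K^{-1})."""
    T = len(y)
    # Upper banded storage of the tridiagonal precision K = H'H/omega2 + I/sig2,
    # where H is the first-difference matrix: row 0 = superdiagonal, row 1 = diagonal.
    ab = np.zeros((2, T))
    ab[1, :] = 2.0 / omega2 + 1.0 / sig2
    ab[1, -1] = 1.0 / omega2 + 1.0 / sig2   # the last state enters only one difference
    ab[0, 1:] = -1.0 / omega2
    U = cholesky_banded(ab)                 # K = U'U with U upper bidiagonal
    alpha_hat = cho_solve_banded((U, False), y / sig2)  # solve K alpha_hat = y/sig2
    # alpha_hat + U^{-1} z has covariance U^{-1} U^{-T} = K^{-1}, as required.
    return alpha_hat + solve_banded((0, 1), U, rng.standard_normal(T))

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200)) + 0.5 * rng.standard_normal(200)
alpha_draw = sample_states(y, sig2=0.25, omega2=1.0, rng=rng)
```

The same pattern extends to richer state equations; only the bandwidth of the precision matrix changes, which is what makes the approach modular and scalable.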
European Journal of Operational Research | 2010
Joshua C. C. Chan; Dirk P. Kroese
We consider the problem of accurately measuring the credit risk of a portfolio consisting of loans, bonds and other financial assets. One particular performance measure of interest is the probability of large portfolio losses over a fixed time horizon. We revisit the so-called t-copula that generalizes the popular normal copula to allow for extremal dependence among defaults. By utilizing the asymptotic description of how the rare event occurs, we derive two simple simulation algorithms based on conditional Monte Carlo to estimate the probability that the portfolio incurs large losses under the t-copula. We further show that the less efficient estimator exhibits bounded relative error. An extensive simulation study demonstrates that both estimators outperform existing algorithms. We then discuss a generalization of the t-copula model that allows the multivariate defaults to have an asymmetric distribution. Lastly, we show how the estimators proposed for the t-copula can be modified to estimate the portfolio risk under the skew t-copula model.
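For orientation, the following Python sketch sets up the quantity studied, the probability of a large portfolio loss under a one-factor t-copula, and estimates it by crude Monte Carlo. The parameter names and values (nu, rho, p_default) are illustrative assumptions; the paper's conditional Monte Carlo estimators replace the naive indicator below with a conditional probability, which is what yields the variance reduction.

```python
# Not the paper's algorithm: a crude Monte Carlo baseline for the quantity it
# studies, P(portfolio loss > c) under a one-factor t-copula. Parameter names
# and values are illustrative assumptions.
import numpy as np
from scipy.stats import t as student_t

def crude_mc_loss_prob(exposures, p_default, nu, rho, c, n_sims, rng):
    n = len(exposures)
    x = student_t.ppf(1.0 - p_default, df=nu)  # obligor i defaults if X_i > x
    hits = 0
    for _ in range(n_sims):
        w = rng.chisquare(nu) / nu             # common chi-square shock of the t-copula
        z = rng.standard_normal()              # common normal factor
        eps = rng.standard_normal(n)           # idiosyncratic shocks
        latent = (rho * z + np.sqrt(1.0 - rho**2) * eps) / np.sqrt(w)
        loss = exposures @ (latent > x).astype(float)
        hits += loss > c
    return hits / n_sims

rng = np.random.default_rng(1)
p_hat = crude_mc_loss_prob(np.ones(100), p_default=0.01, nu=4, rho=0.3,
                           c=10.0, n_sims=20_000, rng=rng)
```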
Journal of Business & Economic Statistics | 2012
Joshua C. C. Chan; Gary Koop; Roberto Leon-Gonzalez; Rodney W. Strachan
Time-varying parameter (TVP) models have enjoyed increasing popularity in empirical macroeconomics. However, TVP models are parameter-rich and risk overfitting unless the dimension of the model is small. Motivated by this concern, this article proposes several time-varying dimension (TVD) models in which the dimension of the model can change over time, allowing the model to automatically choose a more parsimonious TVP representation or to switch between different parsimonious representations. Our TVD models all fall in the category of dynamic mixture models. We discuss the properties of these models and present methods for Bayesian inference. An application involving U.S. inflation forecasting illustrates and compares the different TVD models. We find that our TVD approaches exhibit better forecasting performance than many standard benchmarks and shrink toward parsimonious specifications. This article has online supplementary materials.
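A hedged simulation sketch of the central idea, under an assumed illustrative specification rather than the authors' exact one: each coefficient follows a random walk but enters the model only when a persistent latent indicator is on, so the effective dimension of the regression changes over time.

```python
# Illustrative data-generating process for a time-varying dimension regression:
# beta_{t,k} follows a random walk, but contributes to y_t only when the
# persistent latent indicator s_{t,k} is on. Not the authors' exact model.
import numpy as np

rng = np.random.default_rng(2)
T, K = 300, 3
beta = np.zeros((T, K))
s = np.zeros((T, K), dtype=bool)
s[0] = True
for t in range(1, T):
    stay = rng.random(K) < 0.99                 # indicators are highly persistent
    s[t] = np.where(stay, s[t - 1], ~s[t - 1])  # occasionally switch in or out
    beta[t] = beta[t - 1] + 0.05 * rng.standard_normal(K)
X = rng.standard_normal((T, K))
y = np.sum(X * beta * s, axis=1) + 0.3 * rng.standard_normal(T)
dim_t = s.sum(axis=1)                           # effective model dimension at each t
```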
Statistics and Computing | 2012
Joshua C. C. Chan; Dirk P. Kroese
The cross-entropy (CE) method is an adaptive importance sampling procedure that has been successfully applied to a diverse range of complicated simulation problems. However, recent research has shown that in some high-dimensional settings, the likelihood ratio degeneracy problem becomes severe and the importance sampling estimator obtained from the CE algorithm becomes unreliable. We consider a variation of the CE method whose performance does not deteriorate as the dimension of the problem increases. We then illustrate the algorithm via a high-dimensional estimation problem in risk management.
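For reference, the following Python sketch implements the standard multilevel CE algorithm on a toy rare-event problem, estimating $\mathbb{P}(X_1 + \cdots + X_n > \gamma)$ for independent exponentials by adaptively tilting the proposal means. This is the textbook version whose likelihood ratios degenerate in high dimensions, not the paper's improved variant.

```python
# A textbook sketch of the multilevel cross-entropy method for the toy rare
# event P(X_1 + ... + X_n > gamma) with independent X_i ~ Exp(1). This is the
# standard CE algorithm, not the paper's improved high-dimensional variant.
import numpy as np

def ce_rare_event(n, gamma, rho=0.1, n_sims=10_000, rng=None):
    rng = rng or np.random.default_rng()
    v = np.ones(n)                              # proposal means, start at nominal
    for _ in range(30):
        X = rng.exponential(v, size=(n_sims, n))
        S = X.sum(axis=1)
        level = min(np.quantile(S, 1.0 - rho), gamma)  # adaptive intermediate level
        # Log likelihood ratio of the nominal Exp(1) density to the proposal.
        logW = -S + (X / v).sum(axis=1) + np.log(v).sum()
        W = np.exp(logW) * (S >= level)
        v = (W[:, None] * X).sum(axis=0) / W.sum()     # CE update: weighted means
        if level >= gamma:
            break
    X = rng.exponential(v, size=(n_sims, n))           # final importance sample
    S = X.sum(axis=1)
    logW = -S + (X / v).sum(axis=1) + np.log(v).sum()
    return np.mean(np.exp(logW) * (S > gamma))

print(ce_rare_event(n=5, gamma=25.0, rng=np.random.default_rng(3)))
```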
Journal of Business & Economic Statistics | 2013
Joshua C. C. Chan; Gary Koop; Simon M. Potter
This article introduces a new model of trend inflation. In contrast to many earlier approaches, which allow trend inflation to evolve according to a random walk, ours is a bounded model that ensures trend inflation is constrained to lie in an interval. The bounds of this interval can either be fixed or estimated from the data. Our model also allows for a time-varying degree of persistence in the transitory component of inflation. In an empirical exercise with CPI inflation, we find the model to work well, yielding more sensible measures of trend inflation and forecasting better than popular alternatives such as the unobserved components stochastic volatility model. This article has supplementary materials online.
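The bounding mechanism can be sketched as follows: the trend follows a random walk whose innovations are truncated normals chosen so the trend never leaves $[a, b]$. The bounds and variance below are illustrative values, not estimates from the article.

```python
# A sketch of the bounded trend component: a random walk whose innovations are
# truncated so that the trend stays inside [a, b]. All values are illustrative.
import numpy as np
from scipy.stats import truncnorm

def simulate_bounded_trend(T, a, b, sigma, tau0, rng):
    tau = np.empty(T)
    tau[0] = tau0
    for t in range(1, T):
        lo = (a - tau[t - 1]) / sigma   # truncation limits in standard units
        hi = (b - tau[t - 1]) / sigma
        tau[t] = tau[t - 1] + sigma * truncnorm.rvs(lo, hi, random_state=rng)
    return tau

rng = np.random.default_rng(4)
trend = simulate_bounded_trend(T=200, a=0.0, b=5.0, sigma=0.2, tau0=2.0, rng=rng)
```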
Econometric Reviews | 2015
Joshua C. C. Chan; Eric Eisenstat
We consider an adaptive importance sampling approach to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. This approach is motivated by the difficulty of obtaining an accurate estimate through existing algorithms that use Markov chain Monte Carlo (MCMC) draws, where the draws are typically costly to obtain and highly correlated in high-dimensional settings. In contrast, we use the cross-entropy (CE) method, a versatile adaptive Monte Carlo algorithm originally developed for rare-event simulation. The main advantage of the importance sampling approach is that random samples can be obtained from a convenient density at little additional cost. Because we generate independent draws instead of correlated MCMC draws, the increase in simulation effort is much smaller should one wish to reduce the numerical standard error of the estimator. Moreover, the importance density derived via the CE method is grounded in information theory and is therefore optimal in a well-defined sense. We demonstrate the utility of the proposed approach with two empirical applications involving women's labor market participation and U.S. macroeconomic time series. In both applications, the proposed CE method compares favorably to existing estimators.
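A minimal sketch of the importance sampling estimator, with the CE step reduced to its essence for a Gaussian proposal family: fitting the proposal to preliminary posterior draws by moment matching, which for this family coincides with the CE/maximum-likelihood update. Function and argument names are illustrative.

```python
# A minimal sketch of the importance sampling estimator of the log marginal
# likelihood, with a Gaussian proposal fitted to preliminary posterior draws.
import numpy as np
from scipy.stats import multivariate_normal

def log_marginal_likelihood(log_post_kernel, posterior_draws, n_sims, rng):
    """log_post_kernel(theta) must return log p(y|theta) + log p(theta)."""
    mu = posterior_draws.mean(axis=0)
    Sigma = np.cov(posterior_draws, rowvar=False)
    g = multivariate_normal(mean=mu, cov=Sigma)    # fitted importance density
    theta = g.rvs(size=n_sims, random_state=rng)   # independent draws from g
    logw = np.array([log_post_kernel(th) for th in theta]) - g.logpdf(theta)
    c = logw.max()                                 # log-sum-exp for stability
    return c + np.log(np.mean(np.exp(logw - c)))
```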
Journal of Business & Economic Statistics | 2017
Joshua C. C. Chan
This article generalizes the popular stochastic volatility in mean model to allow for time-varying parameters in the conditional mean. The estimation of this extension is nontrivial since the volatility appears in both the conditional mean and the conditional variance, and its coefficient in the former is time-varying. We develop an efficient Markov chain Monte Carlo algorithm based on band and sparse matrix algorithms instead of the Kalman filter to estimate this more general variant. The methodology is illustrated with an application involving U.S., U.K., and German inflation. The estimation results show substantial time variation in the coefficient associated with the volatility, highlighting the empirical relevance of the proposed extension. Moreover, in a pseudo out-of-sample forecasting exercise, the proposed variant also forecasts better than various standard benchmarks.
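A simulation sketch of the model family, with a simplified conditional mean relative to the paper: the log-volatility follows an AR(1), and its exponential enters the mean with a random-walk (time-varying) coefficient. All parameter values are illustrative.

```python
# Simulating a simplified stochastic-volatility-in-mean model with a
# time-varying coefficient on the volatility. Values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
T = 500
h = np.zeros(T)     # log-volatility
a = np.zeros(T)     # time-varying coefficient on the volatility
a[0] = 0.5
y = np.zeros(T)
for t in range(T):
    if t > 0:
        h[t] = 0.95 * h[t - 1] + 0.2 * rng.standard_normal()
        a[t] = a[t - 1] + 0.05 * rng.standard_normal()
    # exp(h_t) appears in both the conditional mean and the conditional variance.
    y[t] = a[t] * np.exp(h[t]) + np.exp(h[t] / 2.0) * rng.standard_normal()
```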
Annals of Operations Research | 2011
Joshua C. C. Chan; Dirk P. Kroese
Estimation of rare-event probabilities in high-dimensional settings via importance sampling is a difficult problem due to the degeneracy of the likelihood ratio. In fact, it is generally recommended that Monte Carlo estimators involving likelihood ratios should not be used in such settings. In view of this, we develop efficient algorithms based on conditional Monte Carlo to estimate rare-event probabilities in situations where the degeneracy problem is expected to be severe. By utilizing an asymptotic description of how the rare event occurs, we derive algorithms that involve generating random variables only from the nominal distributions, thus avoiding any likelihood ratio. We consider two settings that occur frequently in applied probability: systems involving bottleneck elements and models involving heavy-tailed random variables. We first consider the problem of estimating $\mathbb{P}(X_1 + \cdots + X_n > \gamma)$, where $X_1, \ldots, X_n$ are independent but not identically distributed (i.n.i.d.) heavy-tailed random variables. Guided by insights obtained from this model, we then study a variety of more general settings. Specifically, we consider a complex bridge network and a generalization of the widely popular normal copula model used in managing portfolio credit risk, both of which involve hundreds of random variables. We show that the same conditioning idea, guided by an asymptotic description of the way in which the rare event happens, can be used to derive estimators that outperform existing ones.
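In the i.i.d. special case the conditioning idea reduces to the well-known Asmussen-Kroese estimator, sketched below for Pareto-distributed summands: conditioning on which summand is the largest replaces the rare-event indicator with an exact tail probability, so no likelihood ratio appears. The paper treats the harder non-identically distributed case and richer models with the same idea.

```python
# Asmussen-Kroese conditional Monte Carlo estimator of P(X_1 + ... + X_n > gamma)
# for i.i.d. Pareto summands with tail F_bar(x) = (1 + x)^(-alpha). This is the
# classical i.i.d. special case, not the paper's more general algorithms.
import numpy as np

def ak_estimator(n, gamma, alpha, n_sims, rng):
    F_bar = lambda x: (1.0 + np.maximum(x, 0.0)) ** (-alpha)   # Pareto tail function
    est = np.empty(n_sims)
    for k in range(n_sims):
        x = (1.0 - rng.random(n - 1)) ** (-1.0 / alpha) - 1.0  # n-1 Pareto draws
        # n * P(S_n > gamma and X_n is the maximum | X_1, ..., X_{n-1})
        est[k] = n * F_bar(np.maximum(x.max(), gamma - x.sum()))
    return est.mean(), est.std() / np.sqrt(n_sims)

mean, se = ak_estimator(n=10, gamma=100.0, alpha=2.0, n_sims=10_000,
                        rng=np.random.default_rng(6))
```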
Computational Statistics & Data Analysis | 2016
Joshua C. C. Chan; Angelia L. Grant
The deviance information criterion (DIC) has been widely used for Bayesian model comparison. However, recent studies have cautioned against the use of certain variants of the DIC for comparing latent variable models. For example, it has been argued that the conditional DIC, based on the conditional likelihood obtained by conditioning on the latent variables, is sensitive to transformations of the latent variables and distributions. Further, in a Monte Carlo study that compares various Poisson models, the conditional DIC almost always prefers an incorrect model. In contrast, the observed-data DIC, calculated using the observed-data likelihood obtained by integrating out the latent variables, seems to perform well. It is also the case that the conditional DIC based on the maximum a posteriori (MAP) estimate might not even exist, whereas the observed-data DIC does not suffer from this problem. In view of these considerations, fast algorithms for computing the observed-data DIC for a variety of high-dimensional latent variable models are developed. Three empirical applications demonstrate that the observed-data DICs have much smaller numerical standard errors than the conditional DICs. The corresponding Matlab code is available upon request.
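Given a routine that evaluates the integrated (observed-data) log-likelihood, which is the step the article's fast algorithms address, the observed-data DIC itself is a short computation. A generic sketch:

```python
# Generic observed-data DIC from posterior draws. For latent variable models
# the hard part is the integrated likelihood; here it is simply passed in.
import numpy as np

def observed_data_dic(log_lik, draws):
    """draws: (n_draws, dim) posterior draws; log_lik(theta) -> log p(y|theta)."""
    ll = np.array([log_lik(th) for th in draws])
    d_bar = -2.0 * ll.mean()                        # posterior mean deviance
    d_at_mean = -2.0 * log_lik(draws.mean(axis=0))  # deviance at the posterior mean
    p_d = d_bar - d_at_mean                         # effective number of parameters
    return d_bar + p_d                              # DIC = D_bar + p_D
```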
Journal of Computational and Graphical Statistics | 2009
Joshua C. C. Chan; Ivan Jeliazkov
This article is motivated by the difficulty of applying standard simulation techniques when identification constraints or theoretical considerations induce covariance restrictions in multivariate models. To deal with this difficulty, we build upon a decomposition of positive definite matrices and show that it leads to straightforward Markov chain Monte Carlo samplers for restricted covariance matrices. We introduce the approach by reviewing results for multivariate Gaussian models without restrictions, where standard conjugate priors on the elements of the decomposition induce the usual Wishart distribution on the precision matrix and vice versa. The unrestricted case provides guidance for constructing efficient Metropolis–Hastings and accept-reject Metropolis–Hastings samplers in more complex settings, and we describe in detail how simulation can be performed under several important constraints. The proposed approach is illustrated in a simulation study and two applications in economics. Supplemental materials for this article (appendixes, data, and computer code) are available online.
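The flavor of the decomposition can be illustrated as follows; whether this matches the article's exact parametrization is an assumption, but the idea is standard: any positive definite $\Sigma$ factors as $\Sigma = L D L'$ with $L$ unit lower triangular and $D$ diagonal positive, so priors and zero restrictions can be placed on the free elements of $L$ and $D$ rather than on $\Sigma$ directly.

```python
# Sketch of a unit-lower-triangular decomposition of a positive definite
# matrix: Sigma = L D L'. Whether this matches the article's exact
# parametrization is an assumption.
import numpy as np

def ldl_decompose(Sigma):
    C = np.linalg.cholesky(Sigma)   # Sigma = C C'
    d = np.diag(C) ** 2             # positive diagonal of D
    L = C / np.diag(C)              # rescale columns: unit lower triangular
    return L, d

def ldl_compose(L, d):
    return L @ np.diag(d) @ L.T

Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
L, d = ldl_decompose(Sigma)
assert np.allclose(ldl_compose(L, d), Sigma)
```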