John S. J. Hsu
University of California, Santa Barbara
Publications
Featured research published by John S. J. Hsu.
Journal of the American Statistical Association | 1989
Tom Leonard; John S. J. Hsu; Kam-Wah Tsui
A method is proposed for approximating the marginal posterior density of a continuous function of several unknown parameters, thus permitting inferences about any parameter of interest for nonlinear models when the sample size is finite. Possibly tedious numerical integrations are replaced by conditional maximizations, which are shown to be quite accurate in a number of special cases. There are similarities with the profile likelihood ideas originated by Kalbfleisch and Sprott (1970), and the method is contrasted with a Laplacian approximation recommended by Kass, Tierney, and Kadane (1988, in press), referred to here as the “KTK procedure.” The methods are used to approximate the marginal posterior densities of the log-linear interaction effects and an overall measure of association in a two-way contingency table. Snee's (1974) hair/eye color data are reanalyzed, and adjustments are proposed to Goodman's (1964) analysis for the full-rank interaction model. Another application concerns marginaliz...
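As a rough illustration of the conditional-maximization idea above (a minimal sketch for a simple normal model, not the paper's contingency-table application; all names and values here are made up), the marginal posterior of the mean is approximated by maximizing the joint posterior over the nuisance scale parameter at each grid value, in place of integrating it out:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy data: the model and values are illustrative, not from the paper
rng = np.random.default_rng(0)
y = rng.normal(1.0, 2.0, size=30)

def log_post(mu, log_sigma):
    """Joint log-posterior of (mu, log sigma) under flat priors."""
    sigma = np.exp(log_sigma)
    return -len(y) * log_sigma - np.sum((y - mu) ** 2) / (2.0 * sigma**2)

# Conditional maximization: for each mu on a grid, maximize the joint
# log-posterior over the nuisance parameter log_sigma instead of integrating
mu_grid = np.linspace(y.mean() - 2.0, y.mean() + 2.0, 201)
profile = np.array([
    -minimize_scalar(lambda ls, m=m: -log_post(m, ls),
                     bounds=(-3.0, 3.0), method="bounded").fun
    for m in mu_grid
])

# Normalize on the grid to get the approximate marginal posterior of mu
dx = mu_grid[1] - mu_grid[0]
density = np.exp(profile - profile.max())
density /= density.sum() * dx
```

For this conjugate-like toy case the mode of the approximate marginal lands at the sample mean, as the exact marginal would; the paper's contribution is showing that the same replacement of integration by maximization stays accurate in much less tractable models.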
Psychometrika | 1991
John S. J. Hsu; Tom Leonard; Kam-Wah Tsui
Finite sample inference procedures are considered for analyzing the observed scores on a multiple choice test with several items, where, for example, the items are dissimilar, or the item responses are correlated. A discrete p-parameter exponential family model leads to a generalized linear model framework and, in a special case, a convenient regression of true score upon observed score. Techniques based upon the likelihood function, Akaike's information criterion (AIC), an approximate Bayesian marginalization procedure based on conditional maximization (BCM), and simulations for exact posterior densities (importance sampling) are used to facilitate finite sample investigations of the average true score, individual true scores, and various probabilities of interest. A simulation study suggests that, when the examinees come from two different populations, the exponential family can adequately generalize Duncan's beta-binomial model. Extensions to regression models, the classical test theory model, and empirical Bayes estimation problems are mentioned. The Duncan, Keats, and Matsumura data sets are used to illustrate potential advantages and flexibility of the exponential family model, and the BCM technique.
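A small sketch of the "regression of true score upon observed score" in the beta-binomial special case that the exponential family model generalizes (Duncan's model); the prior parameters here are invented for illustration. Under a Beta(a, b) prior on the true score p and a Binomial(n, p) observed score y, the posterior mean E[p | y] = (a + y)/(a + b + n) is linear in y:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 25                  # number of test items
a, b = 6.0, 4.0         # hypothetical Beta prior on the true score p

def true_score_regression(y):
    """E[p | y] under the beta-binomial model: linear in the observed score."""
    return (a + y) / (a + b + n)

# Monte Carlo check: simulate examinees and condition on the observed score
p = rng.beta(a, b, size=200_000)
y = rng.binomial(n, p)
for score in (10, 15, 20):
    assert abs(p[y == score].mean() - true_score_regression(score)) < 0.02
```

The linearity is what makes this special case "convenient"; the general p-parameter exponential family of the paper gives up linearity in exchange for handling dissimilar or correlated items.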
Handbook on Information Technology in Finance. Ed.: D. Seese | 2008
Biliana S. Bagasheva; Svetlozar Rachev; John S. J. Hsu; Frank J. Fabozzi
There are several tasks in the investment management process. These include setting the investment objectives, establishing an investment policy, selecting a portfolio strategy, allocating assets, and measuring and evaluating performance. Bayesian methods have been either used or proposed as a tool for improving the implementation of several of these tasks. There are three principal reasons for using Bayesian methods in the investment management process. First, they allow the investor to account for the uncertainty about the parameters of the return-generating process and the distributions of returns for asset classes and to incorporate prior beliefs in the decision-making process. Second, they address a deficiency of the standard statistical measures in conveying the economic significance of the information contained in the observed sample of data. Finally, they provide an analytically and computationally manageable framework in models where a large number of variables and parameters makes classical formulations a formidable challenge.
Journal of Probability and Statistics | 2014
Marick S. Sinay; John S. J. Hsu
We explore Bayesian inference for a multivariate linear regression model using a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves the conjugate prior: a multivariate normal distribution for the regression coefficients and an inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such a structure allows for a richer class of prior distributions for the covariance, with respect to the strength of beliefs in prior location hyperparameters, as well as the added ability to model potential correlation among the elements of the covariance structure. The posterior moments of all relevant parameters of interest are calculated from numerical results via a Markov chain Monte Carlo procedure. A Metropolis-Hastings-within-Gibbs algorithm is invoked, with a proposal density constructed to closely match the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.
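A minimal sketch of the matrix-logarithm parameterization described above: placing independent normal priors on the unique elements of A = log(Sigma) guarantees that every draw maps back to a symmetric positive-definite covariance via the matrix exponential. The dimension and hyperparameters below are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
p = 3                            # dimension of the response; illustrative
idx = np.triu_indices(p)         # unique elements of a symmetric matrix

def sample_covariance(mean_vec, sd):
    """Draw Sigma by sampling the unique elements of A = log(Sigma) from
    independent normals, then mapping back with the matrix exponential."""
    a = rng.normal(mean_vec, sd)
    A = np.zeros((p, p))
    A[idx] = a
    A = A + A.T - np.diag(np.diag(A))   # symmetrize
    return expm(A)                      # symmetric A -> positive-definite Sigma

Sigma = sample_covariance(np.zeros(len(idx[0])), 0.5)
```

Because the log elements are unconstrained reals, the prior can express beliefs about location and cross-element correlation freely, which is exactly the flexibility the inverse Wishart lacks; the cost is that the posterior is no longer conjugate, hence the MCMC machinery.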
Archive | 1996
Tom Leonard; John S. J. Hsu
It is important in many econometric and biological applications to evaluate the effective dose (ED) points in the tails of quantal response curves, and to assess the curves themselves according to a sensible criterion. Following Geisser (1971), it seems most sensible to evaluate the posterior mean value function of the response curve, since this also gives predictive probabilities of positive responses across the design region. While plausible, basing the calculations upon standard multivariate normal likelihood approximations can be insufficient for small samples. Exact determinations, for example via importance sampling (see Zellner and Rossi, 1984), are needed. Extending Leonard (1982a), it is now also possible to compute the exact posterior distribution of the ED points. These are proposed as “design measures”, since they can be used to sequentially generate further design points. Related procedures (see Leonard, 1982b) yield excellent frequency properties. For example, a total of 40 observations (10 fixed in advance and 30 chosen sequentially) can assess a response curve to 6% accuracy for all design points between the ED60 and ED90 points, with an average of over 90% of the sequentially generated design points falling between the ED60 and ED90 points.
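As a sketch of what a posterior distribution of an ED point looks like in practice: given posterior draws of a logistic response curve's intercept and slope (here simulated as stand-ins for importance-sampling output; the parameter values are hypothetical), each draw of the curve maps directly to a draw of the ED point, so its full posterior comes for free:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical posterior draws of a logistic curve's intercept and slope,
# standing in for importance-sampling output
alpha = rng.normal(-2.0, 0.10, size=5000)
beta = rng.normal(0.5, 0.02, size=5000)

def ed_point(q, a, b):
    """Dose at which P(response) = q for p(x) = 1 / (1 + exp(-(a + b*x)))."""
    return (np.log(q / (1.0 - q)) - a) / b

# One ED90 draw per posterior draw of (alpha, beta)
ed90 = ed_point(0.90, alpha, beta)
print(np.percentile(ed90, [2.5, 50.0, 97.5]))
```

Summaries of such draws (here, a 95% interval and median for the ED90) are what can then drive the sequential choice of further design points.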
Journal of Statistical Computation and Simulation | 2002
John S. J. Hsu
Consider the logistic linear model, with some explanatory variables overlooked. Those explanatory variables may be quantitative or qualitative. In either case, the resulting true response variable is not a binomial or a beta-binomial but a sum of binomials. Hence, standard computer packages for logistic regression can be inappropriate even if an overdispersion factor is incorporated. Therefore, a discrete exponential family assumption is considered to broaden the class of sampling models. Likelihood and Bayesian analyses are discussed. Bayesian computation techniques such as Laplacian approximations and Markov chain simulations are used to compute posterior densities and moments. Approximate conditional distributions are derived and are shown to be accurate. The Markov chain simulations are performed effectively to calculate posterior moments by using the approximate conditional distributions. The methodology is applied to Keeler's hardness of winter wheat data for checking binomial assumptions and to Matsumura's accounting exams data for detailed likelihood and Bayesian analyses.
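A quick simulation illustrating the failure mode described above: when a binary explanatory variable is overlooked, the observed counts mix two binomials with different success probabilities, and their variance far exceeds what any single binomial fit would assume. The group probabilities below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 100_000
# An overlooked binary covariate splits subjects into two groups with
# different success probabilities
p1, p2 = 0.2, 0.7
group = rng.random(m) < 0.5
y = rng.binomial(n, np.where(group, p1, p2))   # mixture of two binomials

p_bar = y.mean() / n
binom_var = n * p_bar * (1.0 - p_bar)   # variance a single binomial would imply
print(y.var(), binom_var)               # empirical variance is far larger
```

A scalar overdispersion factor rescales the binomial variance but cannot reproduce the bimodal shape of such counts, which is why the paper broadens the sampling model to a discrete exponential family rather than patching the binomial.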
Archive | 1999
Thomas Leonard; John S. J. Hsu
Annals of Statistics | 1992
Tom Leonard; John S. J. Hsu
Journal of the American Statistical Association | 1996
Li Sun; John S. J. Hsu; Irwin Guttman; Tom Leonard
Canadian Journal of Statistics / Revue Canadienne de Statistique | 1995
John S. J. Hsu