Wenxin Jiang
Northwestern University
Publication
Featured research published by Wenxin Jiang.
Proceedings of the Royal Society of London B: Biological Sciences | 2007
Daniel B. Stouffer; Juan Camacho; Wenxin Jiang; Luís A. Nunes Amaral
Food webs aim to provide a thorough representation of the trophic interactions found in an ecosystem. The complexity of empirical food webs, however, is leading many ecologists to focus dynamic ecosystem studies on smaller microcosm or mesocosm studies based upon community modules, which comprise three to five species and the interactions likely to have ecological relevance. We provide here a structural counterpart to community modules. We investigate food-web ‘motifs’ which are n-species connected subgraphs found within the food web. Remarkably, we find that the over- and under-representation of three-species motifs in empirical food webs can be understood through comparison to a static food-web model, the niche model. Our result conclusively demonstrates that predation upon species with some ‘characteristic’ niche value is the prey selection mechanism consistent with the structural properties of empirical food webs.
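The motif census described above can be sketched in a few lines. The following is a minimal illustration (not the authors' code): it enumerates connected three-node subgraphs of a directed food web and keys each one by a canonical adjacency code, so isomorphic motifs (e.g., a food chain versus apparent competition) are counted together while distinct motifs stay separate.

```python
from itertools import combinations, permutations

def triad_census(edges):
    """Count connected three-node motifs in a directed graph.

    `edges` is a set of (predator, prey) pairs.  Each connected triple is
    keyed by an isomorphism-invariant code: the maximum, over all node
    orderings, of a 6-bit encoding of the off-diagonal adjacency matrix.
    """
    nodes = {u for e in edges for u in e}
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)

    counts = {}
    for triple in combinations(sorted(nodes), 3):
        linked_pairs = sum(
            1 for a, b in combinations(triple, 2)
            if b in adj[a] or a in adj[b]
        )
        if linked_pairs < 2:
            continue  # fewer than two linked pairs: not connected
        best = 0
        for perm in permutations(triple):
            code = 0
            for i, u in enumerate(perm):
                for j, v in enumerate(perm):
                    if i != j and v in adj[u]:
                        code |= 1 << (3 * i + j)
            best = max(best, code)  # canonical representative
        counts[best] = counts.get(best, 0) + 1
    return counts
```

Comparing observed counts of each key against counts from a null model such as the niche model is what reveals over- and under-represented motifs.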
Statistica Neerlandica | 2001
Ori Rosen; Wenxin Jiang; Gary King; Martin A. Tanner
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R×C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R×C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade off computational intensity for statistical efficiency.
Statistical Science | 2004
Wenxin Jiang; Bruce W. Turnbull
This paper presents an exposition and synthesis of the theory and some applications of the so-called “indirect” method of inference. These ideas have been exploited in the field of econometrics, but less so in other fields such as biostatistics and epidemiology. In the indirect method, statistical inference is based on an intermediate statistic, which typically follows an asymptotic normal distribution, but is not necessarily a consistent estimator of the parameter of interest. This intermediate statistic can be a naive estimator based on a convenient but misspecified model, a sample moment, or a solution to an estimating equation. We review a procedure of indirect inference based on generalized method of moments, which involves adjusting the naive estimator to be consistent and asymptotically normal. The objective function of this procedure is shown to be interpretable as an ‘indirect likelihood’ based on the intermediate statistic. Many properties of the ordinary likelihood function can be extended to this indirect likelihood. This method is often more convenient computationally than maximum likelihood estimation when handling such model complexities as random effects and measurement error, for example; and it can also serve as a basis for robust inference and model selection, with less stringent assumptions on the data generating mechanism. Many familiar estimation techniques can be viewed as examples of this approach. We describe applications to measurement error, omitted covariates, and recurrent events. A data set concerning prevention of mammary tumors in rats is analyzed using a Poisson regression model with overdispersion. A second data set from an epidemiological study is analyzed using a logistic regression model with mismeasured covariates. A third data set of exam scores is used to illustrate robust covariance selection in graphical models.
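The "adjust a naive estimator" idea has a familiar special case in linear regression with a mismeasured covariate. The sketch below is an illustration on simulated data, not one of the paper's analyses: the attenuated naive OLS slope plays the role of the intermediate statistic, and it is rescaled to a consistent estimator under the assumption that the measurement-error variance is known.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma_u = 20_000, 2.0, 1.0
x = rng.normal(0.0, 1.0, n)           # true covariate, variance 1
w = x + rng.normal(0.0, sigma_u, n)   # observed, error-contaminated covariate
y = beta * x + rng.normal(0.0, 0.5, n)

# Step 1: intermediate statistic -- the naive OLS slope of y on w.
# It is consistent for beta * sigma_x^2 / (sigma_x^2 + sigma_u^2), not beta.
beta_naive = np.cov(w, y, bias=True)[0, 1] / np.var(w)

# Step 2: adjust the naive estimator to be consistent, assuming the
# measurement-error variance sigma_u^2 is known.
sigma_x2 = np.var(w) - sigma_u**2
beta_adj = beta_naive * np.var(w) / sigma_x2
```

Here the adjustment map is known in closed form; in the general indirect method it is derived from the probability limit of the intermediate statistic under the working model.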
Neural Computation | 1999
Wenxin Jiang; Martin A. Tanner
We investigate a class of hierarchical mixtures-of-experts (HME) models in which generalized linear models with nonlinear mean functions of the form ψ(βᵀx) are mixed, where ψ(·) is the inverse link function. It is shown that mixtures of such mean functions can approximate a class of smooth functions of the form ψ(h(x)), where h(·) belongs to W^∞_{2;K} (a Sobolev class over [0, 1]^s), as the number of experts m in the network increases. An upper bound on the approximation rate is given as O(m^{-2/s}) in the L_p norm. This rate can be achieved within the family of HME structures with no more than s layers, where s is the dimension of the predictor x.
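A single-layer mixture-of-experts mean function of this kind takes only a few lines to write down. The sketch below is illustrative (logistic inverse link and softmax gating are assumed choices, not the paper's notation): it mixes m generalized-linear experts ψ(bⱼᵀx) with input-dependent proportions.

```python
import numpy as np

def moe_mean(x, gate_w, expert_b,
             inv_link=lambda t: 1.0 / (1.0 + np.exp(-t))):
    """Mean function of a one-layer mixture of GLM experts.

    x        : (s,)   predictor vector
    gate_w   : (m, s) gating weights -> softmax mixing proportions
    expert_b : (m, s) expert coefficients; expert j contributes psi(b_j . x)
    Returns sum_j g_j(x) * psi(b_j . x).
    """
    z = gate_w @ x
    z -= z.max()                        # numerically stable softmax
    g = np.exp(z) / np.exp(z).sum()     # mixing proportions, sum to 1
    experts = inv_link(expert_b @ x)    # inverse link on each linear index
    return float(g @ experts)
```

The approximation result says that, as m grows, functions of this form can track any target ψ(h(x)) with h in the stated Sobolev class, at rate O(m^{-2/s}).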
Journal of The Royal Statistical Society Series B-statistical Methodology | 2002
Sally Wood; Robert Kohn; Thomas S. Shively; Wenxin Jiang
A Bayesian approach is presented for model selection in nonparametric regression with Gaussian errors and in binary nonparametric regression. A smoothness prior is assumed for each component of the model and the posterior probabilities of the candidate models are approximated using the Bayesian information criterion. We study the model selection method by simulation and show that it has excellent frequentist properties and gives improved estimates of the regression surface. All the computations are carried out efficiently using the Gibbs sampler.
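The BIC approximation to posterior model probabilities is straightforward to sketch. The example below is hypothetical (simulated data, Gaussian linear candidate models, uniform model prior) and is not the paper's Gibbs-sampler implementation; it shows how exp(-BIC/2) weights concentrate on the model containing the relevant predictor.

```python
import numpy as np

def bic(y, X):
    """BIC for Gaussian linear regression with design matrix X."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + p * np.log(n)

def model_posteriors(y, designs):
    """Approximate posterior model probabilities via exp(-BIC/2),
    assuming a uniform prior over the candidate models."""
    b = np.array([bic(y, X) for X in designs])
    w = np.exp(-(b - b.min()) / 2.0)    # subtract min for stability
    return w / w.sum()

# Two candidate models: y ~ x1 (the true predictor) vs. y ~ x2 (irrelevant).
rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)
ones = np.ones((n, 1))
designs = [np.hstack([ones, x1[:, None]]), np.hstack([ones, x2[:, None]])]
probs = model_posteriors(y, designs)
```

In the paper's nonparametric setting the candidate models differ in which smooth components enter, but the BIC-weighting step has the same shape.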
Journal of the American Statistical Association | 1999
Wenxin Jiang; Bruce W. Turnbull; Larry C. Clark
Abstract Statistical methodology is presented for the regression analysis of multiple events in the presence of random effects and measurement error. Omitted covariates are modeled as random effects. Our approach to parameter estimation and significance testing is to start with a naive model of semiparametric Poisson process regression, and then to adjust for random effects and any possible covariate measurement error. We illustrate the techniques with data from a randomized clinical trial for the prevention of recurrent skin tumors.
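The "fit a naive model, then adjust" strategy can be illustrated with a toy quasi-likelihood correction for overdispersion. This is not the paper's semiparametric estimator, just a minimal sketch: counts are simulated with a multiplicative gamma random effect, a naive intercept-only Poisson model is fit, and the naive standard error is inflated by the Pearson dispersion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu = 5000, 5.0

# Counts with a gamma frailty: E[y] = mu, Var(y) = mu + mu^2/2,
# so the data are overdispersed relative to a plain Poisson model.
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)   # mean 1, variance 1/2
y = rng.poisson(mu * frailty)

# Naive Poisson fit (intercept only): the fitted mean is just ybar.
mu_hat = y.mean()

# Pearson dispersion estimate; naive Poisson standard errors are
# inflated by sqrt(phi) as a first-order adjustment for the random effect.
phi = np.sum((y - mu_hat) ** 2 / mu_hat) / (n - 1)
se_naive = np.sqrt(mu_hat / n)          # Poisson SE of the mean
se_adjusted = se_naive * np.sqrt(phi)
```

With the frailty variance above, phi should land near 1 + mu/2 = 3.5, so the naive standard error understates the true uncertainty by a factor of roughly sqrt(3.5).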
Annals of Statistics | 2010
Yuan Liao; Wenxin Jiang
This paper presents a study of the large-sample behavior of the posterior distribution of a structural parameter which is partially identified by moment inequalities. The posterior density is derived based on the limited information likelihood. The posterior distribution converges to zero exponentially fast on any δ-contraction outside the identified region. Inside, it is bounded below by a positive constant if the identified region is assumed to have a nonempty interior. Our simulation evidence indicates that the Bayesian approach has advantages over frequentist methods, in the sense that, with a proper choice of the prior, the posterior provides more information about the true parameter inside the identified region. We also address the problem of moment and model selection. Our optimality criterion is the maximum posterior procedure and we show that, asymptotically, it selects the true moment/model combination with the most moment inequalities and the simplest model.
Annals of Statistics | 2011
Yuan Liao; Wenxin Jiang
This paper addresses the estimation of the nonparametric conditional moment restricted model that involves an infinite-dimensional parameter g0. We estimate it in a quasi-Bayesian way, based on the limited information likelihood, and investigate the impact of three types of priors on the posterior consistency: (i) a truncated prior (a prior supported on a bounded set), (ii) a thin-tail prior (a prior with a very thin tail outside a growing bounded set) and (iii) a normal prior with nonshrinking variance. In addition, g0 is allowed to be only partially identified in the frequentist sense, and the parameter space does not need to be compact. The posterior is regularized using a slowly growing sieve dimension, and it is shown that the posterior converges to any small neighborhood of the identified region. We then apply our results to the nonparametric instrumental regression model. Finally, the posterior consistency using a random sieve dimension parameter is studied.
Neural Computation | 2006
Yang Ge; Wenxin Jiang
This is a theoretical study of the consistency properties of Bayesian inference using mixtures of logistic regression models. When standard logistic regression models are combined in a mixtures-of-experts setup, a flexible model is formed to model the relationship between a binary (yes-no) response y and a vector of predictors x. Bayesian inference conditional on the observed data can then be used for regression and classification. This letter gives conditions on choosing the number of experts (i.e., number of mixing components) k or choosing a prior distribution for k, so that Bayesian inference is consistent, in the sense of often approximating the underlying true relationship between y and x. The resulting classification rule is also consistent, in the sense of having near-optimal performance in classification. We show these desirable consistency properties with a nonstochastic k growing slowly with the sample size n of the observed data, or with a random k that takes large values with nonzero but small probabilities.
Statistics in Medicine | 1997
Bruce W. Turnbull; Wenxin Jiang; Larry C. Clark
Statistical methodology is presented for the statistical analysis of non-linear measurement error models. Our approach is to provide adjustments for the usual maximum likelihood estimators, their standard errors and associated significance tests in order to account for the presence of measurement error in some of the covariates. We illustrate the technique with a mixed effects Poisson regression model for recurrent event data applied to a randomized clinical trial for the prevention of skin tumours.