Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jianxin Pan is active.

Publication


Featured research published by Jianxin Pan.


Journal of the American Statistical Association | 2010

Semiparametric Mean–Covariance Regression Analysis for Longitudinal Data

Chenlei Leng; Weiping Zhang; Jianxin Pan

Efficient estimation of the regression coefficients in longitudinal data analysis requires a correct specification of the covariance structure. Existing approaches usually focus on modeling the mean with specification of certain covariance structures, which may lead to inefficient or biased estimators of parameters in the mean if misspecification occurs. In this article, we propose a data-driven approach based on semiparametric regression models for the mean and the covariance simultaneously, motivated by the modified Cholesky decomposition. A regression spline-based approach using generalized estimating equations is developed to estimate the parameters in the mean and the covariance. The resulting estimators for the regression coefficients in both the mean and the covariance are shown to be consistent and asymptotically normally distributed. In addition, the nonparametric functions in these two structures are estimated at their optimal rate of convergence. Simulation studies and a real data analysis show that the proposed approach yields highly efficient estimators for the parameters in the mean, and provides parsimonious estimation for the covariance structure. Supplemental materials for the article are available online.
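The modified Cholesky decomposition that motivates the joint mean–covariance model can be sketched in generic notation (illustrative symbols, not the paper's exact ones). For a subject with covariance matrix Σ,

$$ T\,\Sigma\,T^\top = D, \qquad y_{ij} - \mu_{ij} = \sum_{k=1}^{j-1} \phi_{jk}\,(y_{ik} - \mu_{ik}) + \varepsilon_{ij}, \qquad \operatorname{var}(\varepsilon_{ij}) = \sigma_{ij}^{2}, $$

where $T$ is unit lower triangular with $-\phi_{jk}$ in its $(j,k)$th entry and $D = \operatorname{diag}(\sigma_{i1}^{2}, \sigma_{i2}^{2}, \dots)$. Because the generalized autoregressive parameters $\phi_{jk}$ and the log innovation variances $\log\sigma_{ij}^{2}$ are unconstrained, they can be modelled by regression splines alongside the mean, which is the device the paper exploits.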


Lifetime Data Analysis | 2011

A general joint model for longitudinal measurements and competing risks survival data with heterogeneous random effects

Xin Huang; Gang Li; Robert Elashoff; Jianxin Pan

This article studies a general joint model for longitudinal measurements and competing risks survival data. The model consists of a linear mixed effects sub-model for the longitudinal outcome, a proportional cause-specific hazards frailty sub-model for the competing risks survival data, and a regression sub-model for the variance–covariance matrix of the multivariate latent random effects based on a modified Cholesky decomposition. The model provides a useful approach to adjust for non-ignorable missing data due to dropout for the longitudinal outcome, enables analysis of the survival outcome with informative censoring and intermittently measured time-dependent covariates, as well as joint analysis of the longitudinal and survival outcomes. Unlike previously studied joint models, our model allows for heterogeneous random covariance matrices. It also offers a framework to assess the homogeneous covariance assumption of existing joint models. A Bayesian MCMC procedure is developed for parameter estimation and inference. Its performances and frequentist properties are investigated using simulations. A real data example is used to illustrate the usefulness of the approach.
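A schematic of the three sub-models, in generic notation rather than the paper's exact parameterization: a linear mixed model for the longitudinal outcome and proportional cause-specific hazards for the K competing risks, linked by shared random effects $b_i$,

$$ y_i(t) = x_i(t)^\top\beta + z_i(t)^\top b_i + \varepsilon_i(t), \qquad \lambda_{ik}(t \mid b_i) = \lambda_{0k}(t)\exp\{w_i^\top\gamma_k + \nu_k^\top b_i\}, \quad k = 1, \dots, K, $$

with $b_i \sim N(0, \Sigma_i)$ and a third, regression sub-model placed on the subject-specific $\Sigma_i$ through its modified Cholesky factors; letting $\Sigma_i$ depend on covariates is what makes the random-effects covariance heterogeneous across subjects.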


Statistical Modelling | 2006

Regression models for covariance structures in longitudinal studies

Jianxin Pan; Gilbert MacKenzie

A convenient reparametrization of the marginal covariance matrix arising in longitudinal studies is discussed. The new parameters have transparent statistical interpretations, are unconstrained and may be modelled parsimoniously in terms of polynomials of time. We exploit this framework to model the dependence of the covariance structure on baseline covariates, time and their interaction. The rationale is based on the assumption that a homogeneous covariance structure with respect to the covariate space is a testable model choice. Accordingly, we provide methods for testing this assumption by incorporating covariates along with time into the model for the covariance structure. We also present new computational algorithms which can handle unbalanced longitudinal data, thereby extending existing methods. The new model is used to analyse Kenward’s (1987) cattle data, and the findings are compared with published analyses of the same data set.
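In the modified Cholesky reparameterization, the unconstrained covariance parameters can be regressed on polynomials of time and on baseline covariates; a generic sketch (symbols illustrative):

$$ \log \sigma_{ij}^{2} = z_{ij}^\top \lambda, \qquad \phi_{ijk} = w_{ijk}^\top \gamma, $$

where the design vectors contain polynomial terms in time (or time lag), the baseline covariates and their interactions. Testing whether the covariate coefficients are zero is then a test of a homogeneous covariance structure across the covariate space.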


Computational Statistics & Data Analysis | 2007

Quasi-Monte Carlo estimation in generalized linear mixed models

Jianxin Pan; R. Thompson

Generalized linear mixed models (GLMMs) are useful for modelling longitudinal and clustered data, but parameter estimation is very challenging because the likelihood may involve high-dimensional integrals that are analytically intractable. Gauss-Hermite quadrature (GHQ) approximation can be applied but is only suitable for low-dimensional random effects. Based on the Quasi-Monte Carlo (QMC) approximation, a heuristic approach is proposed to calculate the maximum likelihood estimates of parameters in the GLMM. QMC points scattered uniformly over the high-dimensional integration domain are generated to replace the GHQ nodes. Compared with the GHQ approximation, the proposed method has many advantages, such as affordable computation, good approximation accuracy and fast convergence. Comparisons with penalized quasi-likelihood estimation and Gibbs sampling are made using a real dataset and a simulation study. The real dataset is the salamander mating dataset, whose modelling involves six 20-dimensional intractable integrals in the likelihood.
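The core computation, replacing quadrature nodes with a low-discrepancy point set when integrating out the random effects, can be sketched for a simple random-intercept logistic model. This is a minimal illustration under assumed names and simulated data, not the authors' implementation.

```python
import numpy as np
from scipy.stats import qmc, norm

def qmc_marginal_loglik(y, X, beta, sigma_b, n_points=1024, seed=0):
    """Approximate the marginal log-likelihood of one cluster in a
    random-intercept logistic GLMM by Quasi-Monte Carlo integration."""
    # Scrambled Sobol points on (0, 1), mapped through the normal quantile
    # function to draws of the random intercept (these replace GHQ nodes).
    sobol = qmc.Sobol(d=1, scramble=True, seed=seed)
    b = norm.ppf(sobol.random(n_points)) * sigma_b        # (n_points, 1)

    eta = (X @ beta)[:, None] + b.T                       # (n_obs, n_points)
    p = 1.0 / (1.0 + np.exp(-eta))
    # Conditional likelihood of the whole cluster for each sampled intercept;
    # the QMC average over the points approximates the intractable integral.
    cond_lik = np.prod(p ** y[:, None] * (1.0 - p) ** (1 - y[:, None]), axis=0)
    return np.log(cond_lik.mean())

# Hypothetical usage: one cluster with 5 binary responses and 2 covariates.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 2))
y = rng.integers(0, 2, size=5)
print(qmc_marginal_loglik(y, X, beta=np.array([0.5, -0.2]), sigma_b=1.0))
```

In practice the same averaging is done cluster by cluster and the total log-likelihood is maximized over the fixed effects and variance components; the point of QMC is that accuracy degrades far more slowly with dimension than GHQ.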


Journal of Hypertension | 2012

The predictive ability of blood pressure in elderly trial patients

Matthew Carr; Yanchun Bao; Jianxin Pan; Kennedy Cruickshank; Roseanne McNamee

Objectives: To assess the impact of the blood pressure (BP) profile on cardiovascular risk in the Medical Research Council (UK) elderly trial, and to investigate whether the effects of hypertensive drugs in reducing event rates are solely a product of systolic pressure reduction. Methods: Using longitudinal BP data from 4396 hypertensive patients, the general trend over time was estimated using a first-stage multilevel model. We then investigated how BP acted alongside other BP-related covariates in a second-stage ‘time-to-event’ statistical model, assessing risk for stroke events and coronary heart disease (CHD). Differences in outcome prediction between diuretic, β-blocker and placebo treatment arms were investigated. Results: The β-blocker arm experienced comparatively poor control of current SBP, episodic peaks and variability in BP levels. After adjusting for the mean level, variability in SBP over time was significant: the risk ratio was 1.15 [95% confidence interval (CI): 1.01–1.31] across all patients for stroke events. The risk ratio for current SBP was 1.36 (95% CI: 1.16–1.58). Current DBP and variability in DBP also predicted stroke independently: risk ratios were 1.43 and 1.18, respectively. The risk factors exhibited weaker associations with CHD risk; only the highest measured value and variability in SBP showed a statistically significant association: risk ratios were 1.26 and 1.16, respectively. Conclusion: Individual risk characterization could be augmented with additional prognostic information, besides current SBP, including current diastolic pressure, temporal variability over and above general trends, and historical measurements.
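The two-stage idea (summarize each patient's longitudinal blood pressure profile, then carry the summaries into a time-to-event model) can be roughly illustrated as below. The sketch uses raw subject-level means and standard deviations with a Cox model purely for illustration; the paper's first stage is a multilevel model, and all file and column names here are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Long-format repeated SBP measurements and one-row-per-patient survival data
# (file and column names are hypothetical).
bp = pd.read_csv("bp_long.csv")       # columns: id, visit, sbp
surv = pd.read_csv("survival.csv")    # columns: id, time, stroke

# Stage 1 (simplified): summarize each patient's SBP level and variability;
# the paper estimates these quantities from a multilevel model instead.
summ = bp.groupby("id")["sbp"].agg(sbp_mean="mean", sbp_sd="std").reset_index()

# Stage 2: proportional-hazards model for stroke with the BP summaries as covariates.
df = surv.merge(summ, on="id")
cph = CoxPHFitter()
cph.fit(df[["time", "stroke", "sbp_mean", "sbp_sd"]],
        duration_col="time", event_col="stroke")
cph.print_summary()
```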


Technometrics | 2014

Case-Deletion Diagnostics for Linear Mixed Models

Jianxin Pan; Yu Fei; Peter J. Foster

Based on the Q-function, the conditional expectation of the logarithm of the joint-likelihood between responses and random effects, we propose a case-deletion approach to identify influential subjects and influential observations in linear mixed models. The models considered here are very broad in the sense that any covariance structures can be specified in the covariance matrices of the random effects and random errors. Analytically explicit forms of diagnostic measures for the fixed effects and variance components are provided. Comparisons with existing methods, including likelihood-based case-deletion and local influence methods, are made. Numerical results, including real data analysis and simulation studies, are presented for both illustration and comparison. This article has supplementary material online.
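In outline, the Q-function is the E-step quantity of the EM algorithm, and the case-deletion measures compare estimates obtained with and without a subject; a generic version (notation illustrative, not the paper's exact definitions):

$$ Q(\theta \mid \hat\theta) = \mathrm{E}\big[\log f(y, b; \theta) \mid y, \hat\theta\big], \qquad QD_i = 2\big\{ Q(\hat\theta \mid \hat\theta) - Q(\hat\theta_{[i]} \mid \hat\theta) \big\}, $$

where $\hat\theta_{[i]}$ denotes the estimate with subject $i$ deleted. A large $QD_i$ flags subject $i$ as influential, and observation-level analogues delete a single measurement rather than a whole subject.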


Journal of Computational and Graphical Statistics | 2014

Variable Selection in General Frailty Models using Penalized H-likelihood

Il Do Ha; Jianxin Pan; Seung-Young Oh; Youngjo Lee

Variable selection methods using a penalized likelihood have been widely studied in various statistical models. However, in semiparametric frailty models, these methods have been relatively less studied because the marginal likelihood function involves analytically intractable integrals, particularly when modeling multicomponent or correlated frailties. In this article, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of semiparametric frailty models, in which random effects may be shared, nested, or correlated. We consider three penalty functions (least absolute shrinkage and selection operator [LASSO], smoothly clipped absolute deviation [SCAD], and HL) in our variable selection procedure. We show that the proposed method can be easily implemented via a slight modification to existing HL estimation approaches. Simulation studies also show that the procedure using the SCAD or HL penalty performs well. The usefulness of the new method is further illustrated using three practical datasets. Supplementary materials for the article are available online.
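As a rough sketch in generic notation: the h-likelihood augments the conditional log-likelihood of the data with the log-density of the unobserved frailties, and variable selection penalizes the fixed-effect coefficients,

$$ h(\beta, v, \theta) = \log f(y \mid v; \beta) + \log f(v; \theta), \qquad h_p = h - n \sum_{j=1}^{p} J_\lambda\big(|\beta_j|\big), $$

where $J_\lambda$ is the LASSO, SCAD or HL penalty with tuning parameter $\lambda$. Maximizing the penalized h-likelihood $h_p$, together with the usual adjusted-profile steps for the dispersion parameters and baseline hazard, shrinks small coefficients exactly to zero.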


Journal of Biometrics & Biostatistics | 2011

Prior Elicitation in Bayesian Quantile Regression for Longitudinal Data

Rahim Alhamzawi; Keming Yu; Jianxin Pan

In this paper, we introduce Bayesian quantile regression for longitudinal data in terms of informative priors and Gibbs sampling. We develop methods for eliciting the prior distribution to incorporate historical data gathered from similar previous studies. The methods can be used either with no prior data or with complete prior data. The advantage of the methods is that the prior distribution changes automatically when the quantile changes. We propose Gibbs sampling methods which are computationally efficient and easy to implement. The methods are illustrated with both simulated and real data.
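For background (standard machinery rather than the paper's specific prior construction), Bayesian quantile regression at level $\tau$ is typically built on the asymmetric Laplace working likelihood, whose kernel is the check loss:

$$ \rho_\tau(u) = u\,\{\tau - I(u < 0)\}, \qquad f(y \mid x; \beta, \sigma) \propto \frac{1}{\sigma}\exp\Big\{-\frac{1}{\sigma}\,\rho_\tau\big(y - x^\top\beta\big)\Big\}. $$

Because changing $\tau$ changes the working likelihood, a prior elicited from historical data has to adapt with the quantile, which is the property the proposed elicitation scheme provides.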


Statistical Methods in Medical Research | 2016

Bayesian inference for joint modelling of longitudinal continuous, binary and ordinal events.

Qiuju Li; Jianxin Pan; John Belcher

In medical studies, repeated measurements of continuous, binary and ordinal outcomes are routinely collected from the same patient. Instead of modelling each outcome separately, in this study we propose to jointly model the trivariate longitudinal responses, so as to take account of the inherent association between the different outcomes and thus improve statistical inferences. This work is motivated by a large cohort study in the North West of England, involving trivariate responses from each patient: Body Mass Index, Depression (Yes/No) ascertained with a cut-off score of at least 8 on the Hospital Anxiety and Depression Scale, and Pain Interference generated from the Medical Outcomes Study 36-item short-form health survey, with values returned on an ordinal scale of 1–5. There are some well-established methods for combined continuous and binary, or even continuous and ordinal responses, but little work has been done on the joint analysis of continuous, binary and ordinal responses. We propose conditional joint random-effects models, which take into account the inherent association between the continuous, binary and ordinal outcomes. Bayesian analysis methods are used to make statistical inferences. Simulation studies show that, by jointly modelling the trivariate outcomes, standard deviations of the parameter estimates are smaller and much more stable, leading to more efficient parameter estimates and reliable statistical inferences. In the real data analysis, the proposed joint analysis yields a much smaller deviance information criterion value than the separate analyses, and exhibits other good statistical properties.
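A schematic of the kind of random-effects linkage involved, in generic notation rather than the paper's exact conditional specification: each outcome gets its own regression, tied together through correlated subject-level random effects,

$$ y_{1ij} = x_{ij}^\top\beta_1 + b_{1i} + \varepsilon_{ij}, \qquad \operatorname{logit} P(y_{2ij} = 1 \mid b_{2i}) = x_{ij}^\top\beta_2 + b_{2i}, \qquad P(y_{3ij} \le c \mid b_{3i}) = \Phi\big(\alpha_c - x_{ij}^\top\beta_3 - b_{3i}\big), $$

with $(b_{1i}, b_{2i}, b_{3i})^\top \sim N(0, \Sigma_b)$. The off-diagonal elements of $\Sigma_b$ carry the association between body mass index, depression and pain interference that separate analyses would ignore.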


Statistics in Medicine | 2014

Joint longitudinal and survival-cure models in tumour xenograft experiments.

Jianxin Pan; Yanchun Bao; Hongsheng Dai; Hong-Bin Fang

In tumour xenograft experiments, treatment regimens are administered, and the tumour volume of each individual is measured repeatedly over time. Survival data are recorded because some individuals die during the observation period, and cure data are observed because a portion of individuals are completely cured during the experiments. When modelling these data, certain constraints have to be imposed on the parameters in the models to account for the intrinsic growth of the tumour in the absence of treatment. Also, the likely inherent association of the longitudinal and survival-cure data has to be taken into account in order to obtain unbiased estimators of parameters. In this paper, we propose models for the joint modelling of longitudinal and survival-cure data arising in xenograft experiments. Estimators of parameters in the joint models are obtained using a Markov chain Monte Carlo approach. A real data analysis of a xenograft experiment is carried out, and simulation studies are also conducted, showing that the proposed joint modelling approach outperforms separate modelling methods in the sense of mean squared errors.
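The survival-cure component typically takes a mixture form; a heavily simplified sketch linking it to the tumour-volume trajectory through shared random effects (notation illustrative, not the paper's exact model):

$$ S_i(t) = \pi_i + (1 - \pi_i)\,S_u(t \mid b_i), \qquad \operatorname{logit}(\pi_i) = w_i^\top\alpha, \qquad \log V_i(t) = g(t; \beta, b_i) + \varepsilon_i(t), $$

where $\pi_i$ is the cure probability, $S_u$ the survival function of the uncured, $V_i(t)$ the tumour volume, and the shared random effects $b_i$ induce the association between the longitudinal and survival-cure outcomes; constraints on $\beta$ encode intrinsic tumour growth in the absence of treatment.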

Collaboration


Dive into Jianxin Pan's collaborations.

Top Co-Authors

Hua Wang, Kunming University of Science and Technology
Jianxin Xu, Kunming University of Science and Technology
Qingtai Xiao, Kunming University of Science and Technology
Yu Fei, Yunnan University of Finance and Economics
Kun Liu, King's College London