Publication


Featured research published by Zhiliang Ying.


Journal of the Royal Statistical Society: Series B (Statistical Methodology) | 2000

Semiparametric regression for the mean and rate functions of recurrent events

D. Y. Lin; L. J. Wei; I. Yang; Zhiliang Ying

The counting process with the Cox‐type intensity function has been commonly used to analyse recurrent event data. This model essentially assumes that the underlying counting process is a time‐transformed Poisson process and that the covariates have multiplicative effects on the mean and rate function of the counting process. Recently, Pepe and Cai, and Lawless and co‐workers have proposed semiparametric procedures for making inferences about the mean and rate function of the counting process without the Poisson‐type assumption. In this paper, we provide a rigorous justification of such robust procedures through modern empirical process theory. Furthermore, we present an approach to constructing simultaneous confidence bands for the mean function and describe a class of graphical and numerical techniques for checking the adequacy of the fitted mean–rate model. The advantages of the robust procedures are demonstrated through simulation studies. An illustration with multiple‐infection data taken from a clinical study on chronic granulomatous disease is also provided.


Applied Psychological Measurement | 1996

A Global Information Approach to Computerized Adaptive Testing

Hua Hua Chang; Zhiliang Ying

Most item selection in computerized adaptive testing is based on Fisher information (or item information). At each stage, an item is selected to maximize the Fisher information at the currently estimated trait level (θ). However, this application of Fisher information could be much less efficient than assumed if the estimators are not close to the true θ, especially at early stages of an adaptive test, when the test length (number of items) is too short to provide an accurate estimate of the true θ. It is argued here that selection procedures based on global information should be used, at least at early stages of a test, when θ estimates are not likely to be close to the true θ. For this purpose, an item selection procedure based on average global information is proposed. Results from pilot simulation studies comparing the usual maximum item information selection with the proposed global information approach are reported, indicating that the new method leads to improvement in terms of bias and mean squared error reduction under many circumstances. Index terms: computerized adaptive testing, Fisher information, global information, information surface, item information, item response theory, Kullback-Leibler information, local information, test information.
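As a rough illustration of the idea, the sketch below computes a Kullback-Leibler "global information" index for a single 3PL item by averaging the KL divergence over an interval around the current trait estimate. The function names and the simple Riemann-sum integration are illustrative choices, not the paper's implementation.

```python
import numpy as np

def p3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def kl_item(theta_hat, theta, a, b, c):
    """KL divergence between the item response distributions at
    theta_hat (current estimate) and a candidate true ability theta."""
    p0 = p3pl(theta_hat, a, b, c)
    p = p3pl(theta, a, b, c)
    return p0 * np.log(p0 / p) + (1.0 - p0) * np.log((1.0 - p0) / (1.0 - p))

def kl_index(theta_hat, a, b, c, delta=1.0, n=201):
    """Global information index: average KL over an interval around
    theta_hat, times the interval width (a crude Riemann integral)."""
    grid = np.linspace(theta_hat - delta, theta_hat + delta, n)
    return np.mean(kl_item(theta_hat, grid, a, b, c)) * 2.0 * delta
```

In a CAT loop, the item maximizing `kl_index` at the current estimate would be administered next; because the index integrates over a neighborhood of the estimate rather than evaluating information at a single point, it is less sensitive to early estimation error.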


Journal of the American Statistical Association | 1995

Survival analysis with median regression models

Zhiliang Ying; Sin-Ho Jung; L. J. Wei

The median is a simple and meaningful measure for the center of a long-tailed survival distribution. To examine the covariate effects on survival, a natural alternative to the usual mean regression model is to regress the median of the failure time variable, or a transformation thereof, on the covariates. In this article we propose semiparametric procedures for making inferences about such median regression models with possibly censored observations. Our proposals can be implemented efficiently using a simulated annealing algorithm. Numerical studies are conducted to show the advantage of the new procedures over some recently developed methods for the accelerated failure time model, a special type of mean regression model in survival analysis. The proposals discussed in the article are illustrated with a lung cancer data set.


Journal of the American Statistical Association | 1993

Cox regression with incomplete covariate measurements

D. Y. Lin; Zhiliang Ying

This article provides a general solution to the problem of missing covariate data under the Cox regression model. The estimating function for the vector of regression parameters is an approximation to the partial likelihood score function with full covariate measurements and reduces to the pseudolikelihood score function of Self and Prentice in the special setting of case-cohort designs. The resulting parameter estimator is consistent and asymptotically normal, with a covariance matrix for which a simple and consistent estimator is provided. Extensive simulation studies show that the large-sample approximations are adequate for practical use. The proposed approach tends to be more efficient than the complete-case analysis, especially for large cohorts with infrequent failures. For case-cohort designs, the new methodology offers a variance-covariance estimator that is much easier to calculate than the existing ones and allows multiple subcohort augmentations to improve efficiency. Real data taken f...


Applied Psychological Measurement | 1999

a-Stratified Multistage Computerized Adaptive Testing

Hua Hua Chang; Zhiliang Ying

Computerized adaptive tests (CATs) commonly use item selection methods that select the item providing maximum information at an examinee's estimated trait level. However, these methods can yield extremely skewed item exposure distributions. For tests based on the three-parameter logistic model, it was found that administering items with low discrimination parameter (a) values early in the test and administering those with high a values later was advantageous; the skewness of item exposure distributions was reduced while efficiency was maintained in trait level estimation. Thus, a new multistage adaptive testing approach is proposed that factors a into the item selection process. In this approach, the items in the item bank are stratified into a number of levels based on their a values. The early stages of a test use items with lower a values and later stages use items with higher a values. At each stage, items are selected according to an optimization criterion from the corresponding level. Simulation studies were performed to compare a-stratified CATs with CATs based on the Sympson-Hetter method for controlling item exposure. Results indicated that this new strategy led to tests that were well balanced with respect to item exposure and efficient. The a-stratified CATs achieved a lower average exposure rate than CATs based on Bayesian or information-based item selection and the Sympson-Hetter method.
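A minimal sketch of the stratify-then-match idea (the names are illustrative, and the paper's within-stratum optimization criterion may differ): sort the bank by a, split it into strata administered low-a first, and within the current stratum pick the unused item whose difficulty b is closest to the current trait estimate.

```python
import numpy as np

def stratify_by_a(a_values, n_strata):
    """Sort items by discrimination a and split into n_strata groups;
    low-a strata are administered in the early stages of the test."""
    order = np.argsort(a_values)
    return [[int(i) for i in s] for s in np.array_split(order, n_strata)]

def select_item(stratum, b_values, theta_hat, used):
    """Within the current stratum, choose the unused item whose
    difficulty b is closest to the current ability estimate."""
    candidates = [i for i in stratum if i not in used]
    return min(candidates, key=lambda i: abs(b_values[i] - theta_hat))
```

Because every stratum is eventually drawn from, exposure spreads across the bank instead of concentrating on the few highest-a items.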


Journal of the American Statistical Association | 2001

Semiparametric and Nonparametric Regression Analysis of Longitudinal Data

D. Y. Lin; Zhiliang Ying

This article deals with the regression analysis of repeated measurements taken at irregular and possibly subject-specific time points. The proposed semiparametric and nonparametric models postulate that the marginal distribution of the repeatedly measured response variable Y at time t is related to the vector of possibly time-varying covariates X through the equations E{Y(t)|X(t)} = α0(t) + β′0X(t) and E{Y(t)|X(t)} = α0(t) + β′0(t)X(t), where α0(t) is an arbitrary function of t, β0 is a vector of constant regression coefficients, and β0(t) is a vector of time-varying regression coefficients. The stochastic structure of the process Y(·) is completely unspecified. We develop a class of least squares type estimators for β0, which is proven to be n½-consistent and asymptotically normal with simple variance estimators. Furthermore, we develop a closed-form estimator for a cumulative function of β0(t), which is shown to be n½-consistent and, on proper normalization, converges weakly to a zero-mean Gaussian process with an easily estimated covariance function. Extensive simulation studies demonstrate that the asymptotic approximations are accurate for moderate sample sizes and that the efficiencies of the proposed semiparametric estimators are high relative to their parametric counterparts. An illustration with longitudinal CD4 cell count data taken from an HIV/AIDS clinical trial is provided.


Journal of Clinical Oncology | 2006

New MUC1 Serum Immunoassay Differentiates Pancreatic Cancer From Pancreatitis

David V. Gold; David E. Modrak; Zhiliang Ying; Thomas M. Cardillo; Robert M. Sharkey; David M. Goldenberg

PURPOSE To evaluate a new immunoassay for identification and quantitation of MUC1 in the sera of patients with pancreatic cancer or pancreatitis. The sensitivity and specificity of the assay are examined and compared with results from a CA19-9 immunoassay. METHODS An in vitro enzyme immunoassay was established with monoclonal antibody PAM4 as the capture reagent and a polyclonal anti-MUC1 antibody as the probe. Sera were obtained from healthy adults, patients with acute or chronic pancreatitis, and patients with pancreatic and other forms of cancer, and were measured for PAM4-reactive MUC1. RESULTS At a cutoff of 10.2 units/mL, 41 (77%) of 53 pancreatic cancer patients were positive, compared with none of the healthy individuals (n = 43) and only four (5%) of 87 patients with pancreatitis. Among the nonpancreatic cancers investigated, colorectal cancers gave the highest percentage of positives (14%; five of 36). Overall, the sensitivity and specificity of the immunoassay for pancreatic cancer were 77% and 95%, respectively. Receiver operating characteristic analyses for discrimination of pancreatic cancer from pancreatitis provided an area under the curve of 0.89 (95% CI, 0.82 to 0.93), with a specificity of 95.4% and a positive likelihood ratio of 16.8. A direct pairwise comparison of the PAM4 and CA19-9 immunoassays for discrimination of pancreatic cancer from pancreatitis showed a significant difference (P < .003), with the PAM4 immunoassay demonstrating superior sensitivity and specificity. CONCLUSION The high sensitivity and specificity observed suggest that the PAM4-based immunoassay of circulating MUC1 may be useful in the diagnosis of pancreatic cancer.


Journal of the American Statistical Association | 1997

Predicting Survival Probabilities with Semiparametric Transformation Models

S. C. Cheng; L. J. Wei; Zhiliang Ying

Prediction of survival probabilities for future patients is one of the main goals of fitting survival data with regression models. In this article we consider a large class of semiparametric transformation models, which includes the well-known proportional hazards and proportional odds models, for the analysis of failure time data. Specifically, we propose pointwise and simultaneous confidence interval procedures for the survival probability of future patients with specific covariates. These procedures can be easily implemented through simulation and are illustrated with data from two well-known clinical studies.


Applied Psychological Measurement | 2001

a-Stratified Multistage Computerized Adaptive Testing with b Blocking

Hua Hua Chang; Jiahe Qian; Zhiliang Ying

Chang & Ying’s (1999) computerized adaptive testing item-selection procedure stratifies the item bank according to a parameter values and requires b parameter values to be evenly distributed across all strata. Thus, a and b parameter values must be incorporated into how strata are formed. A refinement is proposed, based on Weiss’ (1973) stratification of items according to b values. Simulation studies using a retired item bank of a Graduate Record Examination test indicate that the new approach improved control of item exposure rates and reduced mean squared errors.
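The b-blocking refinement can be sketched as follows, assuming for simplicity that the bank divides into equal-sized blocks (names and block sizes are illustrative): group items into blocks of similar difficulty b, then distribute each block's items across the a-strata by their a rank, so every stratum inherits the full range of difficulties.

```python
import numpy as np

def stratify_a_with_b_blocking(a, b, n_strata):
    """Group items into blocks of similar difficulty b; within each block,
    rank items by discrimination a and send the j-th ranked item to
    stratum j, so every a-stratum spans the full range of b values."""
    order_b = np.argsort(b)
    blocks = np.array_split(order_b, len(a) // n_strata)
    strata = [[] for _ in range(n_strata)]
    for block in blocks:
        for j, item in enumerate(block[np.argsort(a[block])]):
            strata[j].append(int(item))
    return strata
```

Compared with stratifying on a alone, this keeps the b distribution roughly even across strata while still ordering strata from low to high a.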


Journal of Econometrics | 2000

Simple resampling methods for censored regression quantiles

Yannis Bilias; Songnian Chen; Zhiliang Ying

Powell (Journal of Econometrics 25 (1984) 303–325; Journal of Econometrics 32 (1986) 143–155) considered censored regression quantile estimators. The asymptotic covariance matrices of his estimators depend on the error densities and are therefore difficult to estimate reliably. The difficulty may be avoided by applying the bootstrap method (Hahn, Econometric Theory 11 (1995) 105–121). Calculation of the estimators, however, requires solving a nonsmooth and nonconvex minimization problem, resulting in high computational costs when implementing the bootstrap. In this paper we propose computationally simple resampling methods by convexifying Powell's approach in the resampling stage. A major advantage of the new methods is that they can be implemented by efficient linear programming. Simulation studies show that the methods are reliable even with moderate sample sizes.
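To make the convexification concrete, here is a hedged sketch in my own notation, not the authors' code: Powell's objective contains max(0, x′β) inside the check-function residual and is therefore nonconvex, while a resampling-stage objective that fixes the censoring region at the original estimate β̂ reduces to an ordinary weighted quantile regression, convex in β and solvable by linear programming.

```python
import numpy as np

def check(u, tau):
    """Koenker-Bassett check function rho_tau(u)."""
    return u * (tau - (u < 0))

def powell_objective(beta, x, y, tau):
    """Powell's censored quantile objective; the max(0, .) inside the
    residual makes it nonconvex in beta."""
    return np.sum(check(y - np.maximum(0.0, x @ beta), tau))

def convexified_objective(beta, x, y, tau, beta_hat, w):
    """Resampling-stage objective: drop observations predicted censored
    at the original estimate (x_i' beta_hat <= 0), so the remaining
    weighted problem is an ordinary, convex quantile regression."""
    keep = (x @ beta_hat) > 0
    return np.sum(w[keep] * check(y[keep] - x[keep] @ beta, tau))
```

Each bootstrap draw only changes the weights w, so every resampled estimate comes from the same convex program rather than a fresh nonconvex search.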

Collaboration


Dive into Zhiliang Ying's collaborations.

Top Co-Authors

D. Y. Lin
University of North Carolina at Chapel Hill

Kani Chen
Hong Kong University of Science and Technology

Xiaoou Li
University of Minnesota

David M. Goldenberg
Pennsylvania State University

Gongjun Xu
University of Michigan