Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Guo-Liang Tian is active.

Publication


Featured research published by Guo-Liang Tian.


Blood | 2008

Fc dependent expression of CD137 on human NK cells: insights into "agonistic" effects of anti-CD137 monoclonal antibodies

Wei Lin; Caroline J. Voskens; Xiaoyu Zhang; Daniel Schindler; Aaron Wood; Erin Burch; Yadong Wei; Lieping Chen; Guo-Liang Tian; Koji Tamada; Lai-Xi Wang; Dan H. Schulze; Dean L. Mann; Scott E. Strome

CD137 (4-1BB) is a costimulatory molecule that can be manipulated for the treatment of cancer and autoimmune disease. Although agonistic monoclonal antibodies (mAbs) against CD137 are known to enhance the rejection of murine tumors in a natural killer (NK) cell- and T cell-dependent fashion, the mechanism for NK dependence is poorly understood. In this study, we evaluated the ability of 2 different glycoforms of a chimerized antihuman CD137 mAb, an aglycosylated (GA) and a low-fucose form (GG), to react with human NK cells. Both mAbs bound similarly to CD137 and partially blocked the interaction between CD137 and CD137 ligand. However, unlike the GA mAb, immobilized GG mAb activated NK cells and enhanced CD137 expression. These effects were seemingly dependent on Fc interaction with putative Fc receptors on the NK-cell surface, as the immobilized Fc fragment of GG alone was sufficient to induce CD137 expression. Furthermore, CD137 expression could be enhanced with antibodies directed against non-CD137 epitopes, and the expression levels directly correlated with patterns of Fc glycosylation recognized to improve Fc interaction with Fc gamma receptors. Our data suggest that CD137 expression on NK cells can be enhanced in an Fc-dependent fashion and that this expression correlates with phenotypic and functional parameters of activation.


The American Statistician | 2009

Sample surveys with sensitive questions: a nonrandomized response approach

Ming Tan; Guo-Liang Tian; Man-Lai Tang

Since Warner's randomized response (RR) model for soliciting sensitive information was proposed in 1965, it has been used and extended in a broad range of surveys involving sensitive questions. It is limited, however, by a lack of reproducibility, by interviewees' distrust, and by the higher cost of randomizing devices. Recent developments in the alternative non-randomized response (NRR) approach have shown promise in alleviating or eliminating these limitations. However, the efficiency and feasibility of the NRR models have not been adequately studied. This article briefly introduces the NRR approach, proposes several new NRR models, compares the efficiency of the NRR and RR models, and studies the feasibility of the NRR models. In addition, we propose the concept of the degree of privacy protection for comparing the NRR models with the Warner model, reflecting the extent to which privacy is protected. These studies show that the NRR approach is not only free of the limitations of the randomized approach but also increases relative efficiency and the degree of privacy protection. The non-randomized response approach thus offers an attractive alternative to the randomized response approach.
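For reference, Warner's original RR estimator, against which the NRR models are benchmarked, has a simple closed form. A minimal sketch (the function name and interface are our own, not from the paper):

```python
def warner_estimate(n_yes, n, p):
    """MLE of the sensitive-trait proportion pi under Warner's (1965)
    randomized response design: each respondent answers the sensitive
    question with probability p and its complement with 1 - p (p != 1/2)."""
    lam = n_yes / n                                     # observed "yes" rate
    pi_hat = (lam - (1 - p)) / (2 * p - 1)              # MLE of pi
    var_hat = lam * (1 - lam) / (n * (2 * p - 1) ** 2)  # its estimated variance
    return pi_hat, var_hat

# With p = 0.7 and a true pi of 0.3, the expected "yes" rate is
# 0.3 * 0.7 + 0.7 * 0.3 = 0.42, and the estimator recovers pi = 0.3.
```

The factor (2p - 1)^2 in the denominator of the variance is exactly the efficiency loss the randomizing device imposes, which is what the NRR designs aim to reduce.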


Archive | 2009

Bayesian Missing Data Problems: EM, Data Augmentation and Noniterative Computation

Ming T. Tan; Guo-Liang Tian; Kai Wang Ng

Contents:
Introduction: Background; Scope, Aim and Outline; Inverse Bayes Formulae (IBF); The Bayesian Methodology; The Missing Data Problems; Entropy
Optimization, Monte Carlo Simulation and Numerical Integration: Optimization; Monte Carlo Simulation; Numerical Integration
Exact Solutions: Sample Surveys with Nonresponse; Misclassified Multinomial Model; Genetic Linkage Model; Weibull Process with Missing Data; Prediction Problem with Missing Data; Binormal Model with Missing Data; The 2 x 2 Crossover Trial with Missing Data; Hierarchical Models; Nonproduct Measurable Space (NPMS)
Discrete Missing Data Problems: The Exact IBF Sampling; Genetic Linkage Model; Contingency Tables with One Supplemental Margin; Contingency Tables with Two Supplemental Margins; The Hidden Sensitivity Model for Surveys with Two Sensitive Questions; Zero-Inflated Poisson Model; Changepoint Problems; Capture-Recapture Model
Computing Posteriors in the EM-Type Structures: The IBF Method; Incomplete Pre-Post Test Problems; Right Censored Regression Model; Linear Mixed Models for Longitudinal Data; Probit Regression Models for Independent Binary Data; A Probit-Normal GLMM for Repeated Binary Data; Hierarchical Models for Correlated Binary Data; Hybrid Algorithms: Combining the IBF Sampler with the Gibbs Sampler; Assessing Convergence of MCMC Methods; Remarks
Constrained Parameter Problems: Linear Inequality Constraints; Constrained Normal Models; Constrained Poisson Models; Constrained Binomial Models
Checking Compatibility and Uniqueness: Introduction; Two Continuous Conditional Distributions: Product Measurable Space (PMS); Finite Discrete Conditional Distributions: PMS; Two Conditional Distributions: NPMS; One Marginal and Another Conditional Distribution
Appendix: Basic Statistical Distributions and Stochastic Processes (Discrete Distributions; Continuous Distributions; Mixture Distributions; Stochastic Processes)
References; Author Index; Subject Index. Problems appear at the end of each chapter.


Computational Statistics & Data Analysis | 2007

Predictive analyses for nonhomogeneous Poisson processes with power law using Bayesian approach

Jun-Wu Yu; Guo-Liang Tian; Man-Lai Tang

The nonhomogeneous Poisson process (NHPP) with power law, also known as the Weibull process, has been widely used for modeling hardware reliability growth and detecting software failures. Although statistical inference for the Weibull process has been studied extensively by various authors, relevant discussions of predictive analysis are scattered in the literature. Predictive analysis is particularly useful for determining when to terminate the development testing process. This paper presents some results on predictive analyses for Weibull processes. Motivated by the demand for developing complex high-cost, high-reliability systems (e.g., weapon systems, aircraft generators, jet engines), we address several issues in single-sample and two-sample prediction closely associated with development testing programs. Bayesian approaches based on noninformative priors are adopted to develop explicit solutions to these problems. We apply our methodologies to two real examples from a radar system development and an electronics system development.


Computational Statistics & Data Analysis | 2008

Statistical inference and prediction for the Weibull process with incomplete observations

Jun-Wu Yu; Guo-Liang Tian; Man-Lai Tang

In this article, statistical inference and prediction analyses for the Weibull process with incomplete observations are studied via a classical approach. Specifically, we consider the case in which failures occurring in the early developmental phase of a testing program cannot be observed. We derive closed-form expressions for the maximum likelihood estimates of the parameters in both the failure- and time-truncated Weibull processes. Confidence intervals and hypothesis tests for the parameters of interest are considered. In addition, predictive inferences for future failures and a goodness-of-fit test of the model are developed. Two real examples, from an engine system development study and a Boeing air-conditioning system development study, are presented to illustrate the proposed methodologies.
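For the complete-data, time-truncated case the MLEs take a well-known closed form; a minimal sketch under the usual parameterization of the intensity as lambda * beta * t^(beta - 1) on (0, T] (the function name is illustrative, and the incomplete-observation estimators derived in the paper modify these formulas):

```python
import math

def weibull_process_mle(times, T):
    """Closed-form MLEs for a time-truncated power-law (Weibull) process
    with intensity lambda * beta * t**(beta - 1), given the observed
    failure times in (0, T]."""
    n = len(times)
    beta_hat = n / sum(math.log(T / t) for t in times)
    lam_hat = n / T ** beta_hat   # fitted expected count on (0, T] equals n
    return beta_hat, lam_hat

beta, lam = weibull_process_mle([1.0, 2.0, 4.0, 8.0], T=10.0)
# beta < 1 indicates reliability growth (failure intensity decreasing in t).
```

In the failure-truncated case, T is replaced by the last observed failure time in the same formulas.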


Statistics in Medicine | 2009

Confidence intervals for a difference between proportions based on paired data

Man-Lai Tang; Man Ho Ling; Leevan Ling; Guo-Liang Tian

We construct several explicit asymptotic two-sided confidence intervals (CIs) for the difference between two correlated proportions using the method of variance estimates recovery (MOVER). The basic idea is to recover the variance estimates required for the proportion difference from the confidence limits for the single proportions. The single-proportion CI estimators incorporated into the MOVER include the Agresti-Coull, Wilson, and Jeffreys CIs. Our simulation results show that the MOVER-type CIs based on the continuity-corrected Phi coefficient and the Tango score CI perform satisfactorily in small-sample designs and sparse data structures. We illustrate the proposed CIs with several real examples.
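The MOVER recipe itself is short. A sketch for the simpler independent-sample case with Wilson limits (the paired-data intervals studied in the paper additionally subtract a covariance term estimated through the Phi coefficient, which is omitted here; function names are our own):

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for a single binomial proportion."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def mover_diff_ci(x1, n1, x2, n2):
    """MOVER CI for p1 - p2: recover the variance estimates from the
    distances between each point estimate and its Wilson limits."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1)
    l2, u2 = wilson_ci(x2, n2)
    d = p1 - p2
    lower = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper
```

Swapping `wilson_ci` for an Agresti-Coull or Jeffreys interval changes only the limits being recovered, which is exactly the modularity the abstract describes.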


Statistics in Medicine | 2009

Exact and approximate unconditional confidence intervals for proportion difference in the presence of incomplete data

Man-Lai Tang; Man Ho Ling; Guo-Liang Tian

Confidence interval (CI) construction for the proportion/rate difference with paired binary data has become a standard procedure in many clinical trials and medical studies. When the sample size is small and incomplete data are present, asymptotic CIs may be dubious and exact CIs are not yet available. In this article, we propose exact and approximate unconditional test-based methods for constructing CIs for the proportion/rate difference in the presence of incomplete paired binary data. Approaches based on one- and two-sided Wald tests are considered. Unlike asymptotic CI estimators, exact unconditional CI estimators always guarantee coverage probabilities at or above the pre-specified confidence level. Our empirical studies further show that (i) approximate unconditional CI estimators usually yield a shorter expected confidence width (ECW), with coverage probabilities well controlled around the pre-specified confidence level; and (ii) the ECWs of the unconditional two-sided-test-based CI estimators are generally narrower than those of the unconditional one-sided-test-based CI estimators. Moreover, the ECWs of asymptotic CIs are not necessarily narrower than those of the two-sided-test-based exact unconditional CIs. Two real examples are used to illustrate our methodologies.


Statistical Methods in Medical Research | 2011

Sample size determination for the non-randomised triangular model for sensitive questions in a survey

Guo-Liang Tian; Man-Lai Tang; Zhenqiu Liu; Ming Tan; Nian-Sheng Tang

Sample size determination is an essential component of public health survey designs on sensitive topics (e.g. drug abuse, homosexuality, induced abortion and pre- or extramarital sex). Recently, non-randomised models have been shown to be efficient and cost-effective designs compared with randomised response models. However, sample size formulae for such non-randomised designs are not yet available. In this article, we derive sample size formulae for the non-randomised triangular design based on the power analysis approach. We first consider the one-sample problem. Power functions and their corresponding sample size formulae for one- and two-sided tests based on the large-sample normal approximation are derived. The performance of the sample size formulae is evaluated in terms of (i) the accuracy of the power values based on the estimated sample sizes and (ii) the ratio of the sample sizes required for the non-randomised triangular design and the design of direct questioning (DDQ). We also numerically compare the sample sizes required for the randomised Warner design with those required for the DDQ and the non-randomised triangular design. Theoretical justification is provided. Furthermore, we extend the one-sample problem to the two-sample problem. An example based on an induced abortion study in Taiwan is presented to illustrate the proposed methods.
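To make the design trade-off concrete, here is a rough large-sample sketch of a two-sided sample size calculation for the triangular design. The model structure lambda = p + (1 - p) * pi, with an innocuous paired item of known prevalence p, is the standard one; the exact formulae and refinements in the paper may differ, and the function name is ours:

```python
from math import ceil, sqrt
from statistics import NormalDist

def triangular_sample_size(pi0, pi1, p, alpha=0.05, power=0.80):
    """Approximate n for a two-sided test of H0: pi = pi0 against
    pi = pi1 under the non-randomised triangular design, where the
    observable 'triangle' probability is lambda = p + (1 - p) * pi
    and hence pi_hat = (lambda_hat - p) / (1 - p)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    lam0 = p + (1 - p) * pi0
    lam1 = p + (1 - p) * pi1
    s0 = sqrt(lam0 * (1 - lam0)) / (1 - p)   # per-obs sd of pi_hat under H0
    s1 = sqrt(lam1 * (1 - lam1)) / (1 - p)   # per-obs sd of pi_hat under H1
    return ceil(((z_a * s0 + z_b * s1) / (pi1 - pi0)) ** 2)

# Setting p = 0 recovers direct questioning; larger p protects privacy
# more but inflates the required n, which is the ratio studied in (ii).
```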


Computational Statistics & Data Analysis | 2008

Efficient methods for estimating constrained parameters with applications to regularized (lasso) logistic regression

Guo-Liang Tian; Man-Lai Tang; Hong-Bin Fang; Ming Tan

Fitting logistic regression models is challenging when their parameters are restricted. In this article, we first develop a quadratic lower-bound (QLB) algorithm for optimization with box or linear inequality constraints and derive the fastest QLB algorithm, corresponding to the smallest global majorization matrix. The proposed QLB algorithm is particularly suited to problems for which EM-type algorithms are not applicable (e.g., logistic, multinomial logistic, and Cox's proportional hazards models), while it retains the EM ascent property and thus assures monotonic convergence. Second, we generalize the QLB algorithm to penalized problems in which the penalty functions may not be totally differentiable. The proposed method thus provides an alternative algorithm for estimation in lasso logistic regression, where convergence of the existing lasso algorithm is not generally ensured. Finally, by relaxing the ascent requirement, convergence can be further accelerated. We introduce a pseudo-Newton method that retains the simplicity of the QLB algorithm and the fast convergence of the Newton method. Theoretical justification and numerical examples show that the pseudo-Newton method is up to 71 times faster than the fastest QLB algorithm in terms of CPU time (107 times in terms of number of iterations), making bootstrap variance estimation feasible. Simulations and comparisons are performed, and three real examples (Down syndrome data, kyphosis data, and colon microarray data) are analyzed to illustrate the proposed methods.
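In the unconstrained, unpenalized special case, the QLB iteration for logistic regression reduces to the classical Bohning-Lindsay bound B = X'X / 4. A minimal sketch of that special case (the paper's contributions, namely the smallest majorization matrix under constraints, the lasso extension and the pseudo-Newton acceleration, are not shown):

```python
import numpy as np

def qlb_logistic(X, y, iters=500):
    """Quadratic lower-bound (QLB) iterations for unconstrained logistic
    regression. The negative log-likelihood is majorized by a quadratic
    with fixed curvature B = X'X / 4 (since mu(1-mu) <= 1/4), so every
    iteration reuses the same matrix inverse and ascent is guaranteed."""
    B_inv = np.linalg.inv(X.T @ X / 4.0)
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        beta = beta + B_inv @ X.T @ (y - mu)   # score step scaled by B^{-1}
    return beta
```

Because B is fixed, the per-iteration cost is one matrix-vector product, which is what makes the method attractive when Newton steps (with a changing Hessian) are expensive or constraints make them infeasible.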


Computational Statistics & Data Analysis | 2007

On improved EM algorithm and confidence interval construction for incomplete r×c tables

Man-Lai Tang; Kai Wang Ng; Guo-Liang Tian; Ming Tan

Constructing confidence intervals (CIs) for functions of cell probabilities (e.g., rate difference, rate ratio and odds ratio) is a standard procedure in categorical data analysis for clinical trials and medical studies. In the presence of incomplete data, existing methods can be problematic. For example, the inverse of the observed information matrix may not exist, in which case asymptotic CIs based on delta methods are not available; and even when the inverse exists, large-sample delta methods are generally unreliable in small-sample studies. In addition, the existing expectation-maximization (EM) algorithm via conventional data augmentation (DA) may suffer from slow convergence due to the introduction of too many latent variables. In this article, for r×c tables with incomplete data, we propose a novel DA scheme that requires fewer latent variables and consequently leads to a more efficient EM algorithm. We present two bootstrap-type CIs for parameters of interest via the new EM algorithm, with and without the normality assumption. For r×c tables with only one incomplete/supplementary margin, the improved EM algorithm converges in a single step, and the associated maximum likelihood estimates can hence be obtained in closed form. Theoretical and simulation results show that the proposed EM algorithm outperforms the existing EM algorithm. Three real data sets, from a neurological study, a rheumatoid arthritis study and a wheeze study, are used to illustrate the methodologies.
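As a baseline, the conventional EM for a table with a single supplementary row margin takes only a few lines. This is the standard scheme, not the paper's reduced-latent-variable DA; for this one-margin case the MLE is also available in closed form as p_ij = (n_ij / n_i+) * (n_i+ + m_i) / N, which the iteration converges to:

```python
import numpy as np

def em_rxc_one_margin(full, row_supp, tol=1e-10, max_iter=200):
    """EM for an r x c table where `full` holds fully classified counts
    and `row_supp` holds counts classified by row only (column missing)."""
    N = full.sum() + row_supp.sum()
    p = np.full(full.shape, 1.0 / full.size)          # start from uniform
    for _ in range(max_iter):
        cond = p / p.sum(axis=1, keepdims=True)       # E-step: P(col | row)
        expected = full + row_supp[:, None] * cond    # complete the table
        p_new = expected / N                          # M-step: multinomial MLE
        if np.abs(p_new - p).max() < tol:
            return p_new
        p = p_new
    return p
```

With two supplementary margins (rows for some units, columns for others) no closed form exists and the choice of DA scheme, the paper's central concern, starts to matter for convergence speed.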

Collaboration


Dive into Guo-Liang Tian's collaborations.

Top Co-Authors

Man-Lai Tang, Hang Seng Management College
Ming Tan, Georgetown University
Kai Wang Ng, University of Hong Kong
Hong-Bin Fang, Georgetown University Medical Center
Jun-Wu Yu, Hunan University of Science and Technology
Yin Liu, University of Hong Kong
Chi Zhang, University of Hong Kong
Zhenqiu Liu, Cedars-Sinai Medical Center