Publication


Featured research published by Harry Yang.


Journal of Biopharmaceutical Statistics | 2000

ROC Surface: A Generalization of ROC Curve Analysis

Harry Yang; David Carlin

Receiver operating characteristic (ROC) curve analysis is widely used in biomedical research to assess the performance of diagnostic tests. Much of the work has been directed at developing accurate indices to describe ROC curves and appropriate statistics to test differences between them. The analysis, however, is largely built on the assumption that the test results are dichotomous. We generalize ROC curve analysis to tests with more than two outcomes. The generalized ROC curve constitutes a surface, and we propose using the volume under the surface to measure the accuracy of a diagnostic test.
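
The volume under the ROC surface (VUS) has a simple empirical estimator: the proportion of triples, one measurement drawn from each of the three ordered classes, that are correctly ordered. A minimal sketch in Python (the normal data, class labels, and sample sizes are hypothetical, invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical test results for three ordered diagnostic classes
    x = rng.normal(0.0, 1.0, 50)   # e.g., healthy
    y = rng.normal(1.0, 1.0, 50)   # e.g., intermediate
    z = rng.normal(2.0, 1.0, 50)   # e.g., diseased

    # Empirical VUS: fraction of (x_i, y_j, z_k) triples with x_i < y_j < z_k.
    # A useless test gives VUS = 1/6; a perfect test gives VUS = 1.
    ordered = (x[:, None, None] < y[None, :, None]) & \
              (y[None, :, None] < z[None, None, :])
    print(f"empirical VUS = {ordered.mean():.3f}")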


Dissolution Technologies | 2016

In Vitro Dissolution Curve Comparisons: A Critique of Current Practice

Dave LeBlond; Stan Altan; Steven Novick; John J. Peterson; Yan Shen; Harry Yang

Many pharmacologically active molecules are formulated as solid dosage form drug products. Following oral administration, the diffusion of an active molecule from the gastrointestinal tract into systemic distribution requires the disintegration of the dosage form followed by the dissolution of the molecule in the stomach lumen. Its dissolution properties may have a direct impact on its bioavailability and subsequent therapeutic effect. Consequently, dissolution (or in vitro release) testing has been the subject of intense scientific and regulatory interest over the past several decades. Much interest has focused on models describing in vitro release profiles over time, and a number of methods have been proposed for testing similarity of profiles. In this article, we review previously published work on dissolution profile similarity testing and provide a detailed critique of current methods in order to set the stage for a Bayesian approach.


Journal of Biopharmaceutical Statistics | 2015

Dissolution Curve Comparisons Through the F2 Parameter, a Bayesian Extension of the f2 Statistic

Steven J. Novick; Yan Shen; Harry Yang; John J. Peterson; Dave LeBlond; Stan Altan

Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is for justifying a biowaiver for post-approval changes, which requires establishing equivalence between the new and old products. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to, or in many cases superior to, the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
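
For reference, the classical f2 statistic that the F2 parameter extends is a simple function of the mean squared difference between two mean dissolution profiles, with f2 >= 50 the conventional similarity criterion. A minimal sketch (the profiles and time points are hypothetical; the paper's F2 instead targets model-based population mean profiles within a Bayesian posterior):

    import numpy as np

    def f2(ref, test):
        """Classical f2 similarity statistic for mean dissolution
        profiles (percent dissolved at common time points)."""
        ref, test = np.asarray(ref, float), np.asarray(test, float)
        msd = np.mean((ref - test) ** 2)
        return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

    # Hypothetical mean percent-dissolved profiles at 15, 30, 45, 60 min
    reference = [35, 58, 79, 91]
    test_lot  = [32, 54, 76, 90]
    print(f"f2 = {f2(reference, test_lot):.1f}")   # >= 50 suggests similarity

A crude Bayesian analogue of the paper's approach would evaluate this same function over posterior draws of the two mean profiles rather than the observed means.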


Journal of Biopharmaceutical Statistics | 2015

Testing Assay Linearity over a Pre-Specified Range

Harry Yang; Steven J. Novick; David LeBlond

Validation of linearity is a regulatory requirement. Although many methods have been proposed, they suffer from several deficiencies, including the difficulty of setting fit-for-purpose acceptance limits, dependency on the concentration levels used in the linearity experiment, and challenges in implementation for statistically lay users. In this article, a statistical procedure for testing linearity is proposed. The method uses two one-sided tests (TOST) of equivalence to evaluate the bias that can result from approximating a higher-order polynomial response with a linear function. By using orthogonal polynomials and generalized pivotal quantity analysis, the method provides a closed-form solution, making linearity testing easy to implement.
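
The idea can be illustrated with an ordinary quadratic fit (the paper itself works with orthogonal polynomials and generalized pivotal quantities to obtain a closed form; the design, data, and equivalence limit below are hypothetical): declare linearity when the 90% confidence interval for the quadratic coefficient lies within pre-specified bounds.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    conc = np.repeat([10, 20, 40, 80, 160], 3).astype(float)  # hypothetical levels
    resp = 2.0 * conc + rng.normal(0, 3, conc.size)           # truly linear data

    # Least-squares fit of response = b0 + b1*x + b2*x^2
    X = np.column_stack([np.ones_like(conc), conc, conc ** 2])
    beta, rss, *_ = np.linalg.lstsq(X, resp, rcond=None)
    dof = conc.size - 3
    cov = (rss[0] / dof) * np.linalg.inv(X.T @ X)
    se_b2 = np.sqrt(cov[2, 2])

    # TOST on the quadratic coefficient: 90% CI within +/- delta passes
    delta = 0.002                      # hypothetical fit-for-purpose limit
    tcrit = stats.t.ppf(0.95, dof)
    lo, hi = beta[2] - tcrit * se_b2, beta[2] + tcrit * se_b2
    print("linear (TOST passes)" if -delta < lo and hi < delta else "not shown linear")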


Journal of Biopharmaceutical Statistics | 2009

Tolerance Limits for a Ratio of Normal Random Variables

Lanju Zhang; Thomas Mathew; Harry Yang; K. Krishnamoorthy; Iksung Cho

The problem of deriving an upper tolerance limit for a ratio of two normally distributed random variables is addressed, when the random variables follow a bivariate normal distribution, or when they are independent normal. The derivation uses the fact that an upper tolerance limit for a random variable can be derived from a lower confidence limit for the cumulative distribution function (cdf) of the random variable. The concept of a generalized confidence interval is used to derive the required lower confidence limit for the cdf. In the bivariate normal case, a suitable representation of the cdf of the ratio of the marginal normal random variables is also used, coupled with the generalized confidence interval idea. In addition, a simplified derivation is presented in the situation when one of the random variables has a small coefficient of variation. The problem is motivated by an application from a reverse transcriptase assay. Such an example is used to illustrate our results. Numerical results are also reported regarding the performance of the proposed tolerance limit.
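
A rough Monte Carlo sketch of the generalized-pivotal-quantity construction for the independent-normal case, using the small-coefficient-of-variation approximation to the cdf of the ratio (all summary statistics and settings below are hypothetical):

    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(2)

    # Hypothetical summaries for numerator Y and denominator X
    # (X assumed positive with a small coefficient of variation)
    ny, my, sy = 20, 10.0, 1.5
    nx, mx, sx = 20, 50.0, 2.0
    p, conf, B = 0.95, 0.95, 5000   # content, confidence, GPQ draws

    def ratio_quantile(mu_y, s_y, mu_x, s_x, p):
        """p-quantile of Y/X under the approximation
        P(Y/X <= t) ~ Phi((t*mu_x - mu_y) / sqrt(s_y^2 + t^2*s_x^2))."""
        zp = stats.norm.ppf(p)
        f = lambda t: (t * mu_x - mu_y) - zp * np.sqrt(s_y**2 + t**2 * s_x**2)
        return optimize.brentq(f, -10, 10)

    # Generalized pivotal quantities for each mean and standard deviation
    gq = np.empty(B)
    for b in range(B):
        gs_y = sy * np.sqrt((ny - 1) / rng.chisquare(ny - 1))
        gs_x = sx * np.sqrt((nx - 1) / rng.chisquare(nx - 1))
        gm_y = my - rng.standard_normal() * gs_y / np.sqrt(ny)
        gm_x = mx - rng.standard_normal() * gs_x / np.sqrt(nx)
        gq[b] = ratio_quantile(gm_y, gs_y, gm_x, gs_x, p)

    # Upper (p, conf) tolerance limit = conf-quantile of the GPQ draws
    print(f"upper tolerance limit ~ {np.quantile(gq, conf):.3f}")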


Journal of Biopharmaceutical Statistics | 2015

Non-Normal Random Effects Models for Immunogenicity Assay Cut Point Determination

Jianchun Zhang; Binbing Yu; Lanju Zhang; Lorin Roskos; Laura Richman; Harry Yang

Administration of biological therapeutics can generate undesirable immune responses that may induce anti-drug antibodies (ADAs). Immunogenicity can negatively affect patients, with consequences ranging from mild reactive effects to hypersensitivity reactions or even serious autoimmune diseases. Assessment of immunogenicity is critical, as ADAs can adversely impact the efficacy and safety of drug products. Well-developed and validated immunogenicity assays are required by the regulatory agencies as tools for immunogenicity assessment. Key to the development and validation of an immunogenicity assay is the determination of a cut point, which serves as the threshold for classifying patients as ADA positive (reactive) or negative. In practice, the cut point is determined as a quantile of either a parametric distribution or the nonparametric empirical distribution. The parametric method, often based on a normality assumption, may lead to biased cut point estimates when the normality assumption is violated. The nonparametric method, which yields unbiased estimates of the cut point, may have low efficiency when the sample size is small. As the distributions of immune responses are often skewed and sometimes heavy-tailed, we propose two non-normal random effects models for cut point determination. The random effects, following a skew-t or log-gamma distribution, can accommodate skewed and heavy-tailed responses as well as the correlation among repeated measurements. A simulation study is conducted to compare the proposed method with the current normal and nonparametric alternatives. The proposed models are also applied to a real dataset generated from assay validation studies.
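
The contrast between the parametric and nonparametric cut points is easy to see on skewed data. A minimal sketch using a log-normal sample as a stand-in for the skewed responses the paper models with skew-t and log-gamma random effects (sample size and parameters are hypothetical):

    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical drug-naive donor responses: right-skewed, as is typical
    signal = rng.lognormal(mean=0.0, sigma=0.6, size=60)

    # Parametric cut point assuming normality: mean + 1.645*sd (95th percentile)
    cp_norm = signal.mean() + 1.645 * signal.std(ddof=1)
    # Nonparametric cut point: empirical 95th percentile
    cp_np = np.quantile(signal, 0.95)
    print(f"normal-based cut point: {cp_norm:.2f}, empirical: {cp_np:.2f}")

The proposed random-effects models go further by pooling donors across assay runs while accommodating the skewness directly.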


Journal of Biopharmaceutical Statistics | 2014

Sample Size Consideration for Immunoassay Screening Cut-Point Determination

Jianchun Zhang; Lanju Zhang; Harry Yang

Past decades have seen a rapid growth of biopharmaceutical products on the market. The administration of such large molecules can generate antidrug antibodies that can induce unwanted immune reactions in the recipients. Assessment of immunogenicity is required by regulatory agencies in clinical and nonclinical development, and this demands a well-validated assay. One of the important performance characteristics during assay validation is the cut point, which serves as a threshold between positive and negative samples. To precisely determine the cut point, a sufficiently large data set is often needed. However, there is no guideline other than some rule-of-thumb recommendations for sample size requirement in immunoassays. In this article, we propose a systematic approach to sample size determination for immunoassays and provide tables that facilitate its applications by scientists.
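
One way to frame the question is by simulation: how does the sampling variability of an empirical 95th-percentile cut point shrink as the number of donors grows? A minimal sketch (the standard-normal responses and the precision framing are assumptions for illustration, not the paper's exact criterion):

    import numpy as np

    rng = np.random.default_rng(4)

    def cutpoint_se(n, sims=2000):
        """Monte Carlo standard error of the empirical 95th-percentile
        cut point estimated from n donors (hypothetical setup)."""
        est = np.quantile(rng.standard_normal((sims, n)), 0.95, axis=1)
        return est.std(ddof=1)

    for n in (30, 60, 120, 240):
        print(f"n = {n:4d}: SE of cut point ~ {cutpoint_se(n):.3f}")

A sample size could then be chosen as the smallest n whose standard error meets a pre-specified precision target.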


Journal of Biopharmaceutical Statistics | 2009

Test Homogeneity of Risk Difference Across Subgroups in Clinical Trials

Lanju Zhang; Harry Yang; Iksung Cho

A weighted least squares statistic is commonly used to test homogeneity of the risk difference for a series of 2 × 2 tables. Since the method is based on asymptotic theory, its type I error rate is inflated when the data are sparse. Two new methods for testing the homogeneity of risk difference across different groups in clinical trials are proposed in this paper. These methods are constructed based on the Wilson score test and the traditional weighted least squares statistic. The performance of the new methods is evaluated and compared to the currently available approaches. Results show that one of our new methods has a type I error rate closest to the nominal level among all the methods and is much more powerful than those proposed by Lipsitz et al.
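
The baseline weighted least squares homogeneity statistic is straightforward: weight each subgroup's risk difference by its inverse estimated variance and compare the weighted dispersion to a chi-square with k-1 degrees of freedom. A minimal sketch with hypothetical counts (this is the classical test the paper improves upon, not the proposed Wilson-score-based method):

    import numpy as np
    from scipy import stats

    # Hypothetical events/totals for treatment and control in 3 subgroups
    x1, n1 = np.array([12, 8, 20]), np.array([50, 40, 80])   # treatment
    x2, n2 = np.array([6, 7, 10]),  np.array([50, 40, 80])   # control

    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2                                    # per-subgroup risk difference
    w = 1.0 / (p1*(1-p1)/n1 + p2*(1-p2)/n2)        # inverse-variance weights
    d_bar = np.sum(w * d) / np.sum(w)              # pooled risk difference

    # WLS homogeneity statistic ~ chi-square(k-1) under homogeneity
    T = np.sum(w * (d - d_bar) ** 2)
    print(f"T = {T:.2f}, p = {stats.chi2.sf(T, df=len(d) - 1):.3f}")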


PDA Journal of Pharmaceutical Science and Technology | 2012

Implementation of Parallelism Testing for Four-Parameter Logistic Model in Bioassays

Harry Yang; Hyun Jun Kim; Lanju Zhang; Robert Strouse; Mark Schenerman; Xu-Rong Jiang

Parallelism is a prerequisite for the determination of relative potency in bioactivity assays. It involves testing the similarity between a pair of dose-response curves of a reference standard and a test sample. The evaluation of parallelism is a requirement listed by both the United States Pharmacopeia (USP) and the European Pharmacopeia (EP). The revised USP Chapters 〈1032〉 and 〈1034〉 suggest testing parallelism using an equivalence method. However, implementation of this method can be challenging for laboratories that lack experience in statistical analysis and software development. In this paper we present a customized assay analysis template developed on a fully good manufacturing practice (GMP)-compliant software package. The template automates the USP-recommended equivalence parallelism testing method for the 4-parameter logistic (4PL) model in bioassays, making implementation of the USP guidance both practical and feasible. Use of the analysis template is illustrated through a practical example.
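
A rough illustration of the underlying analysis (the paper's template runs inside validated GMP software; the data, bounds, and fitting choices below are hypothetical): fit 4PL curves to reference and test, then check that the non-EC50 parameters agree within equivalence bounds.

    import numpy as np
    from scipy.optimize import curve_fit

    def fourpl(x, a, b, c, d):
        """4PL: a = lower asymptote, b = slope, c = EC50, d = upper asymptote."""
        return a + (d - a) / (1.0 + (x / c) ** b)

    rng = np.random.default_rng(5)
    dose = np.tile(2.0 ** np.arange(8), 3)            # hypothetical dose series
    truth = dict(a=0.1, b=1.2, c=16.0, d=2.0)
    ref  = fourpl(dose, **truth) + rng.normal(0, 0.05, dose.size)
    test = fourpl(dose * 0.9, **truth) + rng.normal(0, 0.05, dose.size)

    p_ref,  _ = curve_fit(fourpl, dose, ref,  p0=[0, 1, 10, 2])
    p_test, _ = curve_fit(fourpl, dose, test, p0=[0, 1, 10, 2])

    # Parallelism requires equal a, b, d (only the EC50 may differ)
    for name, i, bound in (("lower asym", 0, 0.2), ("slope", 1, 0.3),
                           ("upper asym", 3, 0.2)):
        diff = p_test[i] - p_ref[i]
        print(f"{name}: diff = {diff:+.3f}, within +/-{bound}? {abs(diff) < bound}")
    print(f"relative potency ~ {p_ref[2] / p_test[2]:.2f}")

A full equivalence test would place TOST-style confidence intervals around each parameter difference rather than comparing point estimates.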


Journal of Proteome Research | 2017

Statistical Models for the Analysis of Isobaric Tags Multiplexed Quantitative Proteomics

Gina D’Angelo; Raghothama Chaerkady; Wen Yu; Deniz Baycin Hizal; Sonja Hess; Wei Zhao; Kristen Lekstrom; Xiang Guo; Wendy I. White; Lorin Roskos; Michael A. Bowen; Harry Yang

Mass spectrometry is being used to identify protein biomarkers that can facilitate the development of drug treatments. Mass spectrometry-based labeling proteomic experiments produce complex, hierarchical data, often from studies with small sample sizes. The generalized linear model (GLM) is the most popular approach in proteomics for comparing protein abundances between groups. However, the GLM does not address all the complexities of proteomics data, such as repeated measures and variance heterogeneity. Linear models for microarray data (LIMMA) and mixed models are two approaches that can address some of these complexities to provide better statistical estimates. We compared these three statistical models (GLM, LIMMA, and mixed models) under two different normalization approaches (quantile normalization and median sweeping) to determine when each approach works best for tagged proteins. We evaluated these methods using a spiked-in data set of known protein abundances, a systemic lupus erythematosus (SLE) data set, and simulated data from multiplexed labeling experiments that use tandem mass tags (TMT). Data are available via ProteomeXchange with identifier PXD005486. We found median sweeping to be the preferred normalization approach; with this normalization the findings overlapped across all methods, with the GLM being a special case of the mixed model. The conclusion is that the mixed model had the best type I error with median sweeping, whereas LIMMA had better overall statistical properties regardless of normalization approach.
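
Median sweeping itself is a two-step centering of the log2 reporter-intensity matrix: subtract each protein's (row) median, then each channel's (column) median. A minimal sketch (matrix dimensions and data are hypothetical):

    import numpy as np

    rng = np.random.default_rng(6)
    # Hypothetical log2 reporter intensities: 100 proteins x 10 TMT channels,
    # with a per-channel loading offset added to mimic technical bias
    log2_mat = rng.normal(10, 1, (100, 10)) + rng.normal(0, 0.5, (1, 10))

    # Median sweeping: center each protein (row), then each channel (column)
    swept = log2_mat - np.median(log2_mat, axis=1, keepdims=True)
    swept = swept - np.median(swept, axis=0, keepdims=True)
    print("channel medians after sweeping:", np.round(np.median(swept, axis=0), 3))

The swept values would then enter the GLM, LIMMA, or mixed-model comparisons of protein abundance between groups.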
