Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yi Tsong is active.

Publication


Featured research published by Yi Tsong.


Journal of Biopharmaceutical Statistics | 2007

Choice of δ Noninferiority Margin and Dependency of the Noninferiority Trials

Yi Tsong; Joanne Zhang; Mark Levenson

For a two-arm active-control clinical trial designed to test for noninferiority of the test treatment to the active-control standard treatment, data from historical studies are often used. For example, with a cross-trial comparison approach (also called the synthesis approach or λ-margin approach), the trial is conducted to test the hypothesis that the mean difference between the current test product and the active control is no larger than a certain portion of the mean difference between the active control and placebo in the historical data (or, for a ratio, no smaller than a certain portion of the historical ratio), where a positive response indicates treatment effectiveness. For a generalized historical control approach (also known as the confidence interval approach or δ-margin approach), the historical data are often used to determine a fixed noninferiority margin δ for all trials involving the active-control treatment. The regulatory agency usually requires that the clinical trials of two different test treatments be independent and, in most cases, also requires two independent positive trials of the same test treatment in order to provide confirmatory evidence of the efficacy of the test product. Because of the shared information (historical data) across active-controlled trials, the independence assumption of the trials is not satisfied in general. The correlation between two noninferiority tests has been examined and shown to be an increasing function of (1 − λ) when the response variable is normally distributed. In this article, we examine the relationship between the correlation of the two test statistics and the choice of the noninferiority margin δ, as well as the sample sizes and variances, under the normality assumption. We show that when δ is determined by the lower limit of the confidence interval of the adjusted effect size of the active-control treatment (μC − μP) using data from historical studies, the dependency of the two noninferiority tests can be very high. In order to keep the correlation below 15%, the overall sample size of the historical studies needs to be at least five times that of the current active-control trial.
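A minimal sketch of how a fixed δ-margin might be derived from historical placebo-controlled studies, assuming the margin is a retained fraction of the lower 95% confidence limit of the inverse-variance pooled active-control effect (μC − μP); the pooling scheme, the 50% retention fraction, and the function and variable names are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def fixed_delta_margin(effects, ses, retention=0.5, alpha=0.05):
    """Illustrative fixed-margin (delta-margin) construction.

    effects : historical estimates of the active-control effect (muC - muP)
    ses     : their standard errors
    The margin is a retained fraction of the lower confidence limit of the
    inverse-variance pooled effect (an assumed construction, not necessarily
    the authors' exact method).
    """
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    pooled = np.sum(w * effects) / np.sum(w)      # pooled effect estimate
    pooled_se = np.sqrt(1.0 / np.sum(w))          # its standard error
    z = stats.norm.ppf(1 - alpha / 2)
    lower = pooled - z * pooled_se                # conservative effect size
    return retention * max(lower, 0.0)            # retained fraction as delta

# Example with hypothetical historical trial summaries
print(fixed_delta_margin(effects=[2.1, 1.8, 2.4], ses=[0.4, 0.5, 0.6]))
```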


Journal of Biopharmaceutical Statistics | 2017

Development of statistical methods for analytical similarity assessment

Yi Tsong; Xiaoyu (Cassie) Dong; Meiyu Shen

To evaluate the analytical similarity between a proposed biosimilar product and the US-licensed reference product, a working group at the Food and Drug Administration (FDA) developed a tiered approach. This tiered approach starts with a criticality determination of quality attributes (QAs) based on a risk ranking of their potential impact on product quality and clinical outcomes. These QAs characterize biological products in terms of structural, physicochemical, and functional properties. Correspondingly, we propose three tiers of statistical approaches based on the levels of stringency required. The three tiers of statistical approaches are applied to QAs based on the criticality ranking and other factors. In this article, we discuss the statistical methods applicable to the three tiers of QAs. We further provide more details for the proposed equivalence test as the Tier 1 approach, and we discuss some of the statistical challenges of the proposed equivalence test in the context of analytical similarity assessment.
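The Tier 1 test is often described in the literature as an equivalence test of the mean difference against a margin of ±1.5 times the reference lot standard deviation. A minimal sketch under that assumption follows; the multiplier 1.5, the pooled-variance two one-sided tests layout, and the lot values are illustrative choices, not a definitive implementation.

```python
import numpy as np
from scipy import stats

def tier1_equivalence(test_lots, ref_lots, k=1.5, alpha=0.05):
    """Illustrative Tier 1 equivalence test for a quantitative quality attribute.

    The margin is +/- k * s_R, with s_R the reference lot standard deviation
    (k = 1.5 is an assumed constant here). Equivalence is concluded if the
    90% two-sided CI for the mean difference lies within the margin, which is
    equivalent to two one-sided tests at level alpha.
    """
    test_lots, ref_lots = np.asarray(test_lots, float), np.asarray(ref_lots, float)
    n_t, n_r = len(test_lots), len(ref_lots)
    diff = test_lots.mean() - ref_lots.mean()
    margin = k * ref_lots.std(ddof=1)
    sp2 = ((n_t - 1) * test_lots.var(ddof=1) +
           (n_r - 1) * ref_lots.var(ddof=1)) / (n_t + n_r - 2)
    se = np.sqrt(sp2 * (1 / n_t + 1 / n_r))
    t_crit = stats.t.ppf(1 - alpha, df=n_t + n_r - 2)
    lo, hi = diff - t_crit * se, diff + t_crit * se   # 90% CI when alpha = 0.05
    return lo > -margin and hi < margin

# Hypothetical lot measurements
print(tier1_equivalence([99.8, 100.4, 101.1, 100.2, 99.5],
                        [100.0, 100.6, 99.7, 100.9, 99.9, 100.3]))
```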


Journal of Biopharmaceutical Statistics | 2007

Simultaneous Test for Superiority and Noninferiority Hypotheses in Active-Controlled Clinical Trials

Yi Tsong; Joanne Zhang

Two-stage switching between testing for superiority (SUP) and noninferiority (NI) has been an important statistical issue in the design and analysis of active-controlled clinical trials. Tsong and Zhang (2005) showed that the Type I error rates do not change when switching between SUP and NI with the traditional generalized historical control (GHC) approach; however, they may change when switching with the cross-trial comparison (X-trial) approach. Tsong and Zhang (2005) further proposed a simultaneous test for both hypotheses to avoid the problem. The procedure is based on Fieller's confidence interval as proposed by Hauschke et al. (1999). Since, with the X-trial approach, the simultaneous test evaluates superiority using all four treatment arms (the current test and active-control arms, and the active-control and placebo arms in the historical trials), the Type I error rate and power are expected to differ somewhat from those of the conventional superiority test (which uses the current test and active-control arms only). Through a simulation study, we demonstrate that the Type I error rate and power of the simultaneous test and the conventional superiority test are comparable. We also examine the impact of the assumption of equal variances between the current trial and the historical trial.
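A minimal sketch of Fieller's confidence interval for the ratio of two independent, normally distributed estimates, which is the building block referenced above; the function name, inputs, and the example numbers are illustrative assumptions, and the interval is only bounded when the denominator estimate is significantly different from zero.

```python
import numpy as np
from scipy import stats

def fieller_ci(a, var_a, b, var_b, df, alpha=0.05):
    """Fieller interval for the ratio a / b of two independent estimates.

    a, b     : point estimates (e.g., effect differences from two trials)
    var_a/_b : their estimated variances
    df       : degrees of freedom for the t critical value
    Returns (lower, upper), or None when the denominator is not significantly
    nonzero and the Fieller set is unbounded.
    """
    t = stats.t.ppf(1 - alpha / 2, df)
    denom = b**2 - t**2 * var_b
    if denom <= 0:
        return None                      # unbounded interval
    disc = t * np.sqrt(var_a * b**2 + var_b * a**2 - t**2 * var_a * var_b)
    return ((a * b - disc) / denom, (a * b + disc) / denom)

# Hypothetical estimates: current-trial difference over historical difference
print(fieller_ci(a=1.2, var_a=0.09, b=2.0, var_b=0.16, df=200))
```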


Journal of Biopharmaceutical Statistics | 2007

Parametric Two-Stage Sequential Quality Assurance Test of Dose Content Uniformity

Yi Tsong; Meiyu Shen

The United States Pharmacopeia (USP) content uniformity sampling acceptance plan, a two-stage sampling plan with criteria on the sample mean and the number of out-of-range tablets, is the compendial standard. It is, however, often mistakenly used for lot quality assurance. In comparison with the Japanese Pharmacopoeia (JP) procedure, the USP procedure is less able to discriminate between lots with an on-target mean and small variance and lots with an off-target mean and large variance. The new European Pharmacopoeia (EP) and USP harmonized test adopted a tolerance interval approach, but the "no-difference zone" criteria modification for off-target products makes the approach biased in favor of off-target products. We propose a parametric tolerance interval procedure to test a two-sided specification, which is equivalent to testing two one-sided hypotheses. Testing against the lower specification assures that the drug product is not under-dosed, for the sake of efficacy; testing against the upper specification assures that the drug product is not over-dosed, for the sake of safety. The operating characteristic curves of the proposed procedure are compared with those of the USP test to illustrate the difference in acceptance probability as a function of the mean and variance of the lot.
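A minimal sketch of a two-sided specification tested as two one-sided parametric normal tolerance limits, in the spirit of the procedure described above; the coverage level, confidence level, specification limits, and sample size below are placeholder assumptions rather than the proposed test's exact constants.

```python
import numpy as np
from scipy import stats

def one_sided_k(n, coverage=0.875, conf=0.95):
    """Exact one-sided normal tolerance factor via the noncentral t distribution."""
    z_p = stats.norm.ppf(coverage)
    return stats.nct.ppf(conf, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)

def content_uniformity_pass(x, lower=85.0, upper=115.0, coverage=0.875, conf=0.95):
    """Accept the lot only if both one-sided tolerance limits fall inside the
    specification: xbar - k*s >= lower  and  xbar + k*s <= upper."""
    x = np.asarray(x, float)
    k = one_sided_k(len(x), coverage, conf)
    xbar, s = x.mean(), x.std(ddof=1)
    return (xbar - k * s >= lower) and (xbar + k * s <= upper)

# Hypothetical assay values (% of label claim) for 30 sampled tablets
rng = np.random.default_rng(1)
print(content_uniformity_pass(rng.normal(100, 3, size=30)))
```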


Statistics in Medicine | 2013

Assessing overall evidence from noninferiority trials with shared historical data.

Guoxing Soon; Zhiwei Zhang; Yi Tsong; Lei Nie

For regulatory approval of a new drug, the United States Code of Federal Regulations (CFR) requires substantial evidence from adequate and well-controlled investigations. This requirement is interpreted in Food and Drug Administration guidance as the need for at least two adequate and well-controlled studies, each convincing on its own, to establish effectiveness. The guidance also emphasizes the need for independent substantiation of experimental results from multiple studies. However, several authors have noted the loss of independence between two noninferiority trials that use the same set of historical data to make inferences, raising questions about whether the CFR requirement is met in noninferiority trials through current practice. In this article, we first propose a statistical interpretation of the CFR requirement in terms of trial-level and overall type I error rates, which captures the essence of the requirement and can be operationalized for noninferiority trials. We next examine four typical regulatory settings in which the proposed requirement may or may not be fulfilled by existing methods of analysis (fixed margin and synthesis). In situations where the criteria are not met, we propose adjustments to the existing methods. As illustrated with several examples, our results and findings can be helpful in designing and analyzing noninferiority trials in a way that is both compliant with the regulatory interpretation of the CFR requirement and reasonably powerful.
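A minimal Monte Carlo sketch of the dependence discussed above: two synthesis-method noninferiority tests that reuse the same historical estimate of the control-versus-placebo effect reject together far more often than two independent tests of the same one-sided level would. All effect sizes, standard errors, and the retention fraction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 200_000
lam = 0.5                      # assumed fraction of the control effect to retain
theta_h, se_h = 2.0, 0.30      # true historical control-vs-placebo effect and its SE
se_trial = 0.40                # SE of the treatment-control difference in each new trial
z_crit = 1.96                  # one-sided 2.5% test

# Shared historical estimate: one draw per simulated "pair of trials"
theta_hat = rng.normal(theta_h, se_h, n_sim)

# Each new trial is simulated at the null boundary: the test drug loses exactly
# (1 - lam) of the true control effect, so each test has level ~2.5% marginally.
true_diff = -(1 - lam) * theta_h
d1 = rng.normal(true_diff, se_trial, n_sim)
d2 = rng.normal(true_diff, se_trial, n_sim)

se_syn = np.sqrt(se_trial**2 + (1 - lam)**2 * se_h**2)
z1 = (d1 + (1 - lam) * theta_hat) / se_syn
z2 = (d2 + (1 - lam) * theta_hat) / se_syn

both = np.mean((z1 > z_crit) & (z2 > z_crit))
print(f"P(both reject) = {both:.5f}  vs  independent trials: {0.025**2:.5f}")
```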


Journal of Biopharmaceutical Statistics | 2007

An Alternative Approach to Assess Exchangeability of a Test Treatment and the Standard Treatment with Normally Distributed Response

Yi Tsong; Meiyu Shen

In order to assess the equivalence of two treatments, clinical trials are designed to test against the null hypothesis that the difference (or ratio) of two means (or proportions) is either smaller than a pre-specified lower equivalence limit or larger than a pre-specified upper equivalence limit. In generic drug evaluation, for example, this approach is known as average bioequivalence. However, an average-equivalence type of test is often criticized for lacking the ability to assess the exchangeability of the two treatments. In this article, we restate the statistical hypotheses in the form of stochastic inequalities. The stochastic statement can then be generalized to define the probability of exchangeability (i.e., coverage percentage) of the two treatments. The approach is illustrated with a numerical example.
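A minimal sketch of a coverage-percentage calculation under normality, interpreting exchangeability as the probability that the difference between a random response on the test treatment and one on the standard treatment falls inside pre-specified limits; the distributional parameters and limits below are illustrative assumptions, not the article's example.

```python
import numpy as np
from scipy import stats

def coverage_probability(mu_t, sd_t, mu_r, sd_r, lower, upper):
    """P(lower <= Y_T - Y_R <= upper) for independent normal responses."""
    mu_d = mu_t - mu_r
    sd_d = np.sqrt(sd_t**2 + sd_r**2)
    return stats.norm.cdf(upper, mu_d, sd_d) - stats.norm.cdf(lower, mu_d, sd_d)

# Hypothetical example: slightly different means, limits at +/- 20 units
print(coverage_probability(mu_t=102, sd_t=8, mu_r=100, sd_r=8, lower=-20, upper=20))
```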


Journal of Biopharmaceutical Statistics | 2017

Wald tests for variance-adjusted equivalence assessment with normal endpoints

Yue-Ming Chen; Yu-Ting Weng; Xiaoyu Dong; Yi Tsong

Equivalence may be tested by comparing the mean difference against a margin adjusted for variance. The justification for a variance-adjusted noninferiority or equivalence margin is that a larger margin should be allowed when measurement variability is large. However, under the null hypothesis, the test statistic does not follow a t-distribution or any other well-known distribution, even when the measurements are normally distributed. In this study, we investigate asymptotic tests of the equivalence hypothesis. We apply the Wald test statistic and construct three Wald tests that differ in their estimates of variance: the maximum likelihood estimate (MLE), the uniformly minimum variance unbiased estimate (UMVUE), and the constrained maximum likelihood estimate (CMLE). We evaluate the performance of these three tests in terms of type I error rate control and power using simulations under a variety of settings. Our empirical results show that the asymptotic normalized tests are conservative in most settings, while the Wald tests based on the ML and UMVU methods can produce inflated significance levels when group sizes are unequal. However, the Wald test based on the CML method provides an improvement in power over the other two Wald tests for small and medium sample size studies.
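A minimal sketch of one Wald-type statistic for a variance-adjusted equivalence hypothesis, H0: |μT − μR| ≥ f·σ versus H1: |μT − μR| < f·σ, using MLE plug-ins and a delta-method standard error; the margin multiplier f, the common-variance assumption, and the two one-sided tests layout are illustrative choices, not the authors' exact construction.

```python
import numpy as np
from scipy import stats

def wald_equivalence_mle(x_t, x_r, f=1.0, alpha=0.05):
    """Wald-type test of H1: |mu_T - mu_R| < f * sigma using MLE plug-ins.

    Assumes a common variance. The standard error of (diff -/+ f*sigma_hat)
    combines Var(diff_hat) = sigma^2 (1/n1 + 1/n2) with the asymptotic
    Var(sigma_hat) = sigma^2 / (2N) via the delta method (diff_hat and
    sigma_hat are independent under normality).
    """
    x_t, x_r = np.asarray(x_t, float), np.asarray(x_r, float)
    n1, n2 = len(x_t), len(x_r)
    N = n1 + n2
    diff = x_t.mean() - x_r.mean()
    sigma_mle = np.sqrt((np.sum((x_t - x_t.mean())**2) +
                         np.sum((x_r - x_r.mean())**2)) / N)
    se = sigma_mle * np.sqrt(1 / n1 + 1 / n2 + f**2 / (2 * N))
    z = stats.norm.ppf(1 - alpha)
    w_upper = (diff - f * sigma_mle) / se     # test against the upper margin
    w_lower = (diff + f * sigma_mle) / se     # test against the lower margin
    return (w_upper < -z) and (w_lower > z)   # conclude equivalence if both reject

# Hypothetical data
rng = np.random.default_rng(2)
print(wald_equivalence_mle(rng.normal(0.0, 1.0, 40), rng.normal(0.1, 1.0, 40)))
```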


Journal of Biopharmaceutical Statistics | 2017

Statistical considerations regarding correlated lots in analytical biosimilar equivalence test

Meiyu Shen; Tianhua Wang; Yi Tsong

For the evaluation of analytical similarity data, an equivalence testing approach was proposed for the most critical quantitative quality attributes, which are assigned to Tier 1 in the proposed three-tier approach. The Food and Drug Administration (FDA) has recommended the proposed equivalence testing approach to sponsors through meeting comments for Pre-Investigational New Drug Applications (PINDs) and Investigational New Drug Applications (INDs) since 2014. The FDA has received feedback on the statistical issue of potentially correlated reference lot values being subjected to equivalence testing, since independent and identically distributed observations (lot values) from the proposed biosimilar product and the reference product are assumed. In this article, we describe one method for correcting the estimation bias of the reference variability so as to increase the equivalence margin, along with modified versions for increasing the equivalence margin and correcting the standard errors in the confidence intervals, assuming that the lot values are correlated under a few known correlation matrices. Our comparisons between these correcting methods and no correction for bias in the reference variability, under several assumed correlation structures, indicate that all correcting methods would increase the type I error rate dramatically but only improve the power slightly for most of the simulated scenarios. For some simulated cases, the type I error rate can be extremely large (e.g., 59%) if the guessed correlation is larger than the assumed correlation. Since the source of a reference drug product lot is unknown in nature, correlation between lots is a design issue. Hence, obtaining independent reference lot values by purchasing the reference lots over a wide time window is often a design remedy for correlated reference lot values.
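A minimal simulation sketch of the underlying bias problem: when reference lot values share a common positive correlation, the ordinary sample standard deviation underestimates the true lot variability (E[s²] = σ²(1 − ρ) under equicorrelation), which in turn shrinks a margin of the form k·s_R. The compound-symmetry structure and all constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_lots, sigma, rho, k, n_sim = 10, 2.0, 0.4, 1.5, 20_000

# Compound-symmetry covariance: every pair of reference lots shares correlation rho
cov = sigma**2 * ((1 - rho) * np.eye(n_lots) + rho * np.ones((n_lots, n_lots)))
samples = rng.multivariate_normal(np.zeros(n_lots), cov, size=n_sim)

s = samples.std(axis=1, ddof=1)                 # naive per-sample lot SD
print(f"mean naive margin  k*s_R : {k * s.mean():.3f}")
print(f"intended margin  k*sigma : {k * sigma:.3f}")
print(f"theoretical E[s^2]/sigma^2 = 1 - rho = {1 - rho:.2f}")
```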


Journal of Biopharmaceutical Statistics | 2007

Noninferiority testing beyond simple two-sample comparison.

Yi Tsong; Wen-Jen Chen

In order to fulfill the requirements of a new drug application, a sponsor often needs to conduct multiple clinical trials. Often these trials have designs more complicated than a randomized two-sample, single-factor study; for example, they may involve multiple centers, multiple factors, covariates, group sequential and/or adaptive schemes, etc. When an active standard treatment is used as the control treatment in a two-arm clinical trial, the efficacy of the test treatment is often established by performing a noninferiority test comparing the test treatment with the active standard treatment. Typically, noninferiority trials are designed with either a generalized historical control approach (i.e., the noninferiority margin approach or δ-margin approach) or a cross-trial comparison approach (i.e., the synthesis approach or λ-margin approach). Many of the statistical properties of the two approaches discussed in the literature have focused on testing in a simple two-sample comparison setting. We studied the limitations of the two approaches with respect to switching between superiority and noninferiority testing, feasibility of use with a group sequential design, constancy assumption requirements, test dependency in multiple trials, analysis of homogeneity of efficacy among centers in a multi-center trial, data transformation, and changes of the analysis method relative to the historical studies. Our evaluation shows that the cross-trial comparison approach is largely restricted to the simple two-sample comparison with a normal approximation test because of its poor properties under more complicated designs and analyses. On the other hand, the generalized historical control approach may offer more flexibility when the variability of the margin δ is indeed negligibly small. The two test statistics are sketched schematically below.
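For reference, a stylized schematic of the two test statistics under normality, in notation assumed here (x̄ denotes a sample mean, se a standard error, z the normal critical value); this is a summary sketch, not the authors' exact formulation.

```latex
% delta-margin (generalized historical control) test of H0: mu_T - mu_C <= -delta
Z_\delta = \frac{\bar{x}_T - \bar{x}_C + \delta}{se(\bar{x}_T - \bar{x}_C)} > z_{1-\alpha}

% lambda-margin (cross-trial / synthesis) test of
% H0: mu_T - mu_C <= -(1 - lambda)(mu_C - mu_P)
Z_\lambda = \frac{\bar{x}_T - \bar{x}_C + (1-\lambda)\,(\bar{x}_C - \bar{x}_P)_{\mathrm{hist}}}
                 {\sqrt{se^2(\bar{x}_T - \bar{x}_C) + (1-\lambda)^2\, se^2_{\mathrm{hist}}}} > z_{1-\alpha}
```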


Journal of Biopharmaceutical Statistics | 2017

Ratio of means vs. difference of means as measures of superiority, noninferiority, and average bioequivalence

Wanjie Sun; Stella Grosser; Yi Tsong

Ratio of means (ROM) and difference of means (DOM) are often used in a superiority, noninferiority (NI), or average bioequivalence (ABE) test to evaluate whether the test mean is superior, noninferior, or equivalent to the reference (placebo or active control) mean. The literature provides recommendations on choosing between ROM and DOM, mainly for superiority testing. In this article, we evaluate these two measures from other perspectives and caution about the potential impact of different scoring systems or transformations of the same outcome (which is not rarely seen in practice) on the power of a ROM or DOM test for superiority, NI, or ABE. (1) For superiority, with the same margin, power remains the same under a location, scale, or combined shift (but not other transformations) of the scoring system for both measures; however, for NI and ABE, different shifts can change the power of the test significantly. (2) The direction of scores (whether a larger or smaller value indicates a desirable effect) does not change the power of a DOM superiority, NI, or ABE test, but it changes the power tremendously for a ROM NI or ABE test. Caution should be taken when defining scoring systems. Data transformation is not encouraged in general and, if needed, should be statistically justified.
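A worked illustration, with assumed numbers, of why a location shift in the scoring system leaves a DOM noninferiority hypothesis untouched but changes a ROM one:

```latex
\text{Let } \mu_T = 80,\ \mu_R = 100,\ \text{ROM NI margin} = 0.8:\quad
\frac{\mu_T}{\mu_R} = 0.8 \ \text{(exactly at the margin)},\qquad \mu_T - \mu_R = -20.

\text{Shift every score by } c = 100:\quad
\frac{\mu_T + c}{\mu_R + c} = \frac{180}{200} = 0.9 \ \text{(well inside the margin)},\qquad
(\mu_T + c) - (\mu_R + c) = -20 \ \text{(unchanged)}.
```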

Collaboration


Dive into Yi Tsong's collaboration.

Top Co-Authors


Yue-Ming Chen

University of Texas at Austin
