Publication


Featured research published by Sue-Jane Wang.


Journal of Biopharmaceutical Statistics | 2009

Some Controversial Multiple Testing Problems in Regulatory Applications

H. M. James Hung; Sue-Jane Wang

Multiple testing problems in regulatory applications are often more challenging than the problems of handling a set of mathematical symbols representing multiple null hypotheses under testing. In the union-intersection setting, it is important to define a family of null hypotheses relevant to the clinical questions at issue. The distinction between primary and secondary endpoints needs to be considered properly in different clinical applications. Without proper consideration, the widely used sequential gatekeeping strategies often impose too many logical restrictions to make sense, particularly when dealing with the problems of testing multiple doses and multiple endpoints, of testing a composite endpoint and its component endpoints, and of testing superiority and noninferiority in the presence of multiple endpoints. Partitioning the null hypotheses involved in closed testing into clinically relevant orderings or sets can be a viable alternative for resolving these logical problems, though it requires more attention from clinical trialists in defining the clinical hypotheses or clinical question(s) at the design stage. In the intersection-union setting there is little room for alleviating the stringency of the requirement that each endpoint must meet the same intended alpha level, unless the parameter space under the null hypothesis can be substantially restricted. Such restriction often requires insurmountable justification and usually cannot be supported by the internal data. Thus, a possible remedial approach to alleviating the conservatism resulting from this requirement is a group-sequential design strategy that starts with a conservative sample size plan and then uses an alpha spending function to possibly reach the conclusion early.
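The fixed-sequence flavor of the gatekeeping strategies discussed above can be sketched in a few lines. This is a minimal illustration with hypothetical p-values and endpoint names, not the partitioning method the authors propose:

```python
def fixed_sequence_test(p_values, alpha=0.025):
    """Fixed-sequence (hierarchical gatekeeping) test: hypotheses are
    tested in their pre-specified order, each at the full alpha level,
    and testing stops at the first non-significant result.  This
    controls the familywise error rate in the strong sense."""
    rejected = []
    for name, p in p_values:
        if p <= alpha:
            rejected.append(name)
        else:
            break  # the gate closes: later hypotheses cannot be tested
    return rejected

# Hypothetical p-values for a primary and two secondary endpoints.
print(fixed_sequence_test(
    [("primary", 0.004), ("secondary_1", 0.030), ("secondary_2", 0.010)]
))  # ['primary'] -- secondary_2 cannot be claimed once secondary_1 fails
```

The logical restriction the abstract criticizes is visible here: secondary_2 has a small p-value but can never be tested once secondary_1 fails its gate.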


Pharmaceutical Statistics | 2010

Consideration of regional difference in design and analysis of multi-regional trials

H. M. James Hung; Sue-Jane Wang; Robert O'Neill

Clinical trial strategy, particularly in developing pharmaceutical products, has recently expanded to a global level in the sense that multiple geographical regions participate in the trial simultaneously under the same study protocol. The possible benefits of this strategy are obvious, at least from cost and efficiency considerations. The challenges are many, ranging from trial or data quality assurance to statistical methods for the design and analysis of such trials. In many regulatory submissions, the presence of regional differences in the estimated treatment effect, whether differing only in magnitude or also in direction, often presents great difficulty in interpreting the global trial results, particularly regarding their acceptability to the local regulatory authorities. This article presents a number of useful statistical analysis tools for exploring regional differences and a method worth considering in designing a multi-regional clinical trial.


Journal of Biopharmaceutical Statistics | 2007

Statistical Considerations for Testing Multiple Endpoints in Group Sequential or Adaptive Clinical Trials

H. M. James Hung; Sue-Jane Wang; Robert O'Neill

Many clinical trials are designed with a fixed sample size or total number of events to detect a postulated size of treatment effect on a primary efficacy endpoint. When the trial is completed and the primary efficacy endpoint achieves statistical significance, formal statistical testing of other clinically important secondary endpoints often follows in order for the statistically and clinically significant results of these endpoints to be included in the label of the test pharmaceutical product. In conventional fixed designs without any interim analysis or trial extension, these endpoints are often tested in a pre-specified hierarchical order, following the closed testing principle. This testing strategy ensures a strong control of the overall type I error. However, when trials are conducted using a group-sequential design with interim analyses or can be extended using an adaptive design with an increase of sample size or total number of events, this conventional hierarchical testing strategy may violate the closure principle and the overall type I error rate may not be controlled in the strong sense.
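The alpha-spending machinery referred to above can be illustrated with the Lan-DeMets O'Brien-Fleming-type spending function; computing the group-sequential boundaries themselves additionally requires recursive numerical integration, which is omitted here. A sketch assuming a one-sided 0.025 level:

```python
from statistics import NormalDist

def obrien_fleming_spending(t, alpha=0.025):
    """Lan-DeMets O'Brien-Fleming-type alpha-spending function: the
    cumulative one-sided type I error allowed to be spent by
    information fraction t, f(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t)))."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    return 2 * (1 - nd.cdf(z / t ** 0.5))

# Very little alpha is spent at early interim looks; the full 0.025
# is spent by the final analysis (t = 1).
for t in (0.25, 0.50, 0.75, 1.00):
    print(f"t = {t:.2f}: cumulative alpha spent = {obrien_fleming_spending(t):.6f}")
```

Conservative early spending of this kind is what makes a later extension or hierarchical secondary test delicate: the alpha already spent at interim looks constrains what remains for the rest of the testing sequence.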


Biometrical Journal | 2009

Challenges and regulatory experiences with non-inferiority trial design without placebo arm.

H. M. James Hung; Sue-Jane Wang; Robert O'Neill

For a non-inferiority trial without a placebo arm, the direct comparison between the test treatment and the selected positive control is in principle the only basis for statistical inference. Therefore, evaluating the test treatment relative to the non-existent placebo presents extreme challenges and requires some kind of bridging from the past to the present with no current placebo data. For such inference based partly on an indirect bridging manipulation, the fixed margin method and the synthesis method are the two most widely discussed methods in the recent literature. There are major differences between the two methods in their statistical inference paradigms. The fixed margin method employs the historical data that assess the performance of the active control versus a placebo to guide the selection of the non-inferiority margin. Such guidance is not part of the ultimate statistical inference in the non-inferiority trial. In contrast, the synthesis method connects the historical data to the non-inferiority trial data to make broader inferences relating the test treatment to the non-existent current placebo. On the other hand, the type I error rate associated with the direct comparison between the test treatment and the active control cannot shed any light on the appropriateness of the indirect inference comparing the test treatment against the non-existent placebo. This work explores an approach for assessing the impact of potential bias due to violation of a key statistical assumption, to guide determination of the non-inferiority margin.
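A minimal numerical sketch of the two methods, using hypothetical effect estimates and standard errors (the 50% effect-preservation fraction is likewise an assumption for illustration):

```python
from statistics import NormalDist

nd = NormalDist()

def fixed_margin_z(d_tc, se_tc, margin):
    """Fixed margin method: test H0: theta_T - theta_C <= -margin, where
    the margin was chosen in advance from historical control-vs-placebo
    data; the historical data do not enter the inference itself."""
    return (d_tc + margin) / se_tc

def synthesis_z(d_tc, se_tc, d_cp, se_cp, f=0.5):
    """Synthesis method: test that the test drug preserves at least the
    fraction f of the control effect, combining the current trial
    estimate with the historical estimate in one statistic."""
    num = d_tc + (1 - f) * d_cp
    den = (se_tc ** 2 + (1 - f) ** 2 * se_cp ** 2) ** 0.5
    return num / den

# Hypothetical estimates: test-minus-control from the NI trial and the
# historical control-minus-placebo effect from a meta-analysis.
d_tc, se_tc = -0.5, 0.8
d_cp, se_cp = 4.0, 1.0
m1 = d_cp - nd.inv_cdf(0.975) * se_cp        # conservative historical bound
z_fixed = fixed_margin_z(d_tc, se_tc, margin=0.5 * m1)
z_syn = synthesis_z(d_tc, se_tc, d_cp, se_cp, f=0.5)
print(round(z_fixed, 3), round(z_syn, 3))    # prints: 0.65 1.59
```

The two statistics differ because the fixed margin method discounts the historical effect up front to a conservative constant, while the synthesis method carries the historical sampling error into the denominator of a single combined statistic.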


Biometrical Journal | 2010

Challenges to multiple testing in clinical trials

H. M. James Hung; Sue-Jane Wang

Multiple testing problems are complex in evaluating statistical evidence in pivotal clinical trials for regulatory applications. However, a common practice is to employ a general and rather simple multiple comparison procedure to handle them. Applying multiple comparison adjustments ensures proper control of type I error rates. In many practices, however, the emphasis on type I error rate control often leads to a choice of a statistically valid multiple test procedure while common sense is overlooked. The challenges begin with confusion in defining a relevant family of hypotheses for which the type I error rates need to be properly controlled. Multiple testing problems come in a wide variety, ranging from jointly testing multiple doses and endpoints, testing a composite endpoint, and testing non-inferiority and superiority, to studying the time of onset of a treatment effect and searching for a minimum effective dose or a patient subgroup in which the treatment effect lies. To select a valid and sensible multiple test procedure, the first step should be to tailor the selection to the study questions and to the ultimate clinical decision tree. Evaluation of statistical power performance should then come into play to fine-tune the selected procedure.
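As an example of the "general and rather simple" procedures the abstract contrasts with question-tailored strategies, the Holm step-down method can be sketched as follows (p-values are hypothetical):

```python
def holm_adjust(p_values, alpha=0.05):
    """Holm step-down procedure: sort the p-values, compare the k-th
    smallest against alpha / (m - k), and stop at the first failure.
    It is uniformly more powerful than plain Bonferroni while still
    controlling the familywise error rate under any dependence."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            rejected[i] = True
        else:
            break  # once one step fails, all larger p-values fail too
    return rejected

# Hypothetical p-values for three hypotheses.
print(holm_adjust([0.011, 0.03, 0.04]))  # [True, False, False]
```

Such a procedure is valid for any family of hypotheses, which is exactly why it can be applied without first asking whether that family matches the clinical decision tree.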


Journal of Biopharmaceutical Statistics | 2007

Issues with Statistical Risks for Testing Methods in Noninferiority Trial Without a Placebo Arm

H. M. James Hung; Sue-Jane Wang; Robert O'Neill

Noninferiority trials without a placebo arm often require an indirect statistical inference for assessing the effect of a test treatment relative to the placebo effect or relative to the effect of the selected active control treatment. The indirect inference involves the direct comparison of the test treatment with the active control from the noninferiority trial and the assessment, via some type of meta-analyses, of the effect of the active control relative to a placebo from historical studies. The traditional within-noninferiority-trial Type I error rate cannot ascertain the statistical risks associated with the indirect inference, though this error rate is of the primary consideration under the frequentist statistical framework. Another kind of Type I error rate, known as across-trial Type I error rate, needs to be considered in order that the statistical risks associated with the indirect inference can be controlled at a small level. Consideration of the two kinds of Type I error rates is also important for defining a noninferiority margin. For the indirect statistical inference, the practical utility of any method that controls only the across-trial Type I error rate at a fixed small level is limited.
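The across-trial Type I error rate described above can be illustrated by Monte Carlo simulation, treating the historical estimate of the active control's effect as random across repetitions. The sketch below uses hypothetical numbers and a synthesis-type statistic evaluated at the boundary of the preserved-fraction null:

```python
import random
from statistics import NormalDist

random.seed(1)
Z_CRIT = NormalDist().inv_cdf(0.975)

def across_trial_error(n_sim=200_000, true_cp=2.0, se_cp=0.6, se_tc=0.6, f=0.5):
    """Monte Carlo sketch of the across-trial Type I error rate: both the
    historical control-vs-placebo estimate and the NI trial estimate are
    random across repetitions, and the truth sits on the boundary of the
    null (the test drug preserves exactly fraction f of the control
    effect), where a synthesis-type statistic is exactly N(0, 1)."""
    true_tc = -(1 - f) * true_cp          # boundary of the null hypothesis
    hits = 0
    for _ in range(n_sim):
        d_cp = random.gauss(true_cp, se_cp)   # historical meta-estimate
        d_tc = random.gauss(true_tc, se_tc)   # NI trial estimate
        z = (d_tc + (1 - f) * d_cp) / (se_tc**2 + (1 - f)**2 * se_cp**2) ** 0.5
        hits += z > Z_CRIT
    return hits / n_sim

print(f"{across_trial_error():.4f}")  # close to the nominal 0.025
```

Note that this error rate averages over the randomness of the historical estimate, which is precisely what the within-trial Type I error rate of the direct test-versus-control comparison cannot capture.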


Journal of Biopharmaceutical Statistics | 2011

Regulatory Perspectives on Multiplicity in Adaptive Design Clinical Trials throughout a Drug Development Program

Sue-Jane Wang; H. M. James Hung; Robert O'Neill

A clinical research program for drug development often consists of a sequence of clinical trials that may begin with uncontrolled and nonrandomized trials, followed by randomized trials or randomized controlled trials. Adaptive designs are not infrequently proposed for use. In the regulatory setting, the success of a drug development program can be defined as the approval of the experimental treatment at a specific dose level, including regimen and frequency, based on replicated evidence from at least two confirmatory trials. In the early stage of clinical research, multiplicity issues are very broad. What is the maximum tolerable dose in an adaptive dose-escalation trial? What dose range should be considered in an adaptive dose-ranging trial? What is the minimum effective dose in an adaptive dose-response study, given the tolerability and the toxicity observable in short-term or premarketing trials? Is establishing the dose-response relationship more important, or is the ability to select a superior treatment with high probability? In the later stage of clinical research, multiplicity problems can be formulated with better focus, depending on whether the study is for exploration, to estimate or select design elements, or for labeling consideration. What is the study objective for an early-phase versus a later-phase adaptive clinical trial? How many doses are to be studied in the early exploratory adaptive trial versus in the confirmatory adaptive trial? Is the intended patient population well defined, or is the applicable patient population yet to be adaptively selected in the trial due to potential patient and/or disease heterogeneity? Is the primary efficacy endpoint well defined, or is it still under discussion, providing room for adaptation? What are the potential treatment indications that may adaptively lead to an intended-to-treat patient population and the primary efficacy endpoint?
In this work we stipulate the multiplicity issues with adaptive designs encountered in regulatory applications. For confirmatory adaptive design clinical trials, controlling studywise type I error and type II error is of paramount importance. For exploratory adaptive trials, we define the probability of correct selection of design features, e.g., dose, effect size, and the probability of correct decision for drug development. We assert that maximizing these probabilities would be critical to determine whether the drug development program continues or how to plan the confirmatory trials if the development continues.
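The probability of correct selection of a design feature such as dose can be estimated by simulation. The sketch below assumes a simple pick-the-winner rule over hypothetical dose-arm means with normally distributed responses; it is an illustration of the quantity being defined, not the authors' procedure:

```python
import random

random.seed(7)

def prob_correct_selection(true_means, sigma=1.0, n_per_arm=50, n_sim=20_000):
    """Monte Carlo estimate of the probability that a pick-the-winner
    exploratory stage selects the truly best dose arm: the arm with the
    largest observed sample mean is carried forward."""
    best = max(range(len(true_means)), key=lambda i: true_means[i])
    se = sigma / n_per_arm ** 0.5          # SE of each arm's sample mean
    correct = 0
    for _ in range(n_sim):
        obs = [random.gauss(mu, se) for mu in true_means]
        correct += max(range(len(obs)), key=lambda i: obs[i]) == best
    return correct / n_sim

# Hypothetical true mean responses for three dose arms.
print(prob_correct_selection([0.2, 0.35, 0.5]))
```

Maximizing an estimate of this kind over candidate sample sizes or adaptation rules is one concrete way to act on the go/no-go criterion described above.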


Journal of Biopharmaceutical Statistics | 2012

Ethnic Sensitive or Molecular Sensitive Beyond All Regions Being Equal in Multiregional Clinical Trials

Sue-Jane Wang; H. M. James Hung

For decades, clinical trials have been the primary mechanism for medical products to enter the marketplace. Over more than a decade, globalization of medical product development via a multiregional clinical trial (MRCT) approach has generated great enthusiasm because of tangible benefits in cost and time for drug development. There are, however, many challenges, including but not limited to design issues, statistical analysis methods, interpretation of extreme region performance, and in-process quality assurance issues. This article presents a number of examples illustrating the expected regional variability versus the precision of treatment effect estimates, both of which are generally affected by the type of primary efficacy endpoint evaluated. We explore region-driven intrinsic and extrinsic ethnic factors as potential explanations for regional heterogeneity caused by differences in medical practice and/or disease etiology. A Bayesian credible interval may be considered a viable approach to assess the robustness of a region-specific treatment effect. Ethnic-sensitive or molecular-sensitive region-driven designs may be explored to prospectively address the potential regional heterogeneity versus the potential predictiveness of causal genetic variants or molecular target biomarkers on treatment effect.
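One simple form the Bayesian credible interval idea mentioned above can take is a conjugate normal-normal model that shrinks a noisy region-specific estimate toward the overall trial estimate. All numbers below are hypothetical, and this is a sketch of the general idea rather than the authors' specific analysis:

```python
from statistics import NormalDist

def region_credible_interval(d_region, se_region, d_overall, se_overall,
                             level=0.95):
    """Conjugate normal-normal model: the overall trial estimate serves
    as the prior for a region's effect, so the noisy region-specific
    estimate is shrunk toward it; returns a posterior credible interval."""
    precision = 1 / se_region ** 2 + 1 / se_overall ** 2
    post_mean = (d_region / se_region ** 2
                 + d_overall / se_overall ** 2) / precision
    post_sd = (1 / precision) ** 0.5
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return post_mean - z * post_sd, post_mean + z * post_sd

# Hypothetical: a small region shows an extreme (negative) estimate while
# the overall trial shows a clear benefit with much better precision.
lo, hi = region_credible_interval(d_region=-0.1, se_region=0.5,
                                  d_overall=0.4, se_overall=0.1)
print(f"95% credible interval for the region: ({lo:.3f}, {hi:.3f})")
```

Here the posterior interval stays above zero, suggesting the extreme regional estimate is compatible with the overall treatment effect once its imprecision is accounted for.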


Biometrical Journal | 2013

Multiple comparisons in complex clinical trial designs.

H. M. James Hung; Sue-Jane Wang

Multiple comparisons have drawn a great deal of attention in the evaluation of statistical evidence in clinical trials for regulatory applications. As clinical trial methodology becomes increasingly complex in order to properly take many practical factors into consideration, the multiple testing paradigm widely employed for regulatory applications may not suffice to interpret the results of an individual trial or of multiple trials. In a large outcome trial, an increasing need to study more than one dose complicates the proper application of multiple comparison procedures. Additional challenges surface when a special endpoint, such as mortality, may need to be tested with multiple clinical trials combined, especially under group sequential designs. Another interesting question is how to study mortality or morbidity endpoints together with symptomatic endpoints in an efficient way, where the former type of endpoint is often studied in only one single trial but the latter type is usually studied in at least two independent trials. This article discusses the insufficiency of the widely used paradigm of applying only per-trial multiple comparison procedures and expands the utility of these procedures to such complex trial designs. A number of viable expanded strategies are stipulated.


Journal of Biopharmaceutical Statistics | 2016

Rejoinder to Dr. Cyrus R. Mehta.

H. M. James Hung; Sue-Jane Wang; P. Yang

We thank the author for his comments on our article. His letter covers a few points relevant to the context of possibly adjusting the sample size during the course of a conventional fixed design clinical trial. In this response, we highlight them. In a two-stage adaptive design clinical trial (adaptation refers to sample size adjustment herein), by combining the Z tests from the two stages with any fixed weights satisfying the condition that the sum of the squared weights equals one, the resulting weighted Z test (e.g., the CHW test), using a proper fixed percentile of the standard normal distribution as the critical value, controls the unconditional Type I error probability, but it has the undesirable property that patients in the two stages are treated unequally in the statistical analysis. The Type I error control of the weighted Z test does not depend on any adaptation rule; its conditional Type I error probability may not be controlled. In contrast, the conventional Z test in the presence of sample size reestimation enjoys the “one patient, one vote” property, but it may need to use a nonconstant critical value that depends on the realization of the first-stage data, mostly through the first-stage Z statistic. By using such a nonconstant critical value, perceived as an additional flexibility to control its conditional Type I error, the conventional Z test controls the unconditional Type I error, but this flexibility comes at the price of requiring strict adherence to the trial design specification, which may be difficult to guarantee or verify, as pointed out by Hung et al. (2014) and acknowledged in his letter.
The letter also points out that a simple solution to this adherence issue is to use a proper fixed percentile of the standard normal distribution as the fixed critical value, such that the resulting rejection region of the conventional Z test is conservative on unconditional Type I error regardless of the value of the first-stage Z test or, equivalently, of conditional or predictive power, as proposed in Mehta and Pocock (2011). The letter, however, does not mention the required condition, also stated in Mehta and Pocock (2011), that the sample size can never be decreased. This condition may likewise be difficult to guarantee or verify in practice, as pointed out by Hung et al. (2014).
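The weight condition and the unconditional Type I error control of the CHW-type weighted Z test described above can be checked by simulation. The adaptation rule below is hypothetical; the key point is that the prefixed weights, not the realized sample sizes, enter the test statistic:

```python
import random
from statistics import NormalDist

random.seed(3)
CRIT = NormalDist().inv_cdf(0.975)  # one-sided 0.025 critical value

def chw_type1_error(w1_sq=0.5, n_sim=100_000):
    """Simulate a CHW-type weighted Z test under H0 with a data-driven
    second-stage sample size.  Because the weights are prefixed with
    w1**2 + w2**2 = 1, the combined statistic stays N(0, 1) under H0 no
    matter how the second stage is resized, so the unconditional Type I
    error stays at the nominal level."""
    w1, w2 = w1_sq ** 0.5, (1 - w1_sq) ** 0.5
    rejections = 0
    for _ in range(n_sim):
        z1 = random.gauss(0, 1)          # first-stage Z under H0
        n2 = 200 if z1 > 0.5 else 50     # hypothetical adaptation rule
        # the second-stage Z is N(0, 1) under H0 for any realized n2,
        # which is exactly why the adaptation rule cannot inflate alpha
        z2 = random.gauss(0, 1)
        rejections += w1 * z1 + w2 * z2 > CRIT
    return rejections / n_sim

print(f"{chw_type1_error():.4f}")  # close to the nominal 0.025
```

The "one patient, one vote" criticism is also visible here: z1 and z2 receive the fixed weights w1 and w2 regardless of how many patients each stage actually enrolled.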
