H. M. James Hung
Food and Drug Administration
Publications
Featured research published by H. M. James Hung.
Controlled Clinical Trials | 2002
Sue-Jane Wang; H. M. James Hung; Yi Tsong
Increasingly often, the study objective in an active controlled clinical trial without a placebo arm is to show that a new treatment is no less effective than the active control treatment within some noninferiority range. Two issues behind this objective are whether the new treatment is efficacious relative to a putative placebo and whether the new treatment preserves a certain fraction of the effect of the active control. To address these issues, two types of statistical analysis methods are employed in recent pharmaceutical applications. In one type of method, a noninferiority margin is determined, and then the relative effect of the new treatment versus the control is compared against the margin to test noninferiority and the efficacy of the new treatment. In the other type of method, a synthetic statistic is constructed to directly estimate or test the effect of the new treatment relative to the putative placebo without resorting to a noninferiority argument. Preservation of the control effect can also be estimated and tested. These methods carry some crucial assumptions. The effect of the active control is often estimated from a collection of historical placebo controlled trials using the random effects modeling of DerSimonian and Laird. In this work we find that the statistical validity of the latter method rests heavily on the assumptions that the control effect is not reduced in the current active controlled trial population compared to the historical trials and that a normal approximation is appropriate in the random effects modeling. This type of method is very sensitive to departures from these assumptions. In contrast, the former method is ultraconservative in terms of type I error when the assumptions are met and can be anticonservative when the control effect is substantially smaller in the active controlled trial than estimated from the historical placebo controlled trials.
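The DerSimonian-Laird random effects model mentioned in the abstract is the standard method-of-moments approach for pooling the active control effect across historical placebo controlled trials. A minimal sketch is below; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis.

    effects   : per-trial estimates of the control-vs-placebo effect
    variances : per-trial sampling variances of those estimates
    Returns (pooled effect, tau^2, standard error of pooled effect).
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                          # fixed-effect weights
    mu_fe = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled mean
    q = np.sum(w * (effects - mu_fe) ** 2)       # Cochran's Q heterogeneity statistic
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)           # method-of-moments tau^2, truncated at 0
    w_re = 1.0 / (variances + tau2)              # random-effects weights
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, tau2, se_re
```

When the historical trials are perfectly homogeneous, tau^2 truncates to zero and the estimate reduces to the fixed-effect pooled mean; heterogeneity inflates both tau^2 and the standard error, which is exactly where the normal-approximation assumption discussed in the abstract becomes consequential.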
Journal of Biopharmaceutical Statistics | 2002
Lu Cui; H. M. James Hung; Sue Jane Wang; Yi Tsong
Interpretation of subgroup findings is a difficult task. The aim of this article is to clarify confusion about subgroup analysis and to give some practical suggestions on how to avoid mistakes in interpreting subgroup outcomes. We believe that the correct interpretation of subgroup findings is closely related to the intrinsic statistical properties and validity of the subgroup analysis. A systematic discussion of subgroup analysis from a statistical point of view will be helpful to clinical trial practitioners. *The views expressed in this paper are those of the authors and not necessarily those of Aventis Pharmaceuticals, Inc., or the U.S. Food and Drug Administration. This research was conducted when Dr. Lu Cui was a senior mathematical statistician at the FDA.
Journal of Biopharmaceutical Statistics | 2004
H. M. James Hung; Sue-Jane Wang
Abstract For noninferiority testing with the maximum allowable noninferiority margin prespecified, one can perform valid statistical testing at the same alpha level for multiple noninferiority hypotheses with margins smaller than this maximum margin. This is easily comprehensible because only one confidence interval is used to assess which margins within the interval bounded by the maximum margin can be ruled out. If different confidence intervals are used, e.g., the interval generated from the intent-to-treat population is used for testing superiority and the interval generated from the per-protocol population is used for testing noninferiority, the problem of multiplicity will surface and adjustment of alpha for each test may be needed. All of this is predicated on the condition that at least a certain element of the maximum allowable noninferiority margin, whether it is the entire margin or the fraction of the active control effect to be retained, must be fixed in advance. None of these elements can be allowed to be influenced, directly or indirectly, by any analysis of the noninferiority trial data. Otherwise, the noninferiority analysis may be invalid.
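The point that a single confidence interval supports testing every margin inside the prespecified maximum can be sketched numerically. In this illustrative example (names and numbers are hypothetical, not from the paper), the treatment-minus-control difference is estimated once, and each candidate margin m is ruled out exactly when the lower confidence limit exceeds -m:

```python
def ni_conclusions(diff, se, margins, z=1.96):
    """For a single two-sided 95% CI on the treatment-minus-control
    difference (larger is better), report which prespecified
    noninferiority margins are ruled out.

    Noninferiority at margin m holds if the CI lower bound exceeds -m.
    One interval serves all margins, so no alpha adjustment is needed.
    """
    lower = diff - z * se
    return {m: lower > -m for m in margins}
```

With an estimated difference of -0.5 and standard error 0.4, the lower 95% limit is about -1.28, so noninferiority holds for margins of 1.5 and 2.0 but not 1.0; all three conclusions come from the same interval at the same confidence level.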
European Neuropsychopharmacology | 2011
Sue-Jane Wang; H. M. James Hung; Robert T. O'Neill
In central nervous system therapeutic areas, there are general concerns with establishing efficacy, which are thought to be sources of the high attrition rate in drug development. For instance, efficacy endpoints are often subjective and highly variable. There is a lack of robust or operational biomarkers to substitute for soft endpoints. In addition, animal models are generally poor, unreliable, or unpredictive. To increase the probability of success in a central nervous system drug development program, adaptive design has been considered as an alternative that provides flexibility relative to conventional fixed designs and has been viewed as having the potential to improve the efficiency of the drug development process. In addition, successful implementation of an adaptive design trial relies on establishing a trustworthy logistics model that ensures the integrity of the trial conduct. In accordance with the spirit of the recently released U.S. Food and Drug Administration adaptive design draft guidance document, this paper enumerates the critical considerations, from both methodological and regulatory aspects, in reviewing an adaptive design proposal and discusses two general types of adaptation: sample size planning and re-estimation, and two-stage adaptive designs. Literature examples of adaptive designs in central nervous system trials are used to highlight the principles laid out in the U.S. FDA draft guidance. Four logistics models seen in regulatory adaptive design applications are introduced. In general, complex adaptive designs require simulation studies to assess the design performance. For an adequate and well-controlled clinical trial, if a Learn-and-Confirm adaptive selection approach is considered, the study-wise type I error rate should be adhered to. However, it is controversial to use the simulated type I error rate to address strong control of the study-wise type I error rate.
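One of the two adaptations the paper discusses, sample size re-estimation, can be sketched with the standard normal-approximation sample size formula for a two-arm comparison: at an interim look, the nuisance standard deviation is re-estimated and the per-arm sample size is recomputed for the originally targeted effect size. The function below is a generic illustration, not the paper's procedure.

```python
import math
from statistics import NormalDist

def reestimated_n_per_arm(sd_interim, delta, alpha=0.025, power=0.90):
    """Per-arm sample size for a two-arm z-test on a continuous endpoint,
    recomputed from an interim estimate of the common standard deviation.

    Uses n = 2 * (z_alpha + z_beta)^2 * (sd / delta)^2 with a one-sided
    alpha, where delta is the targeted treatment difference.
    """
    z_a = NormalDist().inv_cdf(1 - alpha)   # critical value, one-sided alpha
    z_b = NormalDist().inv_cdf(power)       # quantile for the desired power
    n = 2.0 * (z_a + z_b) ** 2 * (sd_interim / delta) ** 2
    return math.ceil(n)
```

For example, with an interim standard deviation of 1.0, a targeted difference of 0.5, one-sided alpha of 0.025, and 90% power, the formula gives 85 subjects per arm; if the interim data suggest a larger standard deviation, the re-estimated size grows quadratically with it.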
Journal of Biopharmaceutical Statistics | 2014
Sue-Jane Wang; H. M. James Hung
This regulatory research provides possible approaches for improving conventional subgroup analysis in a fixed design setting. The interaction-to-overall effects ratio is recommended in the planning stage for potential predictors whose prevalence is at most 50%, and its observed ratio is recommended in the analysis stage for proper subgroup interpretation if the sample size is planned only to target the overall effect size. We illustrate using regulatory examples and underscore the importance of striving for balance between safety and efficacy when considering a regulatory recommendation of a label restricted to a subgroup. A set of decision rules gives guidance for rigorous subgroup-specific conclusions.
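One plausible reading of the interaction-to-overall effects ratio is sketched below, taking the interaction effect as the difference between the subgroup and complement treatment effects and the overall effect as their prevalence-weighted average. This is an illustrative construction only; the paper's exact definition may differ, and all names here are hypothetical.

```python
def interaction_to_overall_ratio(effect_subgroup, effect_complement, prevalence):
    """Illustrative interaction-to-overall effects ratio.

    effect_subgroup   : treatment effect in the subgroup of interest
    effect_complement : treatment effect in the complementary subgroup
    prevalence        : subgroup prevalence in (0, 0.5], per the abstract

    Interaction effect = subgroup effect minus complement effect.
    Overall effect     = prevalence-weighted average of the two.
    """
    overall = prevalence * effect_subgroup + (1.0 - prevalence) * effect_complement
    interaction = effect_subgroup - effect_complement
    return interaction / overall
```

A ratio near zero indicates a homogeneous effect, while a large ratio flags a subgroup whose effect diverges substantially from the overall estimate that the trial was sized to detect.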
Statistics in Medicine | 2012
Sue-Jane Wang; H. M. James Hung; Robert T. O'Neill
In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances in adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived as increasing efficiency and being more cost-effective than the fixed design paradigm for drug development. Much interest in adaptive designs is in studies with two stages, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages will be combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what are exploratory goals and what are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve planning of subsequent A&WC trials. One type of adaptation that is receiving attention is the re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design.
Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information design. We highlight the substantial risk of planning the sample size for confirmatory trials when the available information is highly uninformative, and describe the advantages of adaptive statistical information designs for planning exploratory trials. Practical experiences and strategies, as lessons learned from more recent adaptive design proposals, are discussed to pinpoint the improved utility of adaptive design clinical trials and their potential to increase the chance of successful drug development.
Journal of Biopharmaceutical Statistics | 2012
Chi-Tian Chen; H. M. James Hung; Chin-Fu Hsiao
To speed up drug development and allow faster access to medicines for patients globally, conducting multiregional trials incorporating subjects from many countries around the world under the same protocol may be desired. Several statistical methods have been proposed for the design and evaluation of multiregional trials. However, in most of the recent approaches for sample size determination in multiregional trials, a common treatment effect of the primary endpoint across regions is usually assumed. In practice, a difference in treatment effect due to regional differences (e.g., ethnic differences) might be expected. In this article, a random effect model for heterogeneous treatment effects across regions is proposed for the design and evaluation of multiregional trials. We also address determination of the number of subjects in a specific region needed to establish consistency of treatment effects between that region and the entire group.
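Designs like this are typically evaluated by simulation: regional effects are drawn from a random effects model, and one checks how often the observed effect in a region of interest retains at least a prespecified fraction (often 50%) of the overall effect. The sketch below is a generic Monte Carlo illustration under stated assumptions (equal regional weights, normal sampling error), not the paper's method.

```python
import numpy as np

def consistency_probability(mu, tau, sigma, n_per_region, regions,
                            pi=0.5, n_sim=2000, seed=0):
    """Monte Carlo estimate of the probability that region 0's observed
    treatment effect retains at least a fraction pi of the overall effect.

    mu, tau      : mean and between-region SD of the true regional effects
    sigma        : within-arm SD of the continuous endpoint
    n_per_region : subjects per arm per region (two-arm trial)
    """
    rng = np.random.default_rng(seed)
    se = sigma * np.sqrt(2.0 / n_per_region)  # SE of a two-arm difference per region
    hits = 0
    for _ in range(n_sim):
        true_effects = rng.normal(mu, tau, regions)  # random regional effects
        est = rng.normal(true_effects, se)           # observed regional effects
        overall = est.mean()                         # equal regional weights, for simplicity
        if est[0] >= pi * overall:
            hits += 1
    return hits / n_sim
```

Raising tau (more regional heterogeneity) or shrinking the regional sample size lowers this probability, which is what drives the regional sample size requirement the abstract refers to.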
Journal of the American College of Cardiology | 2012
John H. Lawrence; Steve Bai; H. M. James Hung; Robert T. O'Neill
To the Editor: The global treatment effect in a multinational trial design can be difficult to interpret when that global treatment effect does not reflect a consistent finding within the trial regional sites. This global treatment effect may not apply to the populations of each of the countries
Biometrical Journal | 2013
Sue Jane Wang; Frank Bretz; Alex Dmitrienko; Jason C. Hsu; H. M. James Hung; Mohammad F. Huque; Gary G. Koch
Motivated by a complex study design aiming at a definitive evidential setting, a panel forum among academia, industry, and US regulatory statistical scientists was held at the 7th International Conference on Multiple Comparison Procedures (MCP) to comment on the multiplicity problem. It is well accepted that studywise or familywise type I error rate control is the norm for confirmatory trials. But the criteria beyond a single confirmatory trial remain uncharted territory. The case example describes a Phase III program consisting of two placebo-controlled multiregional clinical trials, identical in design, intended to support registration for treatment of a chronic condition in the lung. The case presents a sophisticated multiplicity problem at several levels: four primary endpoints, two doses, two studies, two regions with different regulatory requirements, and one major protocol amendment to the original statistical analysis plan, which the panelists had a chance to study before the forum took place. There were differences in professional perspectives among the panelists, laid out by section. Nonetheless, irrespective of the amendment, it may be arguable whether the two studies are poolable for the analysis of the two prespecified primary endpoints. How should the study finding be reported in a scientific journal if one health authority approves while the other does not? It is tempting to address Phase III program-level multiplicity, motivated by the increasing complexity of the partial hypotheses posed across studies. Novel thinking about MCP procedures beyond the individual-study level (studywise or familywise, as predefined) and across the multiple-study level (experimentwise and sometimes programwise) will become an important research problem facing scientific and regulatory challenges.
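Within a single study, familywise error control across a family such as endpoints-by-doses is routinely handled by stepwise procedures; the Holm step-down procedure is a standard example and gives a concrete baseline against which the program-level questions above can be framed. The sketch below is generic, not the case example's analysis plan.

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm step-down procedure for familywise error rate control.

    pvals : p-values for one family of hypotheses
            (e.g., endpoints x doses within a single study)
    Returns a list of booleans: True where the hypothesis is rejected.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-values
    rejected = [False] * m
    for rank, i in enumerate(order):
        # Step down: compare the k-th smallest p-value to alpha / (m - k + 1)
        if pvals[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # once one hypothesis is retained, all larger p-values are too
    return rejected
```

Holm controls the familywise error rate within one predefined family; the open question the panel raised is precisely how, or whether, such control should extend across two studies and two regulatory regions.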
Journal of Biopharmaceutical Statistics | 2012
H. M. James Hung; Sue-Jane Wang
Statistical testing in clinical trials can be complex when the statistical distribution of the test statistic involves a nuisance parameter. Some types of nuisance parameters, such as the standard deviation of a continuous response variable, can be handled without too much difficulty. Other types of nuisance parameters, specifically those associated with the main parameter under testing, can be difficult to handle. Without knowledge of the possible value of such a nuisance parameter, the maximum type I error associated with testing the main parameter may occur at an extreme value of the nuisance parameter. A well-known example is the intersection-union test for comparing a combination drug with its two component drugs, where the nuisance parameter is the mean difference between the two components. Knowledge of the possible range of values of this mean difference may help enhance the clinical trial design. For instance, if interim internal data suggest that this mean difference falls into a possible range of values, then the sample size may be reallocated after the interim look to possibly improve the efficiency of statistical testing. This research sheds some light on the possible power advantage from such a sample size reallocation at the interim look.
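The intersection-union test named in the abstract is often implemented as a "min test": the combination is declared superior to both components only if each pairwise comparison is individually significant at level alpha, and because rejection requires both, the overall level is at most alpha without any multiplicity adjustment. A minimal sketch (illustrative interface, z-statistics assumed available):

```python
from statistics import NormalDist

def min_test(z_combo_vs_a, z_combo_vs_b, alpha=0.025):
    """Intersection-union ("min") test for a fixed-dose combination.

    z_combo_vs_a, z_combo_vs_b : z-statistics for combination vs. each
                                 component (larger favors the combination)
    Rejects (returns True) only if BOTH comparisons clear the one-sided
    alpha-level critical value, i.e., the minimum z does.
    """
    z_crit = NormalDist().inv_cdf(1 - alpha)  # e.g., 1.96 for alpha = 0.025
    return min(z_combo_vs_a, z_combo_vs_b) >= z_crit
```

The conservatism of this test depends on the mean difference between the two components, the nuisance parameter the abstract discusses: when one component is much weaker, the min is effectively a single comparison, while near-equal components make the test most conservative, which is what an interim sample size reallocation could exploit.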