Siu Hung Cheung
The Chinese University of Hong Kong
Publications
Featured research published by Siu Hung Cheung.
Annals of Statistics | 2007
Li-Xin Zhang; Feifang Hu; Siu Hung Cheung; Wai-Sum Chan
Response-adaptive designs have been extensively studied and used in clinical trials. However, there is a lack of a comprehensive study of response-adaptive designs that include covariates, despite their importance in clinical trials. Because the allocation scheme and the estimation of parameters are affected by both the responses and the covariates, covariate-adjusted response-adaptive (CARA) designs are very complex to formulate. In this paper, we overcome the technical hurdles and lay out a framework for general CARA designs for the allocation of subjects to K (≥ 2) treatments. The asymptotic properties are studied under certain widely satisfied conditions. The proposed CARA designs can be applied to generalized linear models. Two important special cases, the linear model and the logistic regression model, are considered in detail.
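The covariate-dependent allocation idea lends itself to a compact illustration. Below is a minimal, hypothetical Python sketch of a CARA-style rule for K = 2 treatments under a logistic response model: after a burn-in of equal randomization, each incoming patient is allocated with probability tilted toward the arm whose fitted logistic model predicts the higher success probability at that patient's covariate value. The parameter values, burn-in length, and tilting rule are illustrative assumptions, not the paper's specific design.

```python
# A minimal, hypothetical sketch of a CARA-style allocation for K = 2
# treatments under a logistic response model. It illustrates the general
# mechanism (allocation probabilities depend on accrued responses AND the
# incoming patient's covariate), not the authors' exact rule.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)
true_beta = {0: (-0.2, 0.8), 1: (0.5, -0.3)}   # assumed (intercept, slope) per arm

def fit_logistic(x, y, iters=200, lr=0.1):
    """Crude gradient-ascent logistic fit, adequate for the sketch."""
    b = np.zeros(2)
    X = np.column_stack([np.ones_like(x), x])
    for _ in range(iters):
        p = expit(X @ b)
        b += lr * X.T @ (y - p) / len(y)
    return b

arm, cov, resp = [], [], []
for t in range(200):
    x_new = rng.normal()                       # incoming patient's covariate
    if t < 20:                                 # burn-in: equal randomization
        pi = 0.5
    else:
        probs = []
        for k in (0, 1):
            idx = [i for i, a in enumerate(arm) if a == k]
            b = fit_logistic(np.array(cov)[idx], np.array(resp)[idx])
            probs.append(expit(b[0] + b[1] * x_new))
        # tilt allocation toward the arm with higher estimated success at x_new
        pi = probs[1] / (probs[0] + probs[1])
    a = int(rng.random() < pi)
    y = int(rng.random() < expit(true_beta[a][0] + true_beta[a][1] * x_new))
    arm.append(a); cov.append(x_new); resp.append(y)

print("proportion allocated to arm 1:", np.mean(arm))
```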
North American Actuarial Journal | 2011
Johnny Siu-Hang Li, PhD, FSA; Wai-Sum Chan, PhD, FSA, CERA; Siu Hung Cheung
In recent years mortality has improved considerably faster than had been predicted, resulting in unforeseen mortality losses for annuity and pension liabilities. Actuaries have considered various models to make stochastic mortality projections, one of which is the celebrated Lee-Carter model. In using the Lee-Carter model, mortality forecasts are made on the basis of the assumed linearity of a mortality index, the parameter k_t, in the model. However, if this index is indeed not linear, forecasts will tend to be biased and inaccurate. A primary objective of this paper is to examine the linearity of this index by rigorous statistical hypothesis tests. Specifically, we consider Zivot and Andrews' procedure to determine if there are any structural breaks in the Lee-Carter mortality indexes for the general populations of England and Wales and the United States. The results indicate that there exists a statistically significant structural breakpoint in each of the indexes, suggesting that forecasters should be extra cautious when they extrapolate these indexes. Our findings also provide sound statistical evidence for some demographers' observation of an accelerated mortality decline after the mid-1970s.
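As a rough illustration of the structural-break diagnostic described, the sketch below applies the Zivot-Andrews test to a simulated (not real) Lee-Carter-style index k_t with a built-in slope break, assuming a statsmodels version (0.10 or later) that ships zivot_andrews. The series and break point are invented for the example.

```python
# A minimal sketch: Zivot-Andrews test on a synthetic mortality index k_t
# whose downward drift accelerates partway through, mimicking the kind of
# break the paper detects. Assumes statsmodels >= 0.10.
import numpy as np
from statsmodels.tsa.stattools import zivot_andrews

rng = np.random.default_rng(1)
t = np.arange(80)
# synthetic k_t: slope changes from -0.5 to -1.2 at t = 45 (the "break")
kt = np.where(t < 45, -0.5 * t, -0.5 * 45 - 1.2 * (t - 45)) + rng.normal(0, 1.5, 80)

# regression='ct' allows a break in both level and trend, the relevant
# alternative for a drifting mortality index
stat, pvalue, crit, _, bpidx = zivot_andrews(kt, regression='ct')
print(f"ZA statistic = {stat:.2f}, p-value = {pvalue:.3f}, "
      f"estimated break at t = {bpidx}")
```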
Annals of Statistics | 2011
Li-Xin Zhang; Feifang Hu; Siu Hung Cheung; Wai-Sum Chan
Urn models have been widely studied and applied in both scientific and social science disciplines. In clinical studies, the adoption of urn models in treatment allocation schemes has proved to be beneficial to researchers, by providing more efficient clinical trials, and to patients, by increasing the likelihood of receiving the better treatment. In this paper, we propose a new and general class of immigrated urn (IMU) models that incorporates the immigration mechanism into the urn process. Theoretical properties are developed and the advantages of the IMU models are discussed. In general, the IMU models have smaller variabilities than the classical urn models, yielding more powerful statistical inferences in applications. Illustrative examples are presented to demonstrate the wide applicability of the IMU models. The proposed IMU framework, including many popular classical urn models, not only offers a unified perspective for us to comprehend the urn process, but also enables us to generate several novel urn models with desirable properties.
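A toy simulation conveys the immigration mechanism and why it reduces variability. In the hypothetical sketch below, each draw reinforces the drawn color (a Polya-type update) and a fixed immigration vector is added at every step; comparing the replicate-to-replicate spread of the final color proportion with and without immigration illustrates the variance reduction. The specific update is an invented special case, not the paper's general IMU class.

```python
# A hypothetical simulation of an urn process with an immigration component:
# besides reinforcing the drawn color, a fixed quantity of balls "immigrates"
# into the urn at each step. Parameter choices are illustrative only.
import numpy as np

rng = np.random.default_rng(2)

def simulate_imu(steps=1000, urn=(1.0, 1.0), add=1.0, immigration=(0.5, 0.5)):
    """Draw a ball, replace it plus `add` of the same color, then add the
    immigration vector; return the final proportion of color 0."""
    urn = np.array(urn, dtype=float)
    for _ in range(steps):
        p = urn / urn.sum()
        color = rng.choice(2, p=p)
        urn[color] += add                  # reinforcement of the drawn color
        urn += np.array(immigration)       # immigration at every step
    return urn[0] / urn.sum()

# immigration dampens fluctuations: compare spread of the final proportion
for imm in [(0.0, 0.0), (0.5, 0.5)]:
    finals = [simulate_imu(immigration=imm) for _ in range(200)]
    print(f"immigration {imm}: sd of final proportion = {np.std(finals):.3f}")
```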
Computational Statistics & Data Analysis | 1992
Siu Hung Cheung; Burt Holland
Dunnett's widely used (Journ. Amer. Statist. Assoc. 50, 1955) procedure for one- and two-sided comparisons of all active treatments with a control while maintaining a designated overall Type I error rate α was extended by Cheung and Holland (Biometrics 47, 1991) to the situation where one wishes to make all such comparisons simultaneously in each of r groups while maintaining α control over the probability of making any Type I errors. Whereas Cheung and Holland (1991) assumed a common sample size for all treatments in every group, the present paper discusses the frequently encountered case where the treatment sample sizes may differ. The probability distributions of the appropriate statistics are derived for this case, and tables of the upper percentage points of these distributions are constructed with the aid of Dunnett's (Applied Statistics 38, 1989) algorithm. The application of this new procedure is illustrated with the reanalysis of sample data from an animal physiology experiment.
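In place of Dunnett's exact algorithm, a Monte Carlo sketch shows what these upper percentage points are: the 1 - α quantile of the largest absolute comparison-with-control statistic across r independent groups with unequal treatment sample sizes. The known-variance (z) analogue below, with made-up sample sizes, is a simplification of the studentized statistics the tables actually cover.

```python
# Monte Carlo approximation (not Dunnett's 1989 exact algorithm) of the
# upper percentage point of the max |T| statistic over r groups, each
# comparing k treatments with unequal sample sizes to a control.
# Known-variance (z) simplification; sample sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
alpha, r = 0.05, 3
n_control = 20
n_treat = [12, 18, 25]            # unequal treatment sample sizes
reps = 50_000

max_stats = np.empty(reps)
for b in range(reps):
    m = []
    for _ in range(r):            # independent groups
        z0 = rng.normal(0, 1 / np.sqrt(n_control))   # control sample mean
        for n in n_treat:
            zi = rng.normal(0, 1 / np.sqrt(n))       # treatment sample mean
            # two-sided comparison-with-control statistic under H0
            m.append(abs(zi - z0) / np.sqrt(1 / n + 1 / n_control))
    max_stats[b] = max(m)

print(f"Monte Carlo upper {alpha:.0%} point: {np.quantile(max_stats, 1 - alpha):.3f}")
```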
Journal of Statistical Planning and Inference | 2002
Koon Shing Kwong; Burt Holland; Siu Hung Cheung
When performing simultaneous statistical tests, the Type I error concept most commonly controlled by analysts is the familywise error rate, i.e., the probability of committing at least one Type I error. However, this criterion is unduly stringent for some practical situations and therefore may not be appropriate. An alternative concept of error control was provided by Benjamini and Hochberg (J. Roy. Statist. Soc. B 57 (1995) 289) who advocate control of the expected proportion of falsely rejected hypotheses which they term the false discovery rate or FDR. These authors devised a step-up procedure for controlling the FDR. In this article, when the joint distribution of test statistics is known, continuous, and positive regression dependent on each one from a subset of true null hypotheses, we derive and discuss a modification of their procedure which affords increased power. An example is provided to illustrate our proposed method.
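For reference, the unmodified Benjamini-Hochberg step-up procedure that the paper builds on can be written in a few lines: sort the p-values, find the largest k with p_(k) <= kq/m, and reject that hypothesis together with all smaller p-values. The sketch below implements only this standard procedure; the paper's more powerful modification is not reproduced here.

```python
# The standard Benjamini-Hochberg step-up procedure: reject H_(1), ..., H_(k*)
# where k* is the largest k with p_(k) <= k * q / m.
import numpy as np

def bh_step_up(pvalues, q=0.05):
    """Return a boolean rejection mask controlling the FDR at level q."""
    p = np.asarray(pvalues)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        kstar = np.max(np.nonzero(below)[0])   # largest k with p_(k) <= kq/m
        reject[order[: kstar + 1]] = True      # step-up: reject all smaller p
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.09, 0.20, 0.63]
print(bh_step_up(pvals))   # rejects the two smallest p-values here
```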
Biometrics | 1991
Siu Hung Cheung; Burt Holland
Dunnett's (1955, Journal of the American Statistical Association 50, 1096-1121) widely used procedure for one- and two-sided comparisons of all active treatments with a control while maintaining a designated overall Type I error rate α is extended to the situation where one wishes to make all such comparisons simultaneously in each of r groups while maintaining α control over the probability of making any Type I errors. For the case of balanced sampling we derive the probability distribution of the appropriate statistics. Tables of the upper percentage points of these distributions are presented. The application of the new procedure is illustrated with the reanalyses of sample data from animal pathology and agricultural experiments.
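For the single-group building block, recent SciPy versions expose Dunnett's many-to-one comparison directly. The sketch below assumes SciPy 1.11 or later (which provides scipy.stats.dunnett) and uses made-up data; the r-group extension derived in the paper is not available in SciPy.

```python
# Single-group Dunnett comparison of two treatments with a control,
# assuming SciPy >= 1.11 (scipy.stats.dunnett). Data are invented.
import numpy as np
from scipy.stats import dunnett

rng = np.random.default_rng(4)
control = rng.normal(10.0, 2.0, size=15)
treat_a = rng.normal(11.5, 2.0, size=15)   # true shift vs. control
treat_b = rng.normal(10.2, 2.0, size=15)   # essentially null

res = dunnett(treat_a, treat_b, control=control, alternative='two-sided')
print("statistics:", np.round(res.statistic, 3))
print("p-values:  ", np.round(res.pvalue, 4))
```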
Journal of The Royal Statistical Society Series B-statistical Methodology | 2002
Burt Holland; Siu Hung Cheung
A criticism of multiple-comparison procedures is that the family of inferences over which an error rate is controlled is often arbitrarily selected, yet the conclusion may depend heavily on the choice of the family. Such ambiguity is most likely in large exploratory studies requiring numerous simultaneous inferences. In ambiguous situations it is desirable that results of multiple-comparison procedures depend little on the chosen family. To assess this, we propose several familywise robustness criteria to evaluate such procedures, and we find some of their properties theoretically and by simulation. Procedures that control the false discovery rate seem to be familywise robust.
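The familywise robustness question can be dramatized with a small simulation: hold one hypothesis's p-value fixed and watch whether it is still rejected as the surrounding family grows. The hypothetical sketch below compares Bonferroni (FWER control) with Benjamini-Hochberg (FDR control), seeding the larger families with a fixed fraction of strong signals as in a large exploratory study; the family composition is an assumption of the example, not a criterion from the paper.

```python
# Hypothetical illustration of familywise robustness: does the rejection of
# one fixed hypothesis survive as the family grows? Bonferroni vs. BH.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(5)
p_fixed = 0.004                     # p-value of the hypothesis we track
for m in (10, 100, 1000):
    n_sig = int(0.2 * (m - 1))      # assumed fraction of real signals
    extra = np.concatenate([rng.uniform(0, 1e-4, n_sig),        # signals
                            rng.uniform(size=m - 1 - n_sig)])   # nulls
    pvals = np.concatenate([[p_fixed], extra])
    bonf = multipletests(pvals, alpha=0.05, method='bonferroni')[0][0]
    bh = multipletests(pvals, alpha=0.05, method='fdr_bh')[0][0]
    print(f"family size {m:4d}: Bonferroni rejects = {bonf}, BH rejects = {bh}")
```

Under this setup the FDR procedure keeps rejecting the fixed hypothesis as the family grows while Bonferroni does not, which is the sense in which FDR-controlling procedures appear familywise robust.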
Statistics in Medicine | 2012
Koon Shing Kwong; Siu Hung Cheung; Anthony J. Hayter; Miin-Jye Wen
Non-inferiority (NI) trials are becoming increasingly popular. The main purpose of NI trials is to assert the efficacy of a new treatment compared with an active control by demonstrating that the new treatment maintains a substantial fraction of the treatment effect of the control. Most of the statistical testing procedures in this area have been developed for three-arm NI trials in which a new treatment is compared with an active control in the presence of a placebo. However, NI trials frequently involve comparisons of several new treatments with a control, such as in studies involving different doses of a new drug or different combinations of several new drugs. In seeking an adequate testing procedure for such cases, we use a new approach that modifies existing testing procedures to cover circumstances in which several new treatments are present. We also give methods and algorithms to produce the optimal sample size configuration. In addition, we discuss the advantages of using different margins for the assay sensitivity test between the active control and the placebo and the NI test between the new treatments and the active control. We illustrate the new approach by using data from a clinical trial.
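The retention-of-effect hypothesis underlying such trials reduces to a linear contrast. The sketch below shows the standard three-arm, known-variance form, testing H0: mu_T - mu_P <= theta * (mu_C - mu_P), with hypothetical data and retention fraction theta; the paper's multi-treatment procedures generalize this single test.

```python
# Standard three-arm retention-of-effect test (known-variance form):
# reject H0 when the new treatment retains more than a fraction theta of
# the control's effect over placebo. Data and theta are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
theta, sigma = 0.8, 4.0
n_T = n_C = n_P = 60
x_T = rng.normal(10.5, sigma, n_T)     # new treatment
x_C = rng.normal(10.0, sigma, n_C)     # active control
x_P = rng.normal(5.0, sigma, n_P)      # placebo

# linear contrast whose positivity expresses retention of effect
est = x_T.mean() - theta * x_C.mean() - (1 - theta) * x_P.mean()
se = sigma * np.sqrt(1 / n_T + theta**2 / n_C + (1 - theta)**2 / n_P)
z = est / se
print(f"z = {z:.2f}, one-sided p = {norm.sf(z):.4f}")
```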
Biometrics | 1996
Siu Hung Cheung; Wai-Sum Chan
Tukey's (1953, The Problem of Multiple Comparisons, unpublished report, Princeton University) procedure is widely used for pairwise multiple comparisons in one-way ANOVA. It provides exact simultaneous pairwise confidence intervals (SPCI) for balanced designs and conservative SPCI for unbalanced designs. In this paper, we will extend Tukey's procedure to two-way unbalanced designs. Both the exact and the conservative methods will be introduced. The application of the new procedure is illustrated with sample data from two experiments.
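For the balanced one-way case that the paper extends, recent SciPy versions provide Tukey's procedure directly. The sketch below assumes SciPy 1.8 or later (scipy.stats.tukey_hsd) and made-up data; the two-way unbalanced extension developed in the paper is not implemented there.

```python
# Tukey's pairwise simultaneous comparisons in a balanced one-way layout,
# assuming SciPy >= 1.8 (scipy.stats.tukey_hsd). Data are invented.
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(7)
g1 = rng.normal(5.0, 1.0, 10)
g2 = rng.normal(5.8, 1.0, 10)
g3 = rng.normal(5.1, 1.0, 10)

res = tukey_hsd(g1, g2, g3)
ci = res.confidence_interval(confidence_level=0.95)
print(np.round(res.pvalue, 3))   # pairwise p-value matrix
print(np.round(ci.low, 2))       # lower SPCI bounds for mean differences
```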
Statistics in Medicine | 2010
Koon Shing Kwong; Siu Hung Cheung; Miin-Jye Wen
Step-up procedures have been shown to be powerful testing methods in clinical trials for comparisons of several treatments with a control. In this paper, a determination of the optimal sample size for a step-up procedure that allows a pre-specified power level to be attained is discussed. Various definitions of power, such as all-pairs power, any-pair power, per-pair power and average power, in one- and two-sided tests are considered. An extensive numerical study confirms that square root allocation of sample size among treatments provides a better approximation of the optimal sample size relative to equal allocation. Based on square root allocation, tables are constructed, and users can conveniently obtain the approximate required sample size for the selected configurations of parameters and power. For clinical studies with difficulties in recruiting patients or when additional subjects lead to a significant increase in cost, a more precise computation of the required sample size is recommended. In such circumstances, our proposed procedure may be adopted to obtain the optimal sample size. It is also found that, contrary to conventional belief, the optimal allocation may considerably reduce the total sample size requirement in certain cases. The determination of the required sample sizes using both allocation rules is illustrated with two examples in clinical studies.
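The square root allocation rule itself is simple enough to state in code: with k treatments compared to a control, the control arm receives sqrt(k) times the per-treatment sample size. The sketch below splits a hypothetical total sample size accordingly; the rounding scheme and the numbers are illustrative, and the paper's exact optimal-sample-size computation is not reproduced.

```python
# Square root allocation for k treatments vs. a control: the control arm
# gets sqrt(k) times the per-treatment n. Rounding is illustrative only.
import math

def sqrt_allocation(total_n, k):
    """Split total_n among k treatments (n each) and a control (sqrt(k)*n)."""
    n_treat = total_n / (k + math.sqrt(k))
    n_control = math.sqrt(k) * n_treat
    return round(n_treat), round(n_control)

for k in (2, 3, 5):
    n_t, n_c = sqrt_allocation(total_n=300, k=k)
    print(f"k = {k}: n per treatment = {n_t}, n control = {n_c}")
```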