Janis Hardwick
University of Michigan
Publications
Featured research published by Janis Hardwick.
Journal of Statistical Planning and Inference | 2002
Janis Hardwick; Quentin F. Stout
Optimal designs are presented for experiments in which sampling is carried out in stages. There are two Bernoulli populations and it is assumed that the outcomes of the previous stage are available before the sampling design for the next stage is determined. At each stage, the design specifies the number of observations to be taken and the relative proportion to be sampled from each population. Of particular interest are 2- and 3-stage designs. To illustrate that the designs can be used for experiments of useful sample sizes, they are applied to estimation and optimization problems. Results indicate that, for problems of moderate size, published asymptotic analyses do not always represent the true behavior of the optimal stage sizes, and efficiency may be lost if the analytical results are used instead of the true optimal allocation. The exactly optimal few-stage designs discussed here are generated computationally, and the examples presented indicate the ease with which this approach can be used to solve problems that present analytical difficulties. The algorithms described are flexible and provide for the accurate representation of important characteristics of the problem.
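The exact evaluation underlying such few-stage designs can be illustrated with a minimal two-stage sketch (an assumption-laden illustration, not the paper's algorithm: the function name `two_stage_mse`, the even stage-1 split, and the plug-in Neyman allocation for stage 2 are all hypothetical choices). Stage 1 splits n1 observations evenly between the two populations; after observing the outcomes, the remaining budget is allocated proportionally to estimated standard deviations, and the exact MSE of the difference estimator is obtained by summing over all stage-1 outcomes weighted by their binomial probabilities.

```python
from math import comb

def two_stage_mse(N, n1, p1, p2):
    """Exact MSE of p1_hat - p2_hat for a two-stage design with total
    budget N. Stage 1: n1 observations split evenly. Stage 2: remaining
    budget allocated by plug-in Neyman allocation (proportional to
    estimated standard deviations). Hypothetical illustration of exact
    evaluation by enumerating stage-1 outcomes."""
    m = n1 // 2                      # per-population stage-1 size
    rest = N - 2 * m                 # stage-2 budget
    total = 0.0
    for s1 in range(m + 1):
        for s2 in range(m + 1):
            # probability of observing (s1, s2) successes in stage 1
            prob = (comb(m, s1) * p1**s1 * (1 - p1)**(m - s1)
                    * comb(m, s2) * p2**s2 * (1 - p2)**(m - s2))
            # plug-in standard deviations (0.5 smoothing avoids zeros)
            sd1 = ((s1 + 0.5) * (m - s1 + 0.5)) ** 0.5
            sd2 = ((s2 + 0.5) * (m - s2 + 0.5)) ** 0.5
            extra1 = round(rest * sd1 / (sd1 + sd2))
            n_1, n_2 = m + extra1, m + (rest - extra1)
            # conditional variance of the difference estimator
            total += prob * (p1 * (1 - p1) / n_1 + p2 * (1 - p2) / n_2)
    return total
```

Searching this exact criterion over candidate stage-1 sizes n1 is how one would find the optimal stage split for a given parameter configuration, in the spirit of the computational approach described above.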
Archive | 2001
Janis Hardwick; Quentin F. Stout
Several allocation rules are examined for the problem of optimizing a response function for a set of Bernoulli populations, where the population means are assumed to have a strict unimodal structure. This problem arises in dose response settings in clinical trials. The designs are evaluated both on their efficiency in identifying a good population at the end of the experiment, and on their efficiency in sampling from good populations during the trial. A new design that adapts multi-arm bandit strategies to this unimodal structure is shown to be superior to the designs previously proposed. The bandit design utilizes approximate Gittins indices and shape constrained regression.
Computational Statistics & Data Analysis | 1999
Janis Hardwick; Robert Oehmke; Quentin F. Stout
A program for optimizing and analyzing sequential allocation problems involving three Bernoulli populations and a general objective function is described. Previous researchers had considered this problem computationally intractable, and there appear to be no prior exact optimizations for such problems, even for very small sample sizes. This paper contains a description of the program, along with the techniques used to scale it to large sample sizes. The program currently handles problems of size 200 or more by using a modest parallel computer, and problems of size 100 on a workstation. As an illustration, the program is used to create an adaptive sampling procedure that is the optimal solution to a 3-arm bandit problem. The bandit procedure is then compared to two other allocation procedures along various Bayesian and frequentist metrics. Extensions enabling the program to solve a variety of related problems are discussed.
Biometrics | 2003
Janis Hardwick; Quentin F. Stout
We examine adaptive allocation designs for the problem of determining the optimal therapeutic dose for subjects in early-phase clinical trials. A subject can fail due to lack of efficacy or due to a toxic reaction. Successful subjects will have both a positive response and no toxic side effects. Thus, we seek to maximize the product of the nontoxicity and efficacy dose-response curves. We are interested in sampling rules that perform well along several criteria, including the ethical criterion that, as often as possible, experimental subjects be treated at or close to the maximum in question. Statistically, we wish to identify the optimum dose with high probability at the close of the experiment. Here, we propose designs that combine new allocation policies, directed walks, with new smoothed shape-constrained curve-fitting techniques. These are compared with a variety of other curve-fitting techniques and with up-and-down and equal allocation rules.
SIAM Journal on Scientific Computing | 1999
Janis Hardwick; Quentin F. Stout
Path induction is a technique used to speed the process of making multiple exact evaluations of a sequential allocation procedure, where the options are discrete and their outcomes follow a discrete distribution. Multiple evaluations are needed for determining criteria such as maxima or minima over parameter regions (where the location of the extremal value is unknown in advance), for visualizing characteristics such as robustness, or for obtaining the distribution of a statistic rather than just its mean. By using an initial phase to determine the number of paths reaching each terminal state, the subsequent evaluations are far faster than repeated use of standard evaluation techniques. Algorithms are given for fully sequential and staged sequential procedures, and the procedures can be either deterministic or random. The procedures can be generated by any technique (including dynamic programming or ad hoc approaches), and the evaluations performed can be quite flexible and need not be related to the method of obtaining the procedure. While the emphasis is on path induction, the techniques used to speed up the analyses of staged allocation procedures can also be used to improve backward induction for such procedures. If multiple evaluations need to be carried out, however, path induction will still be far superior. For each parameter configuration to be evaluated, one reduces the time by a factor of n, where n is the size of the experiment, by using path induction rather than the standard technique of backward induction. In some settings the savings is significantly greater than n.
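The forward-counting idea behind path induction can be sketched for a deterministic two-arm procedure (a minimal sketch, not the paper's implementation; the `choose_arm` rule and the state encoding are assumptions). A single forward pass counts the sample paths reaching each terminal state; every subsequent parameter configuration then costs only one sum over terminal states, rather than a fresh backward induction over all states.

```python
def path_induction(n, choose_arm):
    """Forward pass: count sample paths of a deterministic adaptive
    allocation rule that reach each terminal state (s1, f1, s2, f2),
    where s/f are success/failure counts on arms 1 and 2."""
    paths = {(0, 0, 0, 0): 1}
    for _ in range(n):
        nxt = {}
        for (s1, f1, s2, f2), c in paths.items():
            if choose_arm(s1, f1, s2, f2) == 1:
                succ, fail = (s1 + 1, f1, s2, f2), (s1, f1 + 1, s2, f2)
            else:
                succ, fail = (s1, f1, s2 + 1, f2), (s1, f1, s2, f2 + 1)
            nxt[succ] = nxt.get(succ, 0) + c
            nxt[fail] = nxt.get(fail, 0) + c
        paths = nxt
    return paths  # counts over terminal states only

def expected_value(paths, p1, p2, g):
    """Evaluate E[g(terminal state)] at one (p1, p2): a single sum
    over terminal states, weighted by path counts and probabilities."""
    total = 0.0
    for (s1, f1, s2, f2), c in paths.items():
        prob = c * p1**s1 * (1 - p1)**f1 * p2**s2 * (1 - p2)**f2
        total += prob * g(s1, f1, s2, f2)
    return total
```

For example, with an alternating rule and g = total successes, a single `path_induction` pass supports evaluating the expected success count at any grid of (p1, p2) values, which is exactly the multiple-evaluation setting the abstract describes.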
Journal of the American Statistical Association | 1998
Janis Hardwick; Connie Page; Quentin F. Stout
To estimate a success probability p, two experiments are available: individual Bernoulli(p) trials or the product of τ individual Bernoulli(p) trials. This problem has its roots in reliability where either single components can be tested or a system of τ identical components can be tested. A total of N experiments can be performed, and the problem is to sequentially select some combination (allocation) of these two experiments, along with an estimator of p, to achieve low mean squared error (MSE) of the final estimate. This scenario is similar to that of the better-known group testing problem, but here the goal is to estimate failure rates rather than to identify defective units. The problem also arises in epidemiological applications such as estimating disease prevalence. Information maximization considerations, and analysis of the asymptotic MSE of several estimators, lead to the following adaptive procedure: use the maximum likelihood estimator (MLE) to estimate p, and if this estimator is b...
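The two experiment types can be made concrete with a small sketch (hypothetical function names; a simplified setting, not the paper's adaptive procedure). A product experiment succeeds only if all τ components succeed, so it succeeds with probability p^τ; by MLE invariance, the MLE of p from pooled tests alone is the τ-th root of the observed success rate, and data from both experiment types can be combined through the joint log-likelihood.

```python
import math

def mle_pooled(k, m, tau):
    """MLE of p from m pooled experiments, each the product of tau
    Bernoulli(p) trials, with k successes. A pooled test succeeds with
    probability p**tau, so by MLE invariance p_hat = (k/m)**(1/tau)."""
    return (k / m) ** (1.0 / tau)

def loglik_combined(p, s, n, k, m, tau):
    """Joint log-likelihood when both experiment types have been run:
    s successes in n individual trials plus k successes in m pooled
    tests. Maximizing this in p combines the two data sources."""
    q_pool = p ** tau
    return (s * math.log(p) + (n - s) * math.log(1 - p)
            + k * math.log(q_pool) + (m - k) * math.log(1 - q_pool))
```

A sequential allocation rule in this spirit would, at each step, pick the experiment type expected to reduce the MSE of the combined estimate the most; the abstract's procedure does this via information-maximization considerations.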
Sequential Analysis | 1996
Janis Hardwick; Quentin F. Stout
Suppose we wish to estimate the mean of some polynomial function of random variables from two independent Bernoulli populations, the parameters of which, themselves, are modeled as independent beta random variables. It is assumed that the total sample size for the experiment is fixed, but that the number of experimental units observed from each population may be random. This problem arises, for example, when estimating the fault tolerance of a system by testing its components individually. Using a decision theoretic approach, we seek to minimize the Bayes risk that arises from using a squared error loss function. The Bayes estimator can be determined in a straightforward manner, so the problem of optimal estimation reduces, therefore, to a problem of optimal allocation of the samples between the two populations. This can be solved via dynamic programming. Similar programming techniques are utilized to evaluate properties of a number of ad hoc allocation strategies that might also be considered for use in th...
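The dynamic-programming reduction can be sketched for one concrete instance (a small-scale sketch under stated assumptions, not the paper's method: the polynomial is taken to be the product p1·p2, priors are independent Beta, and the function name `optimal_allocation` is hypothetical). Under squared error loss the Bayes estimator is the posterior mean, so the terminal Bayes risk is the posterior variance, and backward induction over posterior-parameter states finds the allocation minimizing its expectation.

```python
from functools import lru_cache

def optimal_allocation(n, a1=1, b1=1, a2=1, b2=1):
    """Backward induction for sequentially allocating n Bernoulli
    observations between two populations with independent Beta(a, b)
    priors, minimizing the expected terminal posterior variance of
    p1 * p2 (the Bayes risk under squared error loss). States are the
    posterior parameter 4-tuples; returns the optimal Bayes risk."""

    def e1(a, b):            # posterior mean of p
        return a / (a + b)

    def e2(a, b):            # posterior mean of p^2
        return a * (a + 1) / ((a + b) * (a + b + 1))

    @lru_cache(maxsize=None)
    def value(a1, b1, a2, b2, remaining):
        if remaining == 0:   # posterior variance of p1*p2 (independence)
            return e2(a1, b1) * e2(a2, b2) - (e1(a1, b1) * e1(a2, b2)) ** 2
        # expected risk if the next observation comes from population 1
        p = e1(a1, b1)
        r1 = (p * value(a1 + 1, b1, a2, b2, remaining - 1)
              + (1 - p) * value(a1, b1 + 1, a2, b2, remaining - 1))
        # ... or from population 2
        p = e1(a2, b2)
        r2 = (p * value(a1, b1, a2 + 1, b2, remaining - 1)
              + (1 - p) * value(a1, b1, a2, b2 + 1, remaining - 1))
        return min(r1, r2)

    return value(a1, b1, a2, b2, n)
```

The same recursion, with the `min` replaced by a fixed rule, evaluates any ad hoc allocation strategy, which is the comparison the abstract describes.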
Archive | 2001
Janis Hardwick; R. C. Oehmke; Quentin F. Stout
We propose a delayed response model for a Bernoulli 2-armed bandit. Patients arrive according to a Poisson process and their response times are exponential. We develop optimal solutions and compare them to previously suggested designs.
Scientific Programming | 2000
Robert Oehmke; Janis Hardwick; Quentin F. Stout
We present a scalable, high-performance solution to multidimensional recurrences that arise in adaptive statistical designs. Adaptive designs are an important class of learning algorithms for a stochastic environment, and we focus on the problem of optimally assigning patients to treatments in clinical trials. While adaptive designs have significant ethical and cost advantages, they are rarely utilized because of the complexity of optimizing and analyzing them. Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, and while standard approaches do not work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical. Further, our work applies to many other problems involving neighbor recurrences, such as generalized string matching.
Archive | 1994
Donald A. Berry; Janis Hardwick
Combining different types of information is problematic. Some information, such as historical data, should be discounted, perhaps subjectively. We consider merging historical information with data from a clinical trial. The goal is to ascertain treatment effect in the patient population of the clinical trial. We develop a method that uses patient characteristics and partially discounts the historical data. The result is a probability distribution concerning treatment benefit as it depends on patient characteristics. Such a distribution allows for addressing the question of whether additional clinical experimentation is appropriate.