Application of Bayesian Dynamic Linear Models to Random Allocation Clinical Trials
Albert H. Lee III, Edward L. Boone, Roy T. Sabo, Erin Donahue
Abstract
Random allocation models used in clinical trials aid researchers in determining which of the treatments under study provides the best results by reducing bias between groups. Often, however, this determination leaves researchers battling the ethical issue of assigning patients to unfavorable treatments. Many methods, such as Play the Winner and the Randomized Play the Winner Rule, have historically been used to determine patient allocation; however, these methods are prone to increased assignment of unfavorable treatments. Recently a Bayesian method using Decreasingly Informative Priors has been proposed, yet this method can be time consuming if MCMC methods are required. We propose a new method that uses the Dynamic Linear Model (DLM) to increase allocation speed while also decreasing the number of patient allocations necessary to identify the more favorable treatment. Furthermore, a sensitivity analysis is conducted on multiple parameters. Finally, a Bayes Factor is calculated to determine the proportion of the patient budget remaining unused at a specified cutoff, and this is used to determine decisive evidence in favor of the better treatment.

Keywords
Bayes Factor, Dynamic Linear Model, Random Allocation, Clinical Trials, Time Series
Introduction
Clinical trials are controlled methods by which researchers may "obtain sound scientific evidence for supporting the adoption of new therapies in clinical medicine". Clinical trials are defined to consist of at least two groups of patients who are as similar as possible except for the administered treatment, whereby the groups are decided through randomization. Extensive research has been done on the randomization of clinical trials. The most common approach consists of allocating the same number of subjects to each of two treatments. Yet it has been pointed out that this method suffers from ethical issues when one drug is superior, while also possessing a less than adequate parameter estimating ability. Thus one would like to sequentially allocate participants in such a way that the randomization remains preserved, while also skewing participants toward the better treatment. This is known as adaptive allocation.

Methods for adaptively allocating subjects between treatments date to the earliest work on what has become known as the adaptive design. Contributions to adaptive allocation led to the Play the Winner Rule, which allocates patients based on the success of one treatment or the failure of the other. While this method can be a useful substitute for equal allocation, it has lower power when compared to equal allocation models. The Play the Winner Rule was later modified into the Randomized Play the Winner Rule. Further work compared an optimal design against an alternative approach for binary outcomes using a Bayesian framework. Likewise, a Bayesian approach has been used to create "Decreasingly Informative Prior" information to examine how adaptive allocation performs on binary variables.
Each of these aforementioned methods is a type of urn randomization method, and as such, each has binary responses leading to proportional allocation. Another method is the Bayesian Adaptive Design, in which assignment of either treatment or control is conducted through adaptive allocation. Extensive work has been done in this area, including a study that determined the method provided "improved patient outcomes and increased power" along with a "lower expected sample size" in a three arm trial in which one treatment was actually better than the others. Another area in which this method has been used is a Recurrent Glioblastoma trial, which concluded "the use of Bayesian adaptive designs in glioblastoma trials would result in trials requiring substantially fewer overall patients, with more patients being randomly assigned to efficacious arms".

Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, Virginia 23284 USA; Department of Biostatistics, Virginia Commonwealth University, Richmond, Virginia 23284 USA; Levine Cancer Institute, Charlotte, North Carolina 28204 USA
Corresponding author:
Albert H. Lee III, Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, Virginia 23284 USA. Email: [email protected]
Prepared using sagej.cls [Version: 2017/01/17 v1.20]

A lung cancer study utilized a probit model and was found, along with a suitable early stopping rule, to be an ethical design which can be used to improve personalized medicine. Bayesian adaptive design has also been used to design a trial analyzing acute heart failure syndromes, which determined this type of "clinical trial represents an innovative and potentially paradigm-shifting method of studying personalized treatment options for AHFS".

Regardless of the design chosen, patients enter a random allocation study sequentially at different times. Thus patients entering a random allocation study may be considered a set of time series measurements. Furthermore, throughout the trial there will be a total of N patients. Let T be the index set for patient y_t measured among the total of N patients. Because these patients enter the allocation study sequentially, more information regarding allocation to the better treatment is known for later patients than for earlier ones. This allows the researchers to learn more about treatment effectiveness as patients enter the study. With Bayesian allocation designs this becomes a Bayesian learning design; as the information is updated, the Bayesian design learns which treatment is better.

Bayesian adaptive designs use Bayesian updating methods to allocate subjects to treatments. The ability to use the posterior as the prior through repeated updating makes these Bayesian methods "a natural framework for making decisions based on accumulating data during a clinical trial". Furthermore, this updating ability provides, as a fortuitous side effect, "the ability to quantify what is going to happen in a trial from any point on (including from the beginning), given the currently available data".

Bayesian Methods
The basic premise of Bayesian methods is Bayes rule, named after Rev. Thomas Bayes, who postulated that the probability of some unknown parameter θ, given the corresponding observations y, is simply the ratio of the joint density p(θ, y) to the probability of observing the value y. Mathematically,

p(θ | y) = p(θ) p(y | θ) / p(y)    (1)

where p(θ | y) is the posterior, or updated, distribution for θ given some y, and

p(θ) p(y | θ) = p(θ, y)    (2)

Here p(θ) is a prior distribution on the parameters and p(y | θ) is the sampling distribution, such that conditioning on the known y data leads to the posterior distribution. This idea has been extended to time series data. The learning ability available through this updating process has been extended using Dynamic Linear Models (DLM). The DLM uses this Bayesian learning process to update and forecast the observations such that

Y_t = F'_t θ_t + ν_t    (3)
θ_t = G_t θ_{t−1} + ω_t

where

ν_t ∼ N(0, V_t)    (4)
ω_t ∼ N(0, W_t)

Here θ_t is the state (forecast parameter) vector, F_t is a known n × r matrix of independent variables, G_t is a known n × n system matrix, W_t is a known n × n evolution variance matrix, and V_t is a known r × r observational variance matrix.

The prior for the forecast parameter θ_t is found by noting that (θ_{t−1} | D_{t−1}) ∼ N(m_{t−1}, C_{t−1}) for some mean m_{t−1} and variance matrix C_{t−1}. The prior for θ_t may then be seen to be (θ_t | D_{t−1}) ∼ N(a_t, R_t), where a_t = G_t m_{t−1} and R_t = G_t C_{t−1} G'_t + W_t. The one step ahead forecast is calculated as (Y_t | D_{t−1}) ∼ N(f_t, Q_t). Here f_t is the current treatment allocation forecast for the patient, while Q_t is the forecast allocation variance.
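The forecast and posterior recursions can be sketched for a univariate state with scalar F_t, G_t, W_t, and V_t. This is a generic illustration of the DLM cycle, not the authors' simulation code, and all numeric starting values below are arbitrary assumptions.

```python
# One cycle of the univariate DLM recursions: prior -> forecast -> posterior.
# F, G, W, V are scalars here; all numeric values are illustrative.

def dlm_step(m_prev, c_prev, y, F=1.0, G=1.0, W=0.01, V=1.0):
    a = G * m_prev              # prior mean:        a_t = G_t m_{t-1}
    R = G * c_prev * G + W      # prior variance:    R_t = G_t C_{t-1} G_t' + W_t
    f = F * a                   # forecast mean f_t
    Q = F * R * F + V           # forecast variance Q_t
    A = R * F / Q               # adaptive coefficient A_t = R_t F_t / Q_t
    e = y - f                   # one step ahead forecast error e_t = Y_t - f_t
    m = a + A * e               # posterior mean m_t
    C = R - A * Q * A           # posterior variance C_t = R_t - A_t Q_t A_t'
    return m, C, f, Q

# Sequential updating: the posterior at time t becomes the prior at t + 1.
m, C = 0.0, 1.0
for y in [0.8, 1.1, 0.9]:
    m, C, f, Q = dlm_step(m, C, y)
```

With each observation the posterior variance C shrinks, which is the Bayesian learning behavior the allocation scheme relies on.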
The posterior for θ_t is (θ_t | D_t) ∼ N(m_t, C_t), where m_t = a_t + A_t e_t is the current mean matrix, C_t = R_t − A_t Q_t A'_t is the current variance matrix, A_t = R_t F_t Q_t^{−1} is the adaptive coefficient, and e_t = Y_t − f_t is the forecast error.

Random Allocation Methods
Random allocation models have proposed minimizing responses by using

w_A = Q_{At}√f_{Bt} / (Q_{At}√f_{Bt} + Q_{Bt}√f_{At})    if f_{At} < f_{Bt} and Q_{At}√f_{Bt} / (Q_{Bt}√f_{At}) > 1
w_A = Q_{At}√f_{Bt} / (Q_{At}√f_{Bt} + Q_{Bt}√f_{At})    if f_{At} > f_{Bt} and Q_{At}√f_{Bt} / (Q_{Bt}√f_{At}) < 1
w_A = 1/2    otherwise    (5)

w_B = 1 − w_A

as an optimal method to obtain weighted allocation values. However, this design was demonstrated to be slightly flawed when at least one of f_{At} or f_{Bt} is negative. The revised optimal design solution is

w_A = Q_{At}√γ_{Bt} / (Q_{At}√γ_{Bt} + Q_{Bt}√γ_{At})    (6)
w_B = 1 − w_A

where

γ_A = Φ( (f_{At} − f_{Bt}) / √(Q_{At} + Q_{Bt}) ),    γ_B = Φ( (f_{Bt} − f_{At}) / √(Q_{At} + Q_{Bt}) )

Recently, work has examined how a Decreasingly Informative Prior distribution impacts the allocation using each of these equations. The current work uses the DLM to randomly allocate patients to examine these impacts. Yet because the DLM is an updating method, the values of f_{At}, f_{Bt}, Q_{At}, and Q_{Bt} change at each iteration, leading to different weight values based on the starting values.

Algorithm
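The equation (6) weights can be computed directly once the per-arm forecasts and forecast variances are in hand. The sketch below is a minimal implementation of that formula; the function name and the convention that a smaller forecast response is favorable (matching the later sensitivity analysis, where the arm with mean 0 is the better arm) are assumptions of the illustration.

```python
import math

def Phi(x):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def allocation_weights(fA, fB, QA, QB):
    """Weights in the style of equation (6): forecasts fA, fB and
    forecast variances QA, QB for the two arms."""
    s = math.sqrt(QA + QB)
    gamma_A = Phi((fA - fB) / s)
    gamma_B = Phi((fB - fA) / s)
    wA = QA * math.sqrt(gamma_B) / (QA * math.sqrt(gamma_B) + QB * math.sqrt(gamma_A))
    return wA, 1.0 - wA
```

With equal forecasts and equal variances both γ values are 0.5 and the weights reduce to 0.5 each; as f_B rises above f_A, γ_B grows toward 1 and w_A increases, skewing allocation toward arm A.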
To generate the allocation values:

1. Initiate the DLM by selecting initial values for µ_A, µ_B, ω_t, C_{tA}, C_{tB}, Q_{tA}, Q_{tB}.
2. Calculate the predicted values f_{At}, f_{Bt} and variances Q_{At}, Q_{Bt}.
3. Compute w_A and w_B.
4. Sample a Uniform(0, 1) random variable U and compare it with w_A.
5. If U < w_A, allocate to Treatment A; otherwise allocate to Treatment B.
6. Conduct the experiment and observe y_t.
7. Update the DLM and return to step 2.
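The seven steps above can be sketched end to end as follows. This is a schematic with univariate DLM updates per arm (G_t = 1) and equation (6) style weights; the true effects mu_A and mu_B, the variances, the budget, and the seed are illustrative assumptions, not values taken from the paper.

```python
import math
import random

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def allocate(n=100, mu_A=0.0, mu_B=3.0, sigma=1.0, W=0.1, seed=1):
    """Run n sequential allocations; return per-arm allocation counts."""
    m = {"A": 0.0, "B": 0.0}      # step 1: initial state means
    C = {"A": 1.0, "B": 1.0}      # step 1: initial state variances
    V = sigma ** 2
    counts = {"A": 0, "B": 0}
    random.seed(seed)
    for _ in range(n):
        f = dict(m)                               # step 2: forecasts (G = 1)
        Q = {k: C[k] + W + V for k in C}          # step 2: forecast variances
        s = math.sqrt(Q["A"] + Q["B"])
        gA = Phi((f["A"] - f["B"]) / s)           # step 3: compute the weights
        gB = Phi((f["B"] - f["A"]) / s)
        wA = Q["A"] * math.sqrt(gB) / (Q["A"] * math.sqrt(gB) + Q["B"] * math.sqrt(gA))
        arm = "A" if random.random() < wA else "B"   # steps 4-5: randomize
        counts[arm] += 1
        y = random.gauss(mu_A if arm == "A" else mu_B, sigma)  # step 6: observe
        R = C[arm] + W                            # step 7: DLM update for the
        A = R / (R + V)                           # arm that was just observed
        m[arm] += A * (y - f[arm])
        C[arm] = R - A * A * (R + V)
    return counts
```

With mu_B well above mu_A and a smaller-is-better convention, the weights drift toward arm A and most of the budget lands there, mirroring the switching behavior studied in the sensitivity analysis.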
Simulation Study
Seven scenarios, shown in Table 1, were examined. Simulation sizes of 1,000 and 10,000 were considered and run for several scenarios; the results were almost identical, so to avoid unnecessary computation time the DLM was used to randomly allocate each scenario through 1,000 simulations. Treatment allocation probabilities, total number of allocations in each treatment group, and total number of successes were recorded; however, the current authors have included only the treatment allocation associated with the preferred treatment, shown in Table 2. Although earlier work utilized Bayesian updating to obtain the values of the Decreasingly Informative Prior, each iteration was done manually, leading to a large completion time due to the extensive number of necessary simulation runs. With the DLM these times were greatly reduced. Each scenario was run using RStudio version 1.2.1335 on an ACER computer with an AMD Ryzen 5 2500U with Radeon Vega Mobile Gfx 2.00 GHz processor and 8.00 GB of RAM running Windows 10. Each run took approximately 45 seconds to complete, with the longest run time of 164 seconds corresponding to the budget size N = 200, and the shortest run time of 23 seconds corresponding to a budget size of N = 34.

Table 1.
Simulation Scenarios

Scenario   Difference   Standard Deviation   Planned Sample Budget
1          0            20                   128
2          10           15                   74
3          10           20                   128
4          10           25                   200
5          20           20                   34
6          20           25                   52
7          20           30                   74
The results for a mean difference of 0 and standard deviation of 20 may be seen in Table 2, and a plot of both equal and unequal allocation may be observed in Figure 1. The mean number of allocations was obtained using each method. Notice the mean allocation under the equal allocation method was 63.538, which is as expected given that the probability of allocation to Treatment A was 0.5. This outcome may be observed in Figure 1a, where no allocation differences exist.

Table 2.
Treatment Group Mean Sample Size. Italicized values indicate Treatment B was selected.

Mean Difference   Standard Deviation   Sample Budget   Equation 5 Allocation   Equation 6 Allocation
0                 20                   128
10                15                   74
10                20                   128
10                25                   200
20                20                   34
20                25                   52
20                30                   74
When the DLM was applied to the unequal method, the mean number allocated to Treatment A was 95.255, while the mean number allocated to Treatment B was 32.745. Under the earlier methods, the smaller value was taken to indicate the better allocation; therefore, it appears as though Treatment B is the favorable treatment.

Figure 1. Comparison Between Equal and Unequal Allocation. a) Equal Allocation; b) Unequal Allocation.
An examination of Figure 1b illustrates the allocation probabilities for both Treatment A and Treatment B. Each allocation begins at 0.5; however, depending upon the particular treatment allocated, the weights either increase or decrease. The mean allocation weight for Treatment A was 0.749, while the mean weight for Treatment B was 0.251. The weighted values for Treatment A are seen in Figure 1b as the red line, while those for Treatment B are noticeably the opposite, due to the symmetry between the two weighting schemes. Problematic to these two methods was the fact that, with equal variance, the treatment allocation weights remained at approximately 0.5 under the first method, while under the second the treatment allocation proportions immediately converged. However, determining the behavior of the treatment allocation weights upon varying the parameter values associated with the mean, system variance, and observational variance is important in determining model behavior. By analyzing model behavior through these parameter modifications,
clinical trial researchers can determine the minimum number of subjects necessary to detect the favorable treatment, enabling them to conclude the study earlier and thereby avoid the ethical issues presented by continuing to provide unfavorable treatments.

Therefore, the current authors chose a budget size of 100, and a sensitivity analysis was conducted using various values of µ_B, ω_t, and c_{tB}, while keeping Q_t = 1. The values chosen for µ_B were 1 through 5, leading to the hypotheses

H_0: µ_B = 0    versus    H_A: µ_B ≠ 0,    µ_B = 1, 2, 3, 4, 5    (7)

By keeping Q_t = 1 and using the patient budget size of 100, the values chosen for µ_B represented a 1% to 5% difference between the two treatments. The values for ω_t were chosen as 0.1, 0.01, and 0.001, representing decreased variability between times and thereby increasing certainty about the between-time variability impact. Finally, the values for c_{tB} were chosen to be 0.1, 0.001, and 0.000001, representing increasing knowledge that group B has no effect. Some of the weighted allocation proportion values may be observed in Figure 2. These are not all of the weighted allocation proportion values; they cover each of the µ_B values and each of the ω_t values chosen, but only a single c_{tB} value, to illustrate the impact.

Using a mean of µ_B = 1 with ω_t = 0.1, the mean proportion of allocations to Treatment A was 0.607, while the mean proportion allocated to Treatment B was 0.393, as may be observed in Figure 2a. Furthermore, the mean patient number at which the treatment allocation switched from B to A was 39.749. Compare this to the treatment proportions when ω_t = 0.01 in Figure 2b: here the mean proportion of allocations to Treatment A was 0.595, while the mean proportion allocated to Treatment B was 0.405, and the mean number at which the treatment allocation switched from B to A was 41.281. Finally, letting ω_t = 0.001, one may observe in Figure 2c that the mean proportion of allocations to Treatment A was 0.538 and the mean proportion allocated to Treatment B was 0.462, with a mean switching point from B to A of 46.730.

Next the mean was increased to µ_B = 3 and the analysis was repeated. When using ω_t = 0.1, the mean proportion of allocations to Treatment A was 0.796, while the mean proportion allocated to Treatment B was 0.204, as may be observed in Figure 2g. Interestingly, the mean number at which the treatment allocation switched from B to A decreased from 39.749 using µ_B = 1 to 18.156 using µ_B = 3. When ω_t = 0.01, one may see in Figure 2h that the mean proportion of allocations to Treatment A was 0.753, while the mean proportion allocated to Treatment B was 0.247; the mean number necessary to switch from Treatment B to Treatment A decreased from 41.281 at µ_B = 1 to 24.450 at µ_B = 3. Lastly, when ω_t = 0.001, the mean proportion of allocations to Treatment A was 0.610 and the mean proportion allocated to Treatment B was 0.390, as may be observed in Figure 2i. Once again the mean switching point from B to A decreased, from 46.730 using µ_B = 1 to 39.185 using µ_B = 3; this value is, however, slightly higher than the corresponding value at ω_t = 0.01.

Finally, the output was analyzed when µ_B = 5. When using ω_t = 0.1, the mean proportion of allocations to Treatment A was 0.892, while the mean proportion allocated to Treatment B was 0.108, as may be observed in Figure 2m. The mean number at which treatment allocation switched from B to A was 8.052, much lower than the corresponding values at smaller means. When ω_t was decreased to 0.01, the mean proportion of allocations to Treatment A was 0.832, while the mean proportion allocated to Treatment B was 0.168, as may be observed in Figure 2n; the mean number at which treatment allocation switched from B to A increased from 8.052 to 15.209, approximately twice as many patients as needed at ω_t = 0.1. Lastly, when ω_t was decreased to 0.001, the mean proportion of allocations to Treatment A was 0.669, while the mean proportion allocated to Treatment B was 0.331, as may be observed in Figure 2o; here the mean number at which treatment allocation switched from B to A increased from 15.209 to 33.538. This represents more than double the patients needed when going from ω_t = 0.01 to ω_t = 0.001, and roughly a four-fold increase when going from ω_t = 0.1 to ω_t = 0.001.

It appears clear that as the mean value µ_B for Treatment B increases, the mean allocation proportions converge to higher values, and the mean number of allocations necessary to switch from Treatment B to Treatment A decreases. Yet this effect is counteracted by increasing the certainty around ω_t: decreasing the between-time variability between t_{i−1} and t_i implies a larger patient budget is required to detect the switch from Treatment B to Treatment A.

Figure 2 here.

A Stopping Rule
In an effort to keep this model fully Bayesian, a power analysis was conducted using a Bayes Factor, and the 95% credible intervals along with the medians were calculated. Determination of an appropriate Bayes Factor value has been described previously: a Bayes Factor greater than 100 indicates decisive evidence against the null hypothesis of no difference. However, the opposite notation for the Bayes Factor may be used, whereby the null hypothesis appears in the numerator, yielding

p(H_0 | D) = P(D | H_0) P(H_0) / [ P(D | H_0) P(H_0) + P(D | H_1) P(H_1) ]    (8)

This leads to the Bayes Factor

BF = P(D | H_0) / P(D | H_1)    (9)

and to the suggestion that a Bayes Factor less than 1/100 provides decisive evidence against the null hypothesis and in favor of the alternative hypothesis.
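As a concrete illustration of this null-in-the-numerator stopping logic, the check below uses a simple normal-approximation Bayes factor (known-variance normal likelihood, with a N(0, tau^2) prior on the treatment difference under the alternative) rather than the two-sample t-test form used later; tau, the 1/100 threshold, and the example numbers are assumptions of the sketch.

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a N(mean, sd^2) distribution at x."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def bf01(diff, se, tau=1.0):
    """BF = P(D | H0) / P(D | H1), null on top, with the observed
    difference diff, its standard error se, and H1: delta ~ N(0, tau^2),
    so the marginal under H1 is N(0, se^2 + tau^2)."""
    return normal_pdf(diff, 0.0, se) / normal_pdf(diff, 0.0, math.sqrt(se * se + tau * tau))

def decisive(diff, se, tau=1.0, threshold=1.0 / 100.0):
    """Stop when the null is decisively rejected, i.e. BF01 < 1/100."""
    return bf01(diff, se, tau) < threshold
```

A near-zero observed difference gives a BF above 1 (evidence for the null, keep sampling), while a large standardized difference drives the BF below 1/100 and triggers the stop.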
Figure 2. Comparison of weight allocation proportions for ω_t = 0.1, 0.01, and 0.001 and µ_B = 1, 2, 3, 4, 5, at a single c_{tB} value, with bars representing the uncertainty across simulations. Panels (a)-(c) use µ_B = 1, (d)-(f) µ_B = 2, (g)-(i) µ_B = 3, (j)-(l) µ_B = 4, and (m)-(o) µ_B = 5, with ω_t = 0.1, 0.01, and 0.001 within each row.

The Bayes Factor was calculated using the Bayesian two sample t-test, defined as

BF = T_ν(t | 0, 1) / T_ν(t | √(n_δ) λ, 1 + n_δ σ_δ²)    (10)

where T_ν(· | a, b) denotes a t density with ν degrees of freedom, location a, and scale b. The null-in-numerator notation was chosen as the more appropriate notation, and the stopping criterion was chosen to be a Bayes Factor below 1/100, providing "decisive evidence" in support of the effective treatment. Any decisive Bayes Factor indicated the allocation was 100 times more likely to have switched; likewise, any indecisive Bayes Factor indicated the switch to the better treatment had not occurred. The bold numbers represent the Bayes Factor calculated at the budget size N = 100. The values in parentheses in Table 3 and Table 4 represent the median and 95% credible interval of the sample number required to switch treatments.

Using µ_B = 1 with ω_t = 0.1 and c_{tB} = 0.1, it can be seen that the median switch occurs at 52 (95% credible interval 29, 88) with a decisive Bayes Factor value of 0.009, indicating the allocation was 100 times more likely to have switched to the favorable treatment. However, when ω_t = 0.01 the median switch occurs at 90 (95% credible interval 61, 100)

Table 3.
Non-Covariate Budget Allocation N using µ_B = 1, 2, 3. Each entry gives the 2.5% quantile, median, and 97.5% quantile of the switching point; italicized values indicate a noteworthy Bayes Factor.

(28, 39, 59), (28, 37, 56), (64, 80, 100), (58, 72, 97.025), (100, 100, 100), (75.975, 89.5, 100)
(26, 45, 83), (27, 36, 57), (64.975, 80, 100), (58, 72, 94), (100, 100, 100), (98, 100, 100)
(28, 40, 60), (28, 37, 54), (65, 80, 100), (59, 72, 95.025), (100, 100, 100), (99, 100, 100)

with an indecisive Bayes Factor of 0.327, indicating that at N = 100 the switch to the favorable treatment had not yet occurred. Finally, when ω_t = 0.001, all quantiles were 100, with an indecisive Bayes Factor of 1.000, indicating the more effective treatment had not yet been detected at N = 100 and no switching had occurred.

Using µ_B = 3 with ω_t = 0.1 and c_{tB} = 0.1, the median switch occurs at 37 (95% credible interval 28, 54) with a decisive Bayes Factor of 0.002, indicating the allocation was 100 times
Table 4.
Budget Allocation N using µ_B = 4, 5. Each entry gives the 2.5% quantile, median, and 97.5% quantile of the switching point; italicized values indicate a noteworthy Bayes Factor.

(31, 43, 100), (44, 56.5, 82.025), (57, 69, 88.025)
(31, 42, 100), (45, 56, 77), (80, 89, 100)
(32, 42, 100), (45, 55, 74), (80, 89, 100)

more likely to have switched to the favorable treatment. However, when ω_t = 0.01 the median switch occurs at 72 (95% credible interval 59, 95.025) with an indecisive Bayes Factor of 0.014, indicating that at N = 100 the switch to the favorable treatment had not yet occurred. Finally, when ω_t = 0.001, the median switch occurs at 100 (95% credible interval 99, 100) with an indecisive Bayes Factor of 0.973, also indicating that at N = 100 the switch to the favorable treatment had not yet occurred.

Lastly, using µ_B = 5 with ω_t = 0.1 and c_{tB} = 0.1, the median switch occurs at 42 (95% credible interval 32, 100) with an indecisive Bayes Factor of 0.104, indicating that at N = 100 the switch to the favorable treatment had not yet occurred. However, when ω_t = 0.01 the median switch occurs at 55 (95% credible interval 44, 74) with a decisive Bayes Factor value of 0.001, indicating the allocation was 100 times more likely to have switched to the favorable treatment. Lastly, when ω_t = 0.001, the median switching value was 89 (95% credible interval 80, 100) with an indecisive Bayes Factor value of 0.060, suggesting the switch to the favorable treatment had not occurred at N = 100.

A careful examination of the remaining combinations indicates that for µ_B =
1, 2, and 3 the only decisive Bayes Factors occur at ω_t = 0.1, although the Bayes Factor does appear to diminish in these cases when ω_t = 0.01, yet it remains indecisive. Likewise, at ω_t = 0.001 the Bayes Factors are highly indecisive. However, when analyzing µ_B = 4, the Bayes Factors for ω_t = 0.1 and ω_t = 0.001 are indecisive; interestingly, for µ_B = 4 the scenario ω_t = 0.01 is the only decisive Bayes Factor. The behavior of these results suggests that if one wishes to investigate the impact of a smaller mean and seek definitive results, it is best to have lower certainty about the between-time behavior and use ω_t = 0.1; however, for the larger means a bit more certainty about the between-time variance, ω_t = 0.01, should be used to detect a decisive difference.

Conclusion
Modern computational power has aided researchers by decreasing the amount of time necessary to run large simulations or computationally difficult problems which may arise when using Bayesian methods. Studies such as Bayesian adaptive designs in clinical trials benefit from this increased computational power through decreased completion time, yet some Bayesian adaptive designs remain time consuming. The current application of the DLM to random allocation models illustrates its benefit through both greatly reduced allocation time and a decreased allocation size necessary to determine the most appropriate treatment. Likewise, the corresponding sensitivity analysis illustrates the differing model behaviors and allocation proportions one may expect to see when using the DLM to allocate patients to treatments. Finally, the power analysis conducted provides users the ability to determine the proportion of the available patient budget they may wish to use in determining an appropriate stopping criterion. This should greatly reduce the number of ineffective treatment allocations and allow the most effective treatment to be applied in a more timely manner through a smaller patient budget. However, the current application focuses only on random allocation models with no covariates; the impact of a covariate such as gender or smoking status was not included in this article and will be addressed in a future article. Likewise, the possibility of a multi-arm study could be addressed in future work to determine whether a particular treatment allocation can be removed from the study entirely. Additional future work may also include examining the Bayes Factor stopping criterion from a survival analysis standpoint.
References
1. Sabo RT (2014) Adaptive allocation for binary outcomes using decreasingly informative priors. Journal of Biopharmaceutical Statistics 24(3): 569–578.
2. Donahue EE (2020) Natural Lead-in Approaches to Response-Adaptive Allocation in Clinical Trials. PhD dissertation, Virginia Commonwealth University.
3. Harrison J and West M (1999) Bayesian Forecasting and Dynamic Models. Springer.
4. Zelen M (1969) Play the winner rule and the controlled clinical trial. Journal of the American Statistical Association 64(325): 131–146.
5. Ivanova A (2003) A play-the-winner-type urn design with reduced variability. Metrika 58(1): 1–13.
6. Thompson WR (1933) On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 25(3/4): 285–294.
7. Anscombe F (1963) Sequential medical trials. Journal of the American Statistical Association 58(302): 365–383.
8. Colton T (1963) A model for selecting one of two medical treatments. Journal of the American Statistical Association 58(302): 388–400.
9. Robbins H (1952) Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society 58(5): 527–535.
10. Rosenberger WF (1999) Randomized play-the-winner clinical trials: review and recommendations. Controlled Clinical Trials 20(4): 328–342.
11. Wei L and Durham S (1978) The randomized play-the-winner rule in medical trials. Journal of the American Statistical Association 73(364): 840–843.
12. Wei L et al. (1979) The generalized Polya urn design for sequential medical trials. The Annals of Statistics 7(2): 291