# A Bayesian Redesign of the First Probability/Statistics Course


Jim Albert
Department of Mathematics and Statistics, Bowling Green State University
July 9, 2020

Abstract

The traditional calculus-based introduction to statistical inference consists of a semester of probability followed by a semester of frequentist inference. Cobb (2015) challenges the statistical education community to rethink the undergraduate statistics curriculum. In particular, he suggests that we should focus on two goals: making fundamental concepts accessible and minimizing prerequisites to research. Using five underlying principles of Cobb, we describe a new calculus-based introduction to statistics based on simulation-based Bayesian computation.

At many American universities, a number of introductory statistics courses are taught at the undergraduate level in several departments. These courses can be divided into three general types. One type is the introductory statistics course taught to satisfy a quantitative literacy requirement. This course has a low prerequisite (typically college algebra) and is intended to introduce the student to the science of statistical practice. This course covers basic tenets of exploring data, sampling distributions, and statistical inference. A second type of course is the "methodological" statistics course taught in applied science departments. This course also has a relatively low prerequisite, but the focus is to introduce the student to the particular statistical methods that may be used in the student's research. For example, a psychology student might be introduced to statistical inference procedures for means and proportions and regression modeling.

The third type of course, the focus of this paper, is the two-semester probability and statistics course taught to students in a mathematics or mathematics and statistics department. In our department, the prerequisite is multivariate calculus. This course provides an introduction to calculus-based probability and statistical inference. The text for the author's probability and statistics course (taken over 40 years ago) was Mendenhall and Scheaffer, and the current 7th edition of the text (Wackerly, Mendenhall and Scheaffer (2008)) remains popular. Table 1 displays the titles of the 16 chapters of the Wackerly et al (2008) text. The first half of the course is a traditional introduction to probability including discrete, continuous, and multivariate distributions. The chapters on functions of random variables and sampling distributions naturally lead into statistical inference.
The inferential material includes point estimation and hypothesis testing, regression models, design of experiments and ANOVA models, and categorical and nonparametric methods. Note that the current edition of the text includes a final chapter introducing Bayesian methods.

Table 1: Outline of a traditional calculus-based probability and statistics course

| Chapter | Title | Chapter | Title |
|---|---|---|---|
| 1 | What is Statistics? | 9 | Point Estimation |
| 2 | Probability | 10 | Hypothesis Testing |
| 3 | Discrete Random Variables | 11 | Linear Models |
| 4 | Continuous Variables | 12 | Designing Experiments |
| 5 | Multivariate Distributions | 13 | Analysis of Variance |
| 6 | Functions of Random Variables | 14 | Categorical Data |
| 7 | Sampling Distributions | 15 | Nonparametric Statistics |
| 8 | Estimation | 16 | Bayesian Methods |

Concerns About the Traditional Calculus-Based Course

For some students, this traditional calculus-based probability/inference course may be the first introduction to statistical thinking. Thinking about the potential audience, there are a number of concerns with the syllabus for this traditional course as laid out in Table 1.

Introduction to modern statistics?

Looking at the chapter titles in Table 1, there seems to be a disconnect between the inferential material and modern statistics. There is little discussion of methods for exploring data and graphical representation, although these exploration activities are important for the modern statistician. Although there is some benefit in discussing methods of estimation (such as maximum likelihood and method of moments) and optimal inference (such as the concept of a best hypothesis test), there is little discussion of statistical learning and the simulation-based inferential methods popular in modern statistics. The inferential topics in Wackerly et al (2008) are essentially the same as the topics discussed in the first edition of the text.

What does a one-semester student learn?

Some of the issues described above become more relevant for the student who is able to take only one semester of this probability/inference course. A semester of probability is certainly fine for many purposes, but the student will not be introduced to any data analysis or inferential procedures in this single semester.

The preparation for high school teachers

Many of the students in the calculus-based probability/inference course are prospective math teachers at the secondary education level. They need to be trained in the concepts of probability and statistics that are described in a set of standards for each state. For example, the state of Ohio has a document, "Ohio's Learning Standards for Mathematics," that states explicitly what skills and knowledge students should attain at each grade level from K through 12th grade. Looking at the list of skills, one sees an emphasis on techniques of data analysis in the early grades; concepts of probability are introduced in the middle grades (grades 6 through 8), and topics in statistical inference are introduced in the final two years of schooling. It is clear that prospective math teachers need a solid introduction to methods of data analysis, with less attention to some of the probability and inferential topics shown in Table 1. It appears that these math education majors are not well served by the traditional probability and statistics course.

A suitable second course?

Many statistics educators have realized that the traditional calculus-based probability/inference course does not sufficiently cover all of the elements of "modern" statistics. So there has been some discussion about an appropriate second course that would follow the probability/inference course. Some possible second courses are regression, categorical data analysis, multivariate statistical analysis, machine learning, nonparametric statistics, design of experiments, and statistical computing/computational statistics.

Elements of data science?

The traditional probability/inference course may use software in the computation of probabilities or the implementation of statistical procedures, but there is typically little attention in this course to statistical programming. In contrast, due to the availability of "big data" and the interest in data science, the modern statistician needs to be fluent in statistical programming. In a typical exercise, one uses a language such as R or Python to import large datasets, performs operations such as defining new variables, filtering, and arranging so that the data is in a suitable form, and then implements a suitable statistical analysis.
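The import/transform/analyze workflow just described can be sketched in a few lines. This is a minimal illustration using pandas; the dataset, file contents, and column names are invented for the example.

```python
# A minimal sketch of the import / define-variables / filter / arrange /
# analyze workflow described above. The tiny CSV and its columns are
# hypothetical, standing in for a large imported dataset.
import io
import pandas as pd

csv = io.StringIO("""subject,score1,score2,group
1,70,75,A
2,80,72,B
3,90,88,A
4,60,65,B
""")

df = pd.read_csv(csv)                           # import the data
df["gain"] = df["score2"] - df["score1"]        # define a new variable
df = df[df["gain"].notna()]                     # filter rows
df = df.sort_values("group")                    # arrange
summary = df.groupby("group")["gain"].mean()    # a simple statistical summary
print(summary)
```

The same pipeline style carries over directly to R with the tidyverse verbs (mutate, filter, arrange, summarize).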

The development of any new statistics course should be consistent with current thinking of the faculty dedicated to teaching statistics at the undergraduate level. Recently, the American Statistical Association (ASA) commissioned a workgroup to formulate guidelines for undergraduate instruction in statistics. The report (Chance et al (2014)) presents the following five principles that should be recognized in the development or revision of undergraduate programs in statistics.

- The scientific method and the statistical problem solving cycle. Students should be exposed to all steps of the scientific method in solving statistical problems. That includes formulating the question, collecting appropriate data to address the question, the exploration stage where one performs the statistical analysis, and the communication of the findings.

- Real applications. Students should learn statistics through exposure to real data, that is, data that has been collected to learn about an authentic and relevant applied problem.

- Focus on problem solving. In statistics, it is common to focus on teaching statistical procedures, but the instruction should also be on teaching general principles that will allow students to ask questions and learn new statistical methods in the future.

- Increasing importance of data science. Given the new interest in data science, statistical instruction needs to show an awareness of the easy accessibility of large datasets and some of the opportunities in using these data in teaching principles of statistics.

- Creative approaches to new curricular needs. The ASA believes that there should be opportunity to offer statistical education to a variety of programs in different disciplines. So there should be flexibility in terms of content, level, and instructional method depending on the needs of the students.

George Cobb (2015) recently wrote an influential paper that argues that we need to deeply rethink our undergraduate statistics curriculum from the ground up. Toward this general goal, Cobb proposes the following "five imperatives" that will hopefully help the process of creating this new curriculum.

Imperative 1: Flatten prerequisites

Cobb argues that the traditional course in mathematical statistics is the final class of a five-course sequence (Calc 1 to Calc 2 to Calc 3 to Probability to Mathematical Statistics), and this really limits the enrollments in statistics. In our consulting work, we don't believe it is necessary to take multiple courses in the applied discipline in order to provide statistical help. Instead, we apply a "just in time" approach where we learn what we need to know about the applied field to provide the service to the client. A similar just-in-time approach can be used in teaching a statistics course. This approach would encourage faculty to think about the necessary skills and concepts and think creatively in the design of new courses.

Imperative 2: Seek depth

Here Cobb is saying that it is desirable to strip away the technical details so that the student can see the fundamental concepts of the discipline. At the undergraduate level, it is desirable to communicate central concepts, such as the Central Limit Theorem, in words and in a simple way.

Imperative 3: Embrace computation

Since computation is such an integral part of the statistics discipline, it should be introduced early in introductory classes. Given the ready availability of numerical methods, Cobb suggests that computation can be used to motivate statistical concepts. We should not restrict attention to teaching statistical procedures with closed-form recipes. By using computation freely, the classroom can resemble applied statistical work, which routinely uses computation.

Imperative 4: Exploit context

In applied data analysis, the context provides meaning to an abstract statistical procedure. Cobb describes standard uses of context such as interpretation, motivation, and direction. In addition, he describes the building up of knowledge from first an illustration, then the abstraction. Cobb illustrates the usefulness of teaching how to recognize abstract structure in teaching experimental design.

Imperative 5: Teach through research

Cobb states that our job as statistics instructors (using his words) "is not to prepare students to use data to answer a question that matters; our job is to help them use data to answer a question that matters." Teaching through research means that the students should be actively involved in the statistics learning cycle. Cobb gives different illustrations of research-based learning at different levels, from lower-level undergraduate courses through first-year graduate courses.

Given the popularity of Bayesian thinking in applied statistics, a large number of Bayesian texts are currently available. These books have different purposes and target audiences, and one can put many of the texts in the broad classes "introductory Bayes," "computational Bayes," "graduate-level Bayes," and "applied Bayes."

The purpose of the introductory Bayes texts is to introduce statistical inference to a broad audience assuming only knowledge of college algebra. A second type of Bayesian text focuses on Bayesian computational algorithms together with software to implement these algorithms. Other texts focus on the use of Bayesian software, such as the BUGS and WinBUGS MCMC programs, for a variety of Bayesian models. Graduate-level Bayesian texts provide a broad perspective on Bayesian modeling, including detailed descriptions of inference for single and multiparameter models. Other Bayesian texts have a narrower focus, communicating to an audience in a particular applied discipline. These books don't describe the Bayesian models in mathematical detail, but they are good at making sense of the Bayesian procedures for applied problems.

Several general comments can be made based on this introduction. First, there is a strong desire to redesign the traditional probability and statistics course for students with a calculus background. Second, there is a need for innovation in statistics instruction, as documented by the ASA working group. Last, Cobb believes that a radical rethinking of statistics instruction is necessary and focuses on five imperatives that should guide this pedagogical innovation.

There are good reasons for introducing the Bayesian perspective at the undergraduate level. First, many people believe that the Bayesian approach provides a more intuitive and straightforward introduction to statistical inference than the frequentist approach. Second, given the large growth of Bayesian applied work in recent years, it is desirable to introduce the undergraduate student to some modern Bayesian applications of statistical methodology. Specifically, it is desirable to teach this first statistics course from a Bayesian perspective. As documented by the large number of Bayesian texts, there exists plenty of Bayesian instructional material that one can use in the development of this text. Also, there is an increasing amount of Bayesian computational resources, as documented by the large number of Bayesian packages in R. (See the Bayesian Task View on CRAN.)

This paper describes the components of a Bayesian redesign of this calculus-based statistics course in the context of the recommended guidelines in undergraduate statistics instruction. We revisit Cobb's five imperatives and discuss how each of these imperatives can be addressed in the Bayesian statistics class. Recently I had the opportunity to develop a Bayesian thinking course for data scientists. This "Beginning Bayes" course is reviewed, and it is shown how this course addresses some of the recommended guidelines in statistics instruction. Last, we describe the current status of our "Probability and Bayesian Modeling" text.

Here we revisit the five main principles described by Cobb (2015) in the process of rethinking our undergraduate statistics curriculum. In particular, we describe how these principles are implemented in a Bayesian redesign of the first calculus-based statistics class.

Flatten prerequisites

Traditionally, a student will take a course in Bayesian inference after he or she has taken a probability and inference course taught from a traditional inferential perspective. So this graduate course in Bayes is implicitly requiring a prerequisite of a traditional inference course where the student gains some familiarity with statistical procedures such as the t-test, ANOVA, and multiple regression. But this knowledge of traditional inference is certainly not necessary before a Bayesian class. Actually, it could be argued that the mix of knowledge of frequentist and Bayesian procedures can be confusing due to the differing interpretations of inferential procedures under the two paradigms.

Using discrete priors

The student does need a course providing a foundation of probability theory before taking a Bayesian course, but much of the Bayesian paradigm can be communicated with a limited probability background. For example, much of the Bayesian material in the introductory texts Berry (1996) and Albert and Rossman (2001) is based on discrete probability distributions. For example, inference about a single proportion is introduced by means of a discrete prior placed on a set of plausible proportion values. In a similar fashion, inference about a single mean (sampling variance known) is introduced by means of a discrete prior on a set of values of the population mean.

Even inferential comparison methods can be communicated by the use of discrete distributions. In the comparison of proportions, one places a discrete prior on each proportion, say $p_1$ and $p_2$, and assuming independence, the joint prior is a discrete distribution over a grid of pairs of proportion values $(p_1, p_2)$. In the case where one has a strong belief that the proportions are equal, one can adjust the prior probabilities along the diagonal values where $p_1 = p_2$. Inference is achieved by computing the products (likelihood times prior) for all points and then normalizing the products to obtain the posterior probabilities. Figure 1 displays a graphical representation of the prior and posterior distributions for two proportions with a uniform prior and 4 out of 12 successes in the first sample and 6 out of 12 successes in the second sample.

Figure 1: Graphs of the prior and posterior of two proportions using a discrete prior.
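The grid calculation behind this kind of figure can be sketched directly. The following is a minimal illustration, with an assumed grid of proportion values, a uniform prior over pairs, and the 4-of-12 and 6-of-12 data from the example above.

```python
# A sketch of the two-proportion discrete-prior calculation: a uniform
# prior on a grid of (p1, p2) pairs, multiplied by the likelihood and
# normalized. The grid spacing is an illustrative assumption.
import numpy as np
from scipy.stats import binom

p = np.arange(0.05, 1.0, 0.05)            # plausible proportion values
P1, P2 = np.meshgrid(p, p, indexing="ij") # grid of (p1, p2) pairs
prior = np.full(P1.shape, 1.0)            # uniform prior over the grid
prior /= prior.sum()

like = binom.pmf(4, 12, P1) * binom.pmf(6, 12, P2)
post = prior * like                        # products: prior times likelihood
post /= post.sum()                         # normalize to posterior probabilities

print("P(p1 < p2 | data) =", post[P1 < P2].sum())
```

Summaries such as P(p1 < p2), P(p1 = p2), and P(p1 > p2) are then just sums of posterior probabilities over the corresponding regions of the grid.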

Using simulation

Simulation provides another attractive "flattened prerequisites" strategy in presenting Bayesian inference. One aspect of Bayesian inference is that summaries of a posterior distribution are expressible as integrals which may be difficult to evaluate, especially for multiparameter problems. One way to avoid the integration issue is to simulate a large number of values from the posterior distribution. Then posterior summaries are expressed as data summaries of the simulated posterior sample. Given that students typically have some skills in exploring and summarizing data, these same skills can be applied to the posterior "data".

Simulation can be introduced for problems where a conjugate prior of a familiar functional form is available. For example, using a beta prior with binomial data leads to a beta posterior, and summaries of the posterior can be found by taking a simulated sample from a beta distribution. Other problems may not have convenient conjugate priors, but Gibbs sampling can provide a straightforward way of simulating a posterior sample by successively simulating from conditional posterior distributions. For example, in the familiar situation where one is learning about the mean $\mu$ and variance $\sigma^2$ from normal sampling, then assuming a prior proportional to $1/\sigma^2$, the posterior density is proportional to
$$\frac{1}{(\sigma^2)^{n/2+1}} \exp\left(-\frac{1}{2\sigma^2}\sum_i (y_i - \mu)^2\right).$$
In this setting, by recognizing that the distributions $[\mu \mid \sigma^2]$ and $[\sigma^2 \mid \mu]$ have common functional forms, one can construct a Gibbs sampler based on normal and inverse-gamma sampling.

The Gibbs sampler can be viewed as an introduction to the class of Markov chain Monte Carlo (MCMC) algorithms routinely used by applied scientists in Bayesian analyses. Once the student gets some basic understanding of Markov chains, then it is desirable to introduce a general class of algorithms, such as the Metropolis-Hastings random walk, that can be used for a wide variety of inferential problems.
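The normal-sampling Gibbs sampler described above can be sketched in a few lines: draw $\mu$ from its normal conditional and $\sigma^2$ from its inverse-gamma conditional, in alternation. The data below are simulated purely for illustration.

```python
# A minimal Gibbs sampler for the normal (mu, sigma^2) model with the
# noninformative prior proportional to 1/sigma^2. The conditionals are
# [mu | sigma^2] ~ N(ybar, sigma^2/n) and
# [sigma^2 | mu] ~ inverse gamma(n/2, sum((y - mu)^2)/2).
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(10.0, 2.0, size=50)   # illustrative simulated data
n, ybar = len(y), y.mean()

S = 5000
mu, sigma2 = np.empty(S), np.empty(S)
s2 = y.var()                          # starting value for sigma^2
for j in range(S):
    m = rng.normal(ybar, np.sqrt(s2 / n))      # draw from [mu | sigma^2]
    ss = np.sum((y - m) ** 2)
    s2 = 1.0 / rng.gamma(n / 2, 2.0 / ss)      # draw from [sigma^2 | mu]
    mu[j], sigma2[j] = m, s2

print("posterior mean of mu is approximately", mu.mean())
```

Summarizing the matrix of simulated (mu, sigma2) draws then uses exactly the data-summary skills mentioned above.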
Some introductory statistics texts introduce Bayesian inference, but much of the discussion is devoted to a definition of the prior, likelihood, and posterior and the mechanics of computing the posterior and inference for some standard examples. In the calculus-based introductory class, it is desirable to teach other components of the Bayesian paradigm.

Prior elicitation

One advantage of the Bayesian perspective is the opportunity to input expert opinion through the prior distribution. So it is desirable for the course to include some strategies for constructing one's prior when one has substantive prior information. In the case where one has little prior knowledge, some discussion about suitable noninformative or vague priors is needed.

Sensitivity of inference with respect to model assumptions

A Bayesian model consists of a choice of prior and sampling density. The prior is an approximation to a person's "true" prior that would be obtained after a long period of elicitation, and likewise a particular sampling density is chosen for convenience. When there is doubt about the accuracy of these approximations, the Bayesian paradigm provides a convenient mechanism to explore the sensitivity of a particular inference to these choices of prior and sampling density. This course should demonstrate this type of Bayesian sensitivity analysis for different inferential problems.

Model checking and model comparison

Any Bayesian inference is conditional on the model assumptions, including the prior and sampling density. When there are several Bayesian models under consideration, one can use the marginal likelihoods of the models to compare how well they fit the data. The Bayes factor, the ratio of two marginal likelihoods, can be used to compare a pair of models. For a single model, diagnostics can be performed by exploring the posterior predictive density of a testing function. In an introduction to Bayesian thinking, one should illustrate the use of posterior predictive checking, and also demonstrate the formal selection of models by use of Bayes factors.
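A Bayes factor calculation is tractable by hand in the beta-binomial setting, where the marginal likelihood has the closed form m(y) = C(n, y) B(a + y, b + n - y) / B(a, b). The sketch below compares two illustrative priors for the same binomial data; the priors and data are invented for the example.

```python
# A sketch of a Bayes factor computation via closed-form marginal
# likelihoods for binomial data with beta priors. The two priors compared
# (uniform vs. concentrated near 1/2) are illustrative choices.
from math import comb
import numpy as np
from scipy.special import betaln

def log_marginal(y, n, a, b):
    # log marginal likelihood of y successes in n trials, Beta(a, b) prior
    return np.log(comb(n, y)) + betaln(a + y, b + n - y) - betaln(a, b)

y, n = 7, 10
bf = np.exp(log_marginal(y, n, 1, 1) - log_marginal(y, n, 10, 10))
print("Bayes factor (uniform prior vs. concentrated prior):", bf)
```

A handy classroom check: under the uniform Beta(1, 1) prior the marginal likelihood is exactly 1/(n + 1) for every y.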

Hierarchical modeling

One of the Bayesian success stories is the use of hierarchical or multilevel modeling when one wishes to simultaneously estimate parameters from several groups. In regression situations, it is common to want to combine regression estimates from several groups, and hierarchical modeling is an effective way to achieve partial pooling of the separate regression estimates. By use of simple examples, such as the simultaneous estimation of several proportions, one can demonstrate the value of these multilevel models.

In the traditional Bayesian course, much of the material is devoted to the derivation of posterior and predictive distributions for conjugate problems. Although these derivations are helpful in understanding how the prior information and data are combined in a posterior, they don't introduce the student to the modern MCMC simulation algorithms currently used in Bayesian research. It is important that the students gain some basic understanding of simulation-based Bayesian computation.

Simulation of posterior and predictive distributions can be introduced in Bayesian computations from the simplest to the most sophisticated models. There are several advantages to the use of simulation in this context. Summarizing a sample of simulated draws from the posterior is analogous to the task of summarizing a large sample of data, and so data analysis skills can be applied to the Bayesian inference setting. It is straightforward to perform inference about a transformation of the parameter vector, say $h(\theta)$. If one has a sequence of simulated draws $\{\theta^{(j)}\}$ from the posterior $g(\theta \mid \text{data})$, the sample of values $\{h(\theta^{(j)})\}$ will be distributed according to the posterior of $h(\theta)$. Last, simulation can be used to simulate draws from the posterior predictive distribution. In a Bayesian model where $y$ is distributed according to the sampling model $f(y \mid \theta)$ and $\theta$ has a prior $g(\theta)$, a simulated draw from the predictive density is obtained by first simulating $\theta$ from $g$, calling the simulated draw $\theta^*$, and then simulating $y$ from the distribution $f(y \mid \theta^*)$. One use of simulating replicated samples from the posterior predictive distribution is in model checking.

Discrete Bayes
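Before turning to the mechanics, here is the two-step predictive simulation just described, sketched in code: draw theta-star from g, then draw y from f(y | theta-star). The beta prior and binomial sampling model are illustrative choices.

```python
# A sketch of simulating from the predictive distribution: first simulate
# theta* from the beta density g, then simulate y from the binomial
# sampling model f(y | theta*). The prior parameters and sample size are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
a, b, n = 4, 4, 20                       # illustrative prior and sample size

theta_star = rng.beta(a, b, size=10000)  # step 1: theta* ~ g(theta)
y_pred = rng.binomial(n, theta_star)     # step 2: y ~ f(y | theta*)

print("predictive mean of y:", y_pred.mean())
```

Replicated samples drawn this way from the posterior predictive distribution are exactly what posterior predictive model checking compares against the observed data.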

To illustrate simulation in a basic setting, suppose one is interested in learning about a proportion $p$ and a discrete prior assigns probabilities $\{g(p_j)\}$ to the set $\{p_j\}$. If one observes $y$ successes in $n$ trials, then the posterior probabilities are proportional to the products $\{g(p_j)\, p_j^y (1 - p_j)^{n-y}\}$. One simulates from the posterior by sampling with replacement from $\{p_j\}$, where the sampling probabilities are proportional to $\{g(p_j)\, p_j^y (1 - p_j)^{n-y}\}$. In R this can be done with a single application of the sample function.

Conjugate priors
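The one-line sampling idea described above translates directly to code; np.random.choice plays the role of R's sample function here. The grid, prior, and data are illustrative assumptions.

```python
# Discrete-Bayes posterior simulation: posterior probabilities
# proportional to g(p_j) * p_j^y * (1 - p_j)^(n - y), then sampling the
# grid with replacement (the analogue of R's sample()).
import numpy as np

p = np.array([0.2, 0.4, 0.6, 0.8])       # grid of proportion values p_j
g = np.array([0.25, 0.25, 0.25, 0.25])   # discrete prior g(p_j)
y, n = 12, 20                             # observed successes and trials

prod = g * p**y * (1 - p)**(n - y)        # prior times likelihood
post = prod / prod.sum()                  # normalized posterior probabilities

rng = np.random.default_rng(0)
draws = rng.choice(p, size=5000, replace=True, p=post)
print("posterior probabilities:", post.round(3))
```

Summaries of the simulated draws (means, quantiles, tail probabilities) then approximate the corresponding exact posterior summaries.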

As mentioned earlier, in introducing Bayesian thinking, it is helpful to describe several conjugate problems where the prior and posterior have familiar distributions such as the beta, gamma, or normal. For example, if a beta($a$, $b$) prior is assigned to a proportion $p$ and if $y$ successes and $n - y$ failures are observed, the posterior for $p$ is beta with updated shape parameters $a^* = a + y$ and $b^* = b + n - y$. A 90% probability interval can be found by extracting quantiles from a beta($a^*$, $b^*$) density. Alternatively, one can obtain this interval by simulating, say, 1000 draws from a beta($a^*$, $b^*$) distribution and finding sample quantiles of the simulated draws $\{p^{(j)}\}$.

One criticism of this simulation approach is that one is introducing error by simulation, and this simulation-based probability interval is only an approximation to the exact probability interval found using beta quantiles. However, the simulation approach has several advantages in this conjugate setting. Simulation is seen as a general strategy for computing posterior distributions, and one can assess the accuracy of simulation-based posterior summaries by comparing the simulation summaries with the exact summaries. Moreover, one can illustrate the ease of computing the posterior of a function of the parameter, say the odds $h(p) = p/(1-p)$, by simply transforming the simulated values of $p$ from the posterior.

Normal approximations

Once the student has some familiarity with the normal distribution, then by use of the Laplace approximation (Kass and Raftery, 1995), the normal distribution can be seen as a quick and convenient approximation to a posterior distribution of several parameters. Once this approximation is developed, one can perform posterior calculations by drawing a large sample from the approximate normal distribution.
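The Laplace idea can be demonstrated in one dimension: locate the posterior mode numerically, use the curvature of the log posterior there as the precision, and sample from the resulting normal. The beta posterior used as the target below is an illustrative choice.

```python
# A sketch of the Laplace (normal) approximation to a posterior: mode from
# a numerical optimizer, precision from a numerical second derivative of
# the negative log posterior, then sampling from the matching normal.
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 13.0, 9.0                          # illustrative beta posterior

def neg_log_post(p):
    return -((a - 1) * np.log(p) + (b - 1) * np.log(1 - p))

opt = minimize_scalar(neg_log_post, bounds=(0.01, 0.99), method="bounded")
mode = opt.x                               # posterior mode, (a-1)/(a+b-2)

h = 1e-5                                   # central second difference
curv = (neg_log_post(mode + h) - 2 * neg_log_post(mode)
        + neg_log_post(mode - h)) / h**2
sd = 1.0 / np.sqrt(curv)                   # approximate posterior sd

rng = np.random.default_rng(2)
draws = rng.normal(mode, sd, size=10000)   # sample from the approximation
print("mode:", mode, "sd:", sd)
```

For several parameters the same recipe applies with the mode and the inverse Hessian of the log posterior in place of the mode and 1/curvature.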

Markov chain Monte Carlo methods

Once the student has been exposed to simulation as a general Bayesian computational tool, MCMC can be introduced as a general method for simulating a Markov chain in situations where the posterior distribution is in a less tractable form. A Markov chain can be introduced in the simple setting where one has a finite collection of states. In continuous-parameter settings, the random walk Metropolis algorithm is an attractive method for setting up a Markov chain.
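A random walk Metropolis sampler fits in a dozen lines, which is part of its pedagogical appeal. The target below (a Beta(13, 9) kernel on the proportion scale) and the proposal scale are illustrative choices.

```python
# A minimal random walk Metropolis sketch for a one-parameter posterior.
# Proposals are normal steps around the current value; a proposal is
# accepted with probability min(1, posterior ratio).
import numpy as np

def log_post(p):
    # log posterior up to an additive constant (Beta(13, 9) kernel)
    if p <= 0 or p >= 1:
        return -np.inf                    # zero posterior outside (0, 1)
    return 12 * np.log(p) + 8 * np.log(1 - p)

rng = np.random.default_rng(3)
S, scale = 20000, 0.15
draws = np.empty(S)
current = 0.5
for j in range(S):
    proposal = current + scale * rng.standard_normal()   # random walk step
    if np.log(rng.uniform()) < log_post(proposal) - log_post(current):
        current = proposal                                # accept
    draws[j] = current                                    # else keep current

print("posterior mean is approximately", draws[5000:].mean())
```

Discarding an initial stretch of the chain as burn-in and inspecting trace plots of the draws are natural classroom follow-ups.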

Software

A number of general-purpose software programs are available for Bayesian MCMC computation, such as OpenBUGS, JAGS, NIMBLE, and Stan, and the student should be introduced to the use of one of these programs. The main task in the use of these programs is the specification of a script defining the Bayesian model; the Bayesian fitting is then implemented by a single function that inputs the model description, the data, and any tuning parameters of the algorithm.

There are several benefits to the use of this general-purpose software. First, by writing the script defining the full Bayesian model, the student gets a deeper understanding of the sampling and prior components of the model. Second, for sophisticated models such as hierarchical models, this software lowers the bar for students to implement these methods. The focus of the students' work is not the computation but rather the summarization and interpretation of the MCMC output.

All of the aspects of a Bayesian analysis are communicated best through interesting case studies or extended examples. In a good case study, one describes the background of the study and the inferential or predictive problems of interest. A Bayesian class should include several engaging applied examples where one describes the construction of the prior to represent expert opinion, the development of the likelihood (which preferably does not fit a standard form), and the use of the posterior distribution to address the questions of interest. Indeed, one aspect of memorable Bayesian texts is the inclusion of particular examples that help in motivating the description of the methodology.

When I think about notable Bayesian texts, I think about some of their motivating examples. In particular, Link and Barker (2009) use a number of interesting ecological case studies and applications. When I reflect on Gelman and Hill (2007), I think about the interesting use of political examples to illustrate multilevel modeling. McElreath (2015) also has engaging examples; I use the coffee shop waiting time in my own workshops to illustrate multilevel modeling. Efron and Morris' example of predicting end-of-season batting averages for 18 players in Efron and Morris (1975) is still used in modern texts to illustrate hierarchical modeling. Although the data are over 40 years old, the popularity of this example illustrates the power of a good example in communicating statistical concepts.

Bayesian thinking is best learned in the context of one's own statistical study. In a student's own project, he or she will think carefully about the choice of prior that approximately represents his or her beliefs, and a likelihood function will be chosen for the particular type of data that is collected. The student collects data. By exploration of the posterior, he or she will address the question of interest and compare the conclusion with the prior opinion.

A Bayesian project can be implemented in a statistics class at any level. Albert (2000) illustrates the use of a sample survey project in an introductory statistics class. In this project, the student performs a survey to learn about a particular population proportion. The student uses a discrete prior on the proportion to represent his or her opinion and, after collecting data, computes the posterior on the discrete set of proportion values. This exercise demonstrates the process of adjusting one's opinion after observing data.

Chapter 17 of Gelman and Nolan (2017) describes several activities where the students are engaged in Bayesian thinking. In Section 17.1, students are asked to construct a 50 percent subjective probability interval for the number of quarters in a jar. In Section 17.2, students are asked to estimate the kidney cancer death rates for a number of counties in a particular state. The challenge is to learn about the underlying true cancer rates when one has observed rates from counties based on different sample sizes. Although these activities are described in a classroom setting, they could be modified to create mini-projects for the students.

The Beginning Bayes Course

Goals of Course

DataCamp is a company that offers short courses in topics in data science and modeling using the R and Python scripting languages. In a typical course requiring approximately four hours of effort, the student will view a series of instructional videos and get practice applying the concepts using an online system in either R or Python. The students who enroll in DataCamp courses come from a wide range of backgrounds. Many are employed in positions that require some data science experience, and they are taking specific courses to learn particular skills in data science or modeling. The general goal of the "Beginning Bayes" course was to introduce Bayesian thinking to data scientists assuming only a minimal background in probability. There was some discussion about DataCamp offering a more sophisticated Bayesian regression course using Stan software, and so this course was viewed as a possible introductory Bayesian course that would be a prerequisite to the Bayesian regression course. Using Cobb's terminology, this course is the ultimate "flattened prerequisites" course, requiring little mathematics preparation from the student.

As in all courses offered by DataCamp, the instructional module consists of a series of chapters, where each chapter contains a series of instructional videos and online exercises using the R statistical system. An R package, TeachBayes, was written to accompany this course. The package includes functions for visualizing probability calculations for beta and normal distributions, and functions to facilitate Bayes' rule calculations (such as bayesian_crank and two_p_update).

Course Topics

Topic 1: Introduction to Bayesian thinking

In this first chapter, discrete probability distributions are introduced using a random spinner defined by a set of spinner areas. When there are several plausible spinners that may be spun, Bayes' rule is used to learn about the spinner identity from the observed spinner values. In this chapter, the student is introduced to the notions of prior, likelihood, and posterior. The chapter concludes with an illustration of sequential learning, where the posterior distribution on the spinners after one spin becomes the prior distribution for the next spin.
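To make the spinner calculation concrete, here is a minimal sketch of this kind of discrete Bayes' rule update. The course itself uses R and the TeachBayes package; this Python version, with made-up spinner probabilities, mirrors the same prior-times-likelihood computation and the sequential use of one posterior as the next prior.

```python
# Two hypothetical spinners; each list gives P(region 1), P(region 2), P(region 3)
spinners = {
    "A": [0.50, 0.25, 0.25],
    "B": [0.25, 0.25, 0.50],
}

def update(prior, observed_region):
    """One turn of the Bayes' rule crank: posterior ∝ prior × likelihood."""
    unnorm = {s: prior[s] * spinners[s][observed_region] for s in prior}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

prior = {"A": 0.5, "B": 0.5}   # the two spinners are equally plausible a priori
post1 = update(prior, 0)       # observe a spin landing in region 1
post2 = update(post1, 0)       # sequential learning: post1 is the prior for spin 2
```

Repeated spins in region 1 push the posterior probability of spinner A, the spinner that favors region 1, toward one.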

Topic 2: Learning about a binomial probability

The second chapter presents the familiar problem of learning about a binomial probability. It starts with the use of a discrete prior together with Bayes' rule. Then a beta prior is used to represent prior knowledge about a continuous-valued proportion, and a special R function is used to find the shape parameters of the beta prior that match the knowledge of two percentiles. The TeachBayes package is used to find probabilities and percentiles for the beta distribution, and Bayesian interval estimates and tests are found from these summaries. Simulation is also introduced as an alternative way of performing posterior calculations.

Topic 3: Learning about a normal mean

The third chapter describes Bayesian inference for a normal mean when the sampling variance is assumed known. This chapter follows the same basic structure as Chapter 2. One starts with a discrete prior, where one specifies a list of plausible values of the mean, and computes posterior probabilities by Bayes' rule. Then the normal prior is introduced, and an R function normal_update is used to find the mean and standard deviation of the normal posterior. Simulation is introduced to facilitate posterior computations about the mean, and it is also used to simulate future observations from the predictive distribution.

Topic 4: Bayesian comparisons

The final chapter introduces the comparison of proportions, inference about a normal mean when the sampling standard deviation is unknown, and Bayesian regression. For comparing two proportions, a discrete prior over a grid is used to represent prior opinion, and inference is another application of Bayes' rule. Next, independent beta distributions are used to represent opinion when the proportions are continuous-valued, and simulation is used to learn about the difference in proportions. The arm package is used to illustrate Bayesian regression calculations. The normal mean inference problem is a special case of a regression model with only a constant term, and the two-group model is a regression model with an indicator variable for the group covariate. In each case one simulates from a Bayesian regression model with a noninformative prior. By transforming the matrix of simulated draws, one can perform inference for a normal percentile and a standardized group effect.
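The simulation step in the two-proportion comparison can be sketched as follows (hypothetical data; Python used in place of the course's R code). One samples from the independent beta posteriors and forms differences, so any question about p1 − p2 becomes a question about the simulated draws.

```python
import random

random.seed(2)

# Made-up data: 30/50 successes in group 1, 20/50 in group 2,
# each proportion given a uniform beta(1, 1) prior
a1, b1 = 1 + 30, 1 + 20   # posterior shape parameters for p1
a2, b2 = 1 + 20, 1 + 30   # posterior shape parameters for p2

# Simulate from the independent beta posteriors and form differences
draws1 = [random.betavariate(a1, b1) for _ in range(50_000)]
draws2 = [random.betavariate(a2, b2) for _ in range(50_000)]
diffs = [d1 - d2 for d1, d2 in zip(draws1, draws2)]

mean_diff = sum(diffs) / len(diffs)                      # posterior mean of p1 - p2
prob_p1_larger = sum(d > 0 for d in diffs) / len(diffs)  # P(p1 > p2 | data)
```

Quantiles of `diffs` give an interval estimate for the difference, and `prob_p1_larger` answers the comparison question directly, with no sampling-distribution theory required.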

Probability and Bayesian Modeling

Monika Hu and I recently completed (Albert and Hu (2019)) an undergraduate text that is a Bayesian redesign of the calculus-based probability and statistics sequence. Table 2 gives the table of contents of our text. I had previously written (Albert (2015)) a data analysis and probability text for prospective middle-school and high-school teachers of mathematics, and some of the material from that text is used for the probability content of our book.

As can be seen from Table 2, the probability material in Chapters 1 through 6 resembles the material for a traditional probability course, including foundations, conditional probability, and discrete and continuous distributions. There are some notable omissions relative to the Wackerly et al (2008) probability content. There is relatively little discussion of distributions of functions of random variables, and there is little discussion of the common families of distributions, with the exception of the coin-tossing distributions (binomial and negative binomial) and the normal.

Table 2: Table of Contents of Proposed Bayesian Course

Chapter  Title                            Contents
1        Probability: A Measurement      interpretations of probability,
         of Uncertainty                  probability axioms, assigning probabilities
2        Counting Methods
3        Conditional Probability         includes Bayes' rule
4        Probability Distributions       focus on coin-tossing models
5        Continuous Distributions        includes normal distribution
                                         and Central Limit Theorem
6        Joint Probability               discrete and continuous
         Distributions                   distributions
7        Learning About a Proportion     Bayes' rule with discrete models;
                                         beta prior, posterior inference
8        Learning About a Mean           Bayes' rule with discrete models;
                                         normal prior, inference and prediction
9        Simulation by MCMC              Gibbs sampler, Metropolis-Hastings,
                                         JAGS software
10       Bayesian Hierarchical Modeling  exchangeable models for
                                         means and proportions
11       Simple Linear Regression
12       Multiple Regression
         and Logistic Models
13       Case Studies                    text analysis, hierarchical
                                         regression, latent class models
For obvious reasons, there is limited discussion of sampling distributions, although the Central Limit Theorem is introduced as one application of the normal distribution.

Although there are applications of Bayes' rule in the probability chapters, the main Bayesian inferential material begins in Chapters 7 and 8 with a discussion of inferential and prediction methods for a single binomial proportion and a single normal mean. The foundational elements of Bayesian inference are described in these two chapters, including the construction of a subjective prior, the computation of the likelihood and posterior distributions, and the summarization of the posterior for different types of inference. Predictive distributions are described both for predicting future data and for implementing model checking.

Since the remaining material in the text is heavily dependent on simulation algorithms, Chapter 9 provides an overview of Markov chain Monte Carlo (MCMC) algorithms with a focus on the Gibbs sampling and Metropolis-Hastings algorithms. This chapter begins with an informal discussion of the Metropolis-Hastings random walk to provide some intuition into the logic underlying the algorithm. Once the random walk algorithm is described, one can discuss all of the implementation issues, such as the choices of starting value and number of iterations, and MCMC diagnostic methods. Once there is reasonable confidence that the MCMC sample is an approximate representation of the posterior distribution, the text describes the use of the MCMC sample for different types of inference.

The remaining chapters (Chapters 10 through 13) use JAGS as the computational software for illustrating Bayesian thinking for some popular models. Hierarchical modeling is introduced in Chapter 10 with a focus on simultaneously estimating a set of normal means and a set of binomial proportions. Bayesian regression models are described in Chapters 11 and 12.
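The random-walk Metropolis algorithm that opens Chapter 9 can be sketched in a few lines. This is an illustrative Python stand-in (the text itself moves to JAGS for the later models), targeting a toy posterior, here a standard normal, known only up to a constant; the starting value, proposal scale, and number of iterations are exactly the implementation choices the chapter discusses.

```python
import math
import random

def log_post(theta):
    """Log posterior density up to an additive constant
    (toy example: a standard normal target)."""
    return -0.5 * theta ** 2

def metropolis(start, scale, n_iter, seed=3):
    """Random-walk Metropolis: propose a normal step from the current value,
    accept with probability min(1, post(proposal) / post(current))."""
    random.seed(seed)
    current = start
    draws = []
    for _ in range(n_iter):
        proposal = current + random.gauss(0.0, scale)
        log_ratio = log_post(proposal) - log_post(current)
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            current = proposal          # accept the proposed move
        draws.append(current)           # a rejection repeats the current value
    return draws

draws = metropolis(start=0.0, scale=2.5, n_iter=20_000)
sample_mean = sum(draws) / len(draws)   # approximates the posterior mean, 0
```

Inspecting how the sample mean and spread change with `start`, `scale`, and `n_iter` gives concrete meaning to the diagnostic questions, such as whether the chain has mixed well, that the chapter raises.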
Chapter 11 focuses on prior specification and on posterior and predictive computations for the simple linear regression model with a single input. Chapter 12 extends this model to describe Bayesian multiple regression and logistic regression when the response variable is binary. Chapter 13 describes several case studies, including text mining, simultaneously estimating a series of career trajectories using a hierarchical model, and a latent class analysis.

References

[1] Albert, J. (1995), "Teaching Inference About Proportions Using Bayes and Discrete Models," Journal of Statistics Education, vol. 3, no. 3.
[2] Albert, J. (2000), "Using a Sample Survey Project to Assess the Teaching of Statistical Inference," Journal of Statistics Education, vol. 8, no. 1.
[3] Albert, J. and Hu, J. (2019), Probability and Bayesian Modeling, CRC Press.
[4] Albert, J. and Rossman, A. (2001), Workshop Statistics: Discovery with Data, A Bayesian Approach, Key College Publishing.
[5] Albert, J. (2002), "Teaching Introductory Statistics from a Bayesian Perspective," Proceedings of the Sixth International Conference on Teaching Statistics.
[6] Albert, J. (2009), Bayesian Computation with R, second edition, Springer.
[7] Albert, J. (2015), Data Analysis and Probability for Teachers, unpublished manuscript.
[8] Antleman, G. (1997), Elementary Bayesian Statistics, Cheltenham: Edward Elgar Publishing.
[9] Berry, D. A. (1996), Statistics: A Bayesian Perspective, Duxbury Press.
[10] Berry, D. A. and Lindgren, B. W. (1996), Statistics: Theory and Methods, 2nd edition, Duxbury Press.
[11] Blackwell, D. (1969), Basic Statistics, New York: McGraw Hill.
[12] Carlin, B. and Louis, T. (2008), Bayesian Methods for Data Analysis, 3rd edition, CRC Press.
[13] Chance, B., Cohen, S., Grimshaw, S., Hardin, J., Hesterburg, T., Hoerl, R., Horton, N., Malone, C., Nichols, R. and Nolan, D. (2014), Curriculum Guidelines for Undergraduate Programs in Statistical Science.
[14] Cobb, G. (2015), "Mere Renovation is Too Little Too Late: We Need to Rethink our Undergraduate Curriculum from the Ground Up," The American Statistician, 69:4, 266-282.
[15] DeGroot, M. and Schervish, M. (2011), Probability and Statistics, 4th edition, Pearson.
[16] Efron, B. and Morris, C. (1975), "Data Analysis Using Stein's Estimator and its Generalizations," Journal of the American Statistical Association, 70, 311-319.
[17] Gelman, A. and Nolan, D. (2017), Teaching Statistics: A Bag of Tricks, 2nd edition, Oxford University Press.
[18] Gelman, A., Carlin, J., Stern, H., Dunson, D., Vehtari, A. and Rubin, D. (2013), Bayesian Data Analysis, 3rd edition, CRC Press.
[19] Gelman, A. and Hill, J. (2007), Data Analysis Using Regression and Multilevel/Hierarchical Models, Cambridge University Press.
[20] Gill, J. (2014), Bayesian Methods: A Social and Behavioral Sciences Approach, 3rd edition, CRC Press.
[21] Hoff, P. (2009), A First Course in Bayesian Statistical Methods, Springer.
[22] Jackman, S. (2009), Bayesian Analysis for the Social Sciences, Wiley.
[23] Kass, R. and Raftery, A. (1995), "Bayes Factors," Journal of the American Statistical Association, 90, 773-795.
[24] Lee, P. (2012), Bayesian Statistics: An Introduction, 4th edition, Wiley.
[25] Link, W. and Barker, R. (2009), Bayesian Inference: With Ecological Applications, Academic Press.
[26] Lunn, D., Jackson, C., Best, N., Thomas, A. and Spiegelhalter, D. (2012), The BUGS Book: A Practical Introduction to Bayesian Analysis, Chapman and Hall.
[27] Marin, J. and Robert, C. (2013), Bayesian Essentials with R, 2nd edition, Springer.
[28] McElreath, R. (2015), Statistical Rethinking, CRC Press.
[29] Ntzoufras, I. (2009), Bayesian Modeling Using WinBUGS, Wiley.
[30] Schmitt, S. (1969), Measuring Uncertainty: An Elementary Introduction to Bayesian Statistics, Reading, MA: Addison-Wesley.
[31] Smith, A. F. M. and Gelfand, A. (1992), "Bayesian Statistics without Tears: A Sampling-Resampling Perspective," The American Statistician, 46:2, 84-88.
[32] Wackerly, D., Mendenhall, W. and Schaeffer, R. (2008), Mathematical Statistics with Applications, Thomson Brooks/Cole.
[33] Witmer, J. (2017), "Bayes and MCMC for Undergraduates," The American Statistician, 71:3.

Appendix A Selection of Bayesian Texts

Introductory Bayes

The purpose of the introductory Bayes texts is to introduce statistical inference to a broad audience assuming only knowledge of college algebra. Blackwell (1969) and Schmitt (1969) are a couple of early examples of these introductory Bayes texts. Much of the material in these early texts assumes that the parameter of interest takes only a discrete collection of values. Berry (1996) and Albert and Rossman (2001) follow a similar strategy in using discrete models to introduce Bayesian thinking.

Computational Bayes

A second type of Bayesian text focuses on Bayesian computational algorithms together with software to implement these algorithms. Albert (2009) and Marin and Robert (2013) focus on computational strategies, with illustrations of these Bayesian computations using the R language. Other texts focus on the use of Bayesian software; for example, Lunn et al (2012) and Ntzoufras (2009) illustrate the use of the BUGS and WinBUGS MCMC software for a wide variety of Bayesian models.

Graduate-Level Bayes

Graduate-level Bayesian texts provide a broad perspective on Bayesian modeling, including detailed descriptions of inference for single and multiparameter models. Gelman et al (2013), Carlin and Louis (2008), and Hoff (2009) are good examples of modern texts typically used at the graduate level. Gill (2014) and Jackman (2009) provide a good foundation of Bayesian theory at the graduate level with applications to the social sciences.

Applied Bayes

Other Bayesian texts have a narrower focus, communicating to an audience in a particular applied discipline. For example, Link and Barker (2009) write to an audience in applied ecology research, and McElreath (2015) is aimed at researchers in the natural and social sciences. These books do not describe the Bayesian models in mathematical detail, but they do a good job of making sense of the Bayesian procedures for applied problems.

Appendix B Bayesian R Packages (A Personal List)

• LearnBayes
• TeachBayes
• arm
• rjags
• rstan
• rstanarm
• coda
• bayesplot