Draft version August 28, 2020
Typeset using LaTeX modern style in AASTeX62
A BAYESIAN APPROACH TO THE SIMULATION ARGUMENT
David Kipping
1 Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027, USA
2 Center for Computational Astrophysics, Flatiron Institute, 162 5th Av., New York, NY 10010, USA
ABSTRACT

The Simulation Argument posed by Bostrom (2003) suggests that we may be living inside a sophisticated computer simulation. If posthuman civilizations eventually have both the capability and desire to generate such Bostrom-like simulations, then the number of simulated realities would greatly exceed the one base reality, ostensibly indicating a high probability that we do not live in said base reality. In this work, it is argued that since the hypothesis that such simulations are technically possible remains unproven, statistical calculations need to consider not just the number of state spaces, but the intrinsic model uncertainty. This is achievable through a Bayesian treatment of the problem, which is presented here. Using Bayesian model averaging, it is shown that the probability that we are sims is in fact less than 50%, tending towards that value in the limit of an infinite number of simulations. This result is broadly indifferent as to whether one conditions upon the fact that humanity has not yet birthed such simulations, or ignores it. As argued elsewhere, it is found that if humanity does start producing such simulations, then this would radically shift the odds and make it very probable that we are in fact sims.
Keywords: simulation argument — Bayesian inference
Corresponding author: David Kipping
[email protected]

1. INTRODUCTION

Do we live inside a computer simulation? Skepticism about our perceptions of reality has existed for centuries, such as the "Butterfly Dream" in Zhuangzi or Plato's Cave. But the development of ever more sophisticated computers in the modern era has led to a resurgence of interest in the possibility that what we perceive as reality may in fact be an illusion, simulated in some inaccessible reality above us. Bostrom (2003) formalized this possibility in his simulation argument, suggesting that one of three distinct propositions must be true:

1. "The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero", or

2. "The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero", or

3. "The fraction of all people with our kind of experiences that are living in a simulation is very close to one".

The simulation argument has grown in public attention in recent years, in part due to well-known figures such as Elon Musk expressing support for the idea, with statements such as "there's a billion to one chance we're living in base reality" (Solon 2016). Perhaps as a consequence of this, the media has often described the idea as not just a possibility but in fact a high probability (e.g. Wall 2018; Alexander 2020), which equates to the position of Bostrom (2003) if one rejects propositions 1 and 2. However, this conditional remains unproven, and thus propositions 1 and 2 remain viable and consistent with our knowledge and experience.

The simulation argument is not without counter-argument. One approach to countering the idea is to ask whether a Universe-level simulation is even possible given our understanding of the laws of nature, in other words advocating for proposition 1 (Beane et al. 2014; Ringel & Kovrizhin 2017; Mitchell 2020). For example, Ringel & Kovrizhin (2017) argue that simulating quantum systems is beyond the scope of physical plausibility. This physical argument quickly runs into a more metaphysical obstacle, though, if we concede the possibility that our observations and understanding of physics may in fact be simulated. In such a case, our knowledge of physics is wholly local to the simulation and may have no real bearing on the constraints that affect a parent reality, whose rules and limitations may be entirely different. Moreover, such a detailed quantum-level simulation may not even be necessary to convincingly emulate reality. Reality could be rendered in real-time locally to deceive the inhabitants, rather than attempting to generate the entire system at once. Indeed, deception may not even be necessary: since the sims have no experience/knowledge of base reality, they cannot even judge plainly unnatural phenomena as unphysical.
Propositions 1 and 2 each imply that no Bostrom-like simulations are ever run, and so we group them together into a single physical hypothesis, H_P. As the last of three mutually exclusive propositions, proposition 3 necessarily requires that propositions 1 and 2 are false, making H_P false. We dub this alternative hypothesis H_S (= H̄_P), the hypothesis that Bostrom-like simulations are run by posthuman civilizations.

Certainly, if we reject H_P outright, then H_S would be true by deduction and thus the probability that we are the base civilization is small via proposition 3. But this is a clearly presumptive approach, unless one had some unambiguous evidence that could fully exclude H_P. A rigorous statistical treatment should weigh the hypotheses appropriately and assign probabilities which concede this possibility - to acknowledge our ignorance. This is accomplished through a Bayesian framework of probability, which is presented in what follows.

2. A STATISTICAL ANALYSIS OF THE SIMULATION ARGUMENT

As argued in the introduction, an evaluation of the probability that we live in a simulated reality is best tackled from a Bayesian perspective. The key to Bayesian statistics is Bayes' theorem, which relates conditional probabilities to one another. We note that arguments based on conditional probability theory have been made previously, such as Weatherson (2003). In this work, a deeper Bayesian approach is sought using the methods of model selection and model averaging, in order to more directly evaluate a quantitative measure of the probability that we live in a simulation. To accomplish this, it is necessary to first establish results pertaining to the likelihood of observing reality as we perceive it (our data) for each hypothesis under consideration (in our case H_S and H_P).

2.1. Dreams within Dreams
We begin with the hypothesis H_S, which describes proposition 3. Let us consider that there exists a base civilization, which is responsible for creating a suite of λ simulated realities.

Figure 1. An illustrative depiction of a hypothetical hierarchical framework of simulated realities spawned from a base civilization.

This base civilization is referred to as representing generation g = 1 and its "daughter" simulations as g = 2. Each of these daughter simulations has a probability p of itself going on to create a suite of λ simulations within its simulated reality, dreams within dreams. These parous realities produce a g = 3 generation, which can itself then create more simulations, and so on. This creates a hierarchy of simulations, in which the base civilization - representing the only non-simulated reality - sits at the top, as depicted in Figure 1.

This work treats this hierarchy as representing the ensemble of simulated realities that will ever exist, over all times. It is suggested that it's more constructive to pose the problem in this way because, within the simulated realities, time itself is a construct. It doesn't really make much sense to talk about the "passage of time" within these realities, since they may not align to any real-world chronological definitions. They may occur almost instantaneously, be played out at some scaled version of time, or something in-between, with the simulation paused, sped up, or slowed down at arbitrary points.

Given that our hierarchy represents the ensemble of all realities that will ever exist, one might wonder how deep the rabbit hole goes. We suggest that there should exist some limit to how deep the generations can indeed go, some maximum value for g, denoted by G. This is motivated by the fact that the base civilization is truly responsible for simulating all of the daughters beneath it, and it presumably has some finite computational limit at its disposal.
By spreading this computational power amongst λ second-generation daughters, each of those simulations necessarily has 1/λ less computational resources than the base civilization devoted to creating simulations. In fact, a second-generation civilization has even less capacity than this, since some finite fraction of this resource is being used to generate the reality around it, besides the computers within that reality. More generally, generation g has a computational capacity < λ^{-(g-1)} times that which the base civilization devotes to simulation work.

Given that each generation has less computational power than the last, one might suggest that this implies that λ and p should decrease as g increases. This is certainly possible, but it's not strictly necessary - p and λ could remain approximately constant with respect to g, with the simulations just becoming coarser in fidelity at each level, smaller in volume, cruder in detail. This might be expected if the sims are modeled upon the parent, with similar motivations and judgement. Despite this, even at some very deep generation, the volume and fidelity may be perfectly sufficient to emulate what we would recognize as reality. For this reason, it is argued here that there isn't a good justification for invoking a variable model for λ and p, which would only serve to add more complexity than warranted given our state of knowledge of the system. Nevertheless, the finite computational power will impose some limit on g, denoted by G. Accordingly, from the perspective of the last generation, it is technically impossible to ever build a computer capable of simulating any kind of reality where conscious beings reside.
Their simulations would instead be limited to more simplified programs, surely impressive still, but not sufficient to create beings with the same conscious, self-aware experience of reality that we enjoy.

Before proceeding, we briefly mention that although what follows is a statistical calculation, the model is itself deterministic rather than probabilistic. For example, each parent simulation spawns the same number of simulations. More realistically, we should expect the number to be a random variate drawn from some underlying distribution. Whilst it may be interesting to calculate that more mathematically challenging case, we argue here that it's somewhat unnecessary: given the extremely limited knowledge about the details of the simulated realities, even this simple argument will be sufficient, and again we favour invoking as simple a model as tenable.

2.2. Simulation Counting
It is straightforward to work through the first few generations and evaluate the number of simulations in play. The second generation (g = 2) will contain n_2 = λ simulations, of which pλ will themselves be parous. If each of these pλ simulations yields λ simulations itself, then one arrives at n_3 = pλ² simulations in the third generation. Following this, one may show that in the g-th generation there are n_g = p^{g-2} λ^{g-1} simulations.

We can now calculate the total number of simulated realities using the formula

N_sim = Σ_{g=2}^{G} p^{g-2} λ^{g-1} = (pλ - (pλ)^G)/(p - p²λ) ,  (1)

where G is the total number of generations. Before proceeding, it is useful to calculate a couple of results using this formula. First, within the scenario described, the vast majority of realities are of course simulated. This really forms the basis of the oft-quoted statement that we most likely live in a simulation. Let us write that the probability that we live in a simulation, given a conditional denoted by CES, and also given that hypothesis H_S holds, is

Pr(simulated | CES, H_S) = N_sim/(N_sim + 1) .  (2)

What is this CES conditional? In Bayesian statistics, our inferences are always conditioned upon some data/experience, otherwise one is simply left with the prior beliefs. Since the simulation argument is one of skepticism, it plausibly opens a slippery slope where essentially any conditional information could be treated skeptically. For example, if our data is that X humans have lived thus far in the pre-simulation era, that could be challenged on the basis that our memories and records of how many humans have lived are also simulated, and indeed our existence could be as nascent as a few minutes ago (Russell 1921). In the face of such skepticism, the only conditional upon which we can affirm any confidence is characterized by René Descartes' famous "cogito, ergo sum" - CES.
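The counting argument above is easy to verify numerically. The short Python sketch below - with purely illustrative values for p, λ and G, which the argument itself leaves unspecified - compares a brute-force generation-by-generation count against the closed form of Equation (1), and evaluates the simulated fraction of Equation (2) along with the limiting share of the last generation:

```python
# Numerical check of Equations (1) and (2); parameter values are
# illustrative assumptions only, not quantities fixed by the argument.

p, lam, G = 0.3, 10, 8  # parous fraction, daughters per parent, max depth

# Direct count: n_g = p^(g-2) * lam^(g-1) simulations in generation g >= 2.
N_direct = sum(p**(g - 2) * lam**(g - 1) for g in range(2, G + 1))

# Closed form, Equation (1): geometric series summed from g = 2 to G.
N_closed = (p * lam - (p * lam)**G) / (p - p**2 * lam)
assert abs(N_direct - N_closed) < 1e-6 * N_closed

# Equation (2): probability of being simulated, given CES and H_S.
pr_sim = N_closed / (N_closed + 1)

# The last generation's share of all realities, which tends towards
# 1 - 1/(p*lam) when (p*lam)^G >> 1.
frac_last = p**(G - 2) * lam**(G - 1) / (N_closed + 1)
assert abs(frac_last - (1 - 1 / (p * lam))) < 0.01
```

With these assumed values, pλ = 3 and the hierarchy already contains over ten thousand simulations after eight generations, the bulk of them in the lowest level.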
In this work, the conditional really just describes the fact that we are self-aware, thinking beings that live in some kind of reality, whether it be real or not.

We note that the number of simulations grows exponentially with each generation, such that the last generation, g = G, contains a substantial fraction of the total number, given by

Pr(g = G | CES, H_S) = p^{G-2} λ^{G-1}/(N_sim + 1) ≃ (1 - (pλ)^{-1})/(1 - (pλ)^{1-G}) .  (3)

In the limit of large G, or really that (pλ)^G ≫ 1, this becomes

lim_{(pλ)^G ≫ 1} Pr(g = G | H_S) = 1 - 1/(pλ) .  (4)

Thus, for all pλ ≫ 1, most realities reside in the lowest level of the hierarchy.

2.3. Counting Nulliparous Simulations
Let us now consider that we have access to an additional piece of information: we do not live in a reality that has spawned simulated realities, i.e. we are in a nulliparous reality. Consider first hypothesis H_S. Under this hypothesis, the base civilization must be parous, for if it were nulliparous then there would be no simulated realities and propositions 1 or 2 would be in effect, which are mutually exclusive with proposition 3 (which in turn defines H_S). Since the base civilization is parous, the conditional information that our existence is nulliparous would immediately establish that we are not the base civilization (under the conditional of hypothesis H_S).

One can distinguish between two forms of nulliparity. The first is simply that we live at the bottom of the hierarchy, the sewer of reality. After all, Equation (4) establishes that these base simulations make up the majority of all realities. In such a case, the computational power available to the sentient beings within those realities would simply be insufficient to feasibly ever generate daughter simulations capable of sentient thought themselves. Here, then, proposition 2 is in effect.

As pointed out by Sean Carroll, this poses somewhat of a contradiction (Carroll 2016). If proposition 3 is true, then it is possible to simulate realities, and via Equation (2) we most likely live in a simulation, and yet further, via Equation (4), it is most likely a lowest-level simulation, which is incapable of simulating reality. The conclusion that we most likely live in a reality incapable of simulating reality, yet have assumed simulating reality is possible, forms Carroll's contradiction. We suggest here that this contradiction can be somewhat dissolved by considering that the lowest level may indeed not be capable of generating their own reality simulations, but are plausibly capable of still making very detailed simulations that fall short of generating sentience. Accordingly, they would still suggest that it might be at least possible to simulate realities and arrive at Bostrom's trilemma all the same.
This can be thought of as an example of applying Gott's Copernican Principle (Gott 1993): if most realities are X, then we most likely live in an X-type reality.
The less likely possibility is that we live in some higher level, but in one of the realities that hasn't produced a daughter - which could be because of either proposition 1 or 2.

The total number of simulated realities between levels g = 2 and g = G - 1, multiplied by (1 - p), yields the number of nulliparous simulations which do not reside in the lowest level, given by

(1 - p) (pλ - (pλ)^{G-1})/(p - p²λ) .  (5)

Adding this to the full membership of the lowest generation of the hierarchy yields the total number of nulliparous simulated realities:

N_nulliparous = (1 - p) (pλ - (pλ)^{G-1})/(p - p²λ) + p^{G-2} λ^{G-1} .  (6)

Accordingly, nulliparous simulations represent the following fraction of all realities:

Pr(nulliparous | H_S) = N_nulliparous/(N_sim + 1) ,  (7)

where one can show that in the limit of large G this becomes

lim_{G → ∞} Pr(nulliparous | H_S) = (λ - 1)/λ .  (8)

2.4. The Physical Hypothesis
We have now derived the necessary results to evaluate hypothesis H_S, but when evaluating models in Bayesian statistics it is always necessary to compare them to some alternative(s) - in our case H_P. In this hypothesis, H_P, the probability that we are nulliparous is unity by construction of the hypothesis' definition:

Pr(nulliparous | H_P) = 1 .  (9)

Similarly, it's trivial to also write that

Pr(CES | H_P) = 1 .  (10)
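As a sanity check on the nulliparous counting of Equations (6)-(8), the following sketch (again with illustrative, assumed values of p, λ and G) verifies numerically that the nulliparous fraction approaches (λ - 1)/λ for large G:

```python
# Numerical check that Pr(nulliparous | H_S) -> (lam - 1)/lam for large G.
# Parameter values are illustrative assumptions only.

p, lam, G = 0.3, 10, 30  # parous fraction, daughters per parent, max depth

N_sim = (p * lam - (p * lam)**G) / (p - p**2 * lam)  # Equation (1)

# Equation (6): nulliparous simulations above the lowest level, plus the
# entire (necessarily nulliparous) lowest generation g = G.
N_null = (1 - p) * (p * lam - (p * lam)**(G - 1)) / (p - p**2 * lam) \
         + p**(G - 2) * lam**(G - 1)

pr_null = N_null / (N_sim + 1)                       # Equation (7)
assert abs(pr_null - (lam - 1) / lam) < 1e-3         # Equation (8) limit
```

Even at a modest depth of G = 30, the computed fraction agrees with the asymptotic value of 0.9 (for λ = 10) to better than one part in a thousand.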
2.5. Bayes Factors Conditioned Upon Nulliparity
Let us finally turn to evaluating a Bayes factor, the metric of Bayesian model comparison, between the two models. In what follows, we use the nulliparous observation as a piece of information to condition our inference upon, but will relax this afterwards to explore its impact.

In general, we can write the odds ratio between hypotheses H_S and H_P, conditioned upon some data D, as

O_{S:P} = Pr(H_S | D)/Pr(H_P | D) = [Pr(D | H_S)/Pr(D | H_P)] × [Pr(H_S)/Pr(H_P)] ,  (11)

where the first bracketed term is the Bayes factor. The prior ratio is generally set to unity for models with no a-priori preference between them, such that the odds ratio equals the Bayes factor. This is sometimes dubbed the "Principle of Indifference", argued for by Pierre-Simon Laplace, and can be thought of as a vague prior. In our case, the "data" we leverage is that we are nulliparous when it comes to simulating realities. We may thus write out the Bayes factor in the above using Equations (8) & (9) to give

Pr(nulliparous | H_S)/Pr(nulliparous | H_P) = (λ - 1)/λ .  (12)

Since λ can be an arbitrarily large number, this implies the Bayes factor is close to unity. In other words, it's approximately just as likely that hypothesis H_S is true as the physical hypothesis, given the fact we live in a nulliparous reality. However, since λ is always a finite number, the Bayes factor is in fact < 1, which means that there is a slight preference for the physical hypothesis.

2.6. Understanding the transition from near-certainty to ambiguity
When we condition our inference on the fact that we are a nulliparous reality, and employ a simple but instructive model describing a hierarchical simulated reality, we find that the Bayes factor is close to unity. In other words, there is no statistical preference for the simulation hypothesis over the null hypothesis of a physical reality.

So what changed in the statistical reasoning presented here versus the more commonly quoted conclusion that we are statistically very likely to live in a simulated reality? After all, this is a rather dramatic turnaround in conclusion, given that the simulation hypothesis has conventionally been framed as a statistical argument and that is the line of reasoning used in this work.

There are two modifications to our thinking here that are not usually described in arguments regarding the simulation hypothesis. The first is that we have included this extra conditional information of nulliparity. The second is that we have used Bayesian statistics. So we briefly consider these in turn to evaluate where the argument changed.

2.6.1. Neglecting our nulliparity
Our existence as a nulliparous reality has been used as the data upon which our Bayesian inference is conditioned, but let's now ignore that data and repeat the Bayes factor calculation without it, to see how the conclusions change. In the absence of this information, what other information are we going to condition our inference upon? The only real "data" left is CES.

Before we can write down the revised Bayes factor, we first need to ask: what is the probability of finding ourselves in a reality under the simulation hypothesis, i.e. Pr(CES | H_S)? Of course the answer is one; all simulations in the hierarchy are realities from the perspective of the inhabitants. This conditional was implicitly present in the previous inference too, but now we explicitly write it out since there is an absence of any other information. Similarly, for the physical hypothesis, we have Pr(CES | H_P) = 1.

This means that the Bayes factor is straightforwardly

Pr(CES | H_S)/Pr(CES | H_P) = 1 .  (13)

If we compare this to Equation (12), it's almost identical - there too we obtained nearly even odds between the two hypotheses. Therefore, this reveals that our modification of including the nulliparous information is not responsible for the revised conclusion of ∼even odds - it is the Bayesian treatment of probabilities that must be responsible.

2.6.2. Negating Bayesian model comparison
To demonstrate this, let's see if we can recover the often-claimed conclusion that statistically we are more likely to live in a simulated reality by sheer numbers. This is straightforward to see when one operates under the tacit assumption that H_S is true. If we assert that it is true, then the vast majority of realities are indeed simulated. But the fallacy of this argument is that we have already assumed it's correct, whereas the Bayes factors derived earlier compare the hypothesis that it is/it is not true. This is the key difference driving the radically different conclusions.

To show this, let's now treat H_S as a fixed conditional. We can no longer ask if the simulation hypothesis, as defined by H_S, is true, since it is asserted as so. Instead, we ask what is the probability that g = 1 (we are the first generation in the hierarchy) given that we exist?

Pr(g = 1 | CES, H_S) = 1/(N_sim + 1) ,  (14)

which is very close to zero for large N_sim. If one instead conditions upon nulliparity under H_S, then this simply yields Pr(g = 1 | nulliparous, H_S) = 0.

2.7. Bayesian Model Averaging with the "Cogito, Ergo Sum" Conditional
Thus far we have evaluated: i] the probability that we are the base reality when proposition 3/hypothesis H_S is true (a very small number); ii] the probability that we are in the base reality when H_P is true (which is trivially 1); and iii] the Bayes factor between hypotheses H_S and H_P.

The latter doesn't quite provide a direct answer to the question as to whether we live inside a simulation though, since one of the realities embedded within H_S is real - namely the base reality. Ideally, we would combine these three results to evaluate the probability that g = 1 marginalized over our uncertainty about which hypothesis is correct.

This can be formalized through the use of Bayesian model averaging. Let's say we wish to evaluate the probability that g = 1 (i.e. we are the first generation) as we did in Equation (14), but now we wish to relax the conditional assumption made earlier that H_S is true. Instead, we can calculate the probability that g = 1 under both hypotheses, weighted by the model evidence in their favour - known as Bayesian model averaging. By doing so, we incorporate our ignorance about which model is correct. This essentially looks like a discrete marginalization over hypothesis-space:

Pr(g = 1 | CES) = Pr(g = 1 | CES, H_S) Pr(H_S | CES) + Pr(g = 1 | CES, H_P) Pr(H_P | CES) .  (15)

We may now use results from earlier, including Equation (14), to re-write this as

Pr(g = 1 | CES) = Pr(H_S | CES)/(N_sim + 1) + Pr(H_P | CES) .  (16)

From Equation (13), we have

Pr(CES | H_S)/Pr(CES | H_P) = 1 ,  (17)

and so using Bayes' theorem this becomes
[Pr(H_S | CES)/Pr(H_P | CES)] × [Pr(H_P)/Pr(H_S)] = 1 .  (18)

To make progress, we must assign the prior hypothesis probabilities, whose ratio is typically simply set to unity as an uninformative choice, giving

Pr(H_S | CES) - Pr(H_P | CES) = 0 .  (19)

We also exploit the fact that the sum of all probabilities must equal one, such that

Pr(H_S | CES) + Pr(H_P | CES) = 1 .  (20)

Simultaneously solving the last two equations gives

Pr(H_S | CES) = 1/2 ,  Pr(H_P | CES) = 1/2 .  (21)

And now finally plugging this back into Equation (16) gives

Pr(g = 1 | CES) = 1/2 + 1/(2(N_sim + 1)) .  (22)

From the above, we have that the probability that we live in the base reality, g = 1, is one-half plus some additional term which depends on the number of simulations in the simulation hypothesis, N_sim. Since N_sim ≥ 0, then Pr(g = 1 | CES) ≤ 1, yet Pr(g = 1 | CES) > 1/2 for all N_sim, asymptotically tending towards one half in the limit of large N_sim. On this basis, it is in fact more likely that we live in the g = 1 base reality than a simulation, although it may be only very slightly more preferable depending on one's assumptions for N_sim.

2.8. Bayesian Model Averaging with the Nulliparous Conditional
For completeness, we will now repeat the previous subsection but replace the conditional "cogito, ergo sum" with the nulliparous case:

Pr(g = 1 | nulliparous) = Pr(g = 1 | nulliparous, H_S) Pr(H_S | nulliparous) + Pr(g = 1 | nulliparous, H_P) Pr(H_P | nulliparous) .  (23)

In this case the likelihoods are binary, giving

Pr(g = 1 | nulliparous) = 0 × Pr(H_S | nulliparous) + 1 × Pr(H_P | nulliparous) .  (24)

The Bayes factor between the two models, from Equation (12), can be written (in the limit of large G) as

[Pr(H_S | nulliparous)/Pr(H_P | nulliparous)] × [Pr(H_P)/Pr(H_S)] = (λ - 1)/λ ,  (25)

which again we simplify by invoking even a-priori odds between the models:

λ Pr(H_S | nulliparous) = (λ - 1) Pr(H_P | nulliparous) .  (26)

We combine this with the fact that the sum of the probabilities equals one, as before, to write that

Pr(g = 1 | nulliparous) = Pr(H_P | nulliparous) = 1/(2 - λ^{-1}) .  (27)

Note that in the simulation hypothesis, within which λ finds its definition, the λ term is ≥ 1. Examination of our formula indeed reveals, as before, that 1/2 < Pr(g = 1 | nulliparous) ≤ 1, with the formula asymptotically tending towards a half (but always remaining greater than it) for large λ. As before, then, the conclusion is that we are more likely to be living in the g = 1 reality, although perhaps only with slight preference.

2.9. What if we were parous?
Although we are presently not a parous reality, it is interesting to consider how the results derived thus far would change if tomorrow we began producing Bostrom-like simulations. For hypothesis H_P, it is simple to write that

Pr(parous | H_P) = 0 .  (28)

Through Bayesian model averaging, we have that the probability of g = 1, with the condition of being parous, is given by

Pr(g = 1 | parous) = Pr(g = 1 | parous, H_S) Pr(H_S | parous) + Pr(g = 1 | parous, H_P) Pr(H_P | parous) .  (29)

The second row goes to zero, since Pr(H_P | parous) ∝ Pr(parous | H_P) = 0 via Equation (28):

Pr(g = 1 | parous) = Pr(g = 1 | parous, H_S) Pr(H_S | parous) .  (30)

Since Pr(H_P | parous) = 0, then Pr(H_S | parous) = 1 by the requirement that the probabilities sum to one, and so

Pr(g = 1 | parous) = Pr(g = 1 | parous, H_S) .  (31)

Under hypothesis H_S, we have already calculated that a fraction (λ - 1)/λ are nulliparous via Equation (8). Accordingly, one minus this are parous, which equals 1/λ. The total number of parous realities is therefore N_sim/λ. Only one of these is the base reality, and so we have

Pr(g = 1 | parous) = λ/N_sim .  (32)

For large G, N_sim ≫ λ and thus this probability approaches zero. Accordingly, if we become a parous reality - in other words, if we start producing Bostrom-like simulations - the probability that we live in a simulated reality radically shifts from just below one-half to approaching unity.

3. DISCUSSION

In this work, we have divided the three propositions of Bostrom (2003) into two hypotheses: one where simulated realities are produced (H_S), and one where they are not (H_P). Comparing the models with Bayesian statistical methods, it is found that the Bayes factor is approximately unity, with a slight preference towards H_P. Whilst the Bayes factor can be objectively stated without the need to assign any priors, the odds ratio between the two models depends on the prior model probabilities, Pr(H_S)/Pr(H_P). A standard choice is to assume all models are a-priori as likely as each other, but this could be challenged as being too generous to model H_S, on the basis that it is an intrinsically far more complex model.

If one goes further and assigns a value to the ratio of the prior model probabilities, then one can use Bayesian model averaging to marginalize over the models, weighted by their posterior probabilities.
If one does not penalize the model H_S for its complexity and simply assigns even a-priori odds, then it is still found that the probability we live inside a simulation - after marginalizing over the model uncertainties - is not the favored outcome, with a probability less than 50%. As the number of simulations grows very large, this probability tends towards 50%, and thus it is argued here that the most generous probability that can be assigned to the idea that we live inside a simulation is one half.
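The three headline numbers of this work can be reproduced with a short numerical sketch. The parameter values below are purely illustrative assumptions, and even prior odds Pr(H_S) = Pr(H_P) are adopted throughout, as in the text:

```python
# Model-averaged probability of being the base (g = 1) reality under the
# three conditionals considered: CES, nulliparous, and parous.
# Parameter values are illustrative assumptions only.

p, lam, G = 0.3, 10, 30

N_sim = (p * lam - (p * lam)**G) / (p - p**2 * lam)  # Equation (1)

# Equation (22): CES conditional - slightly above one half.
pr_base_ces = 0.5 + 0.5 / (N_sim + 1)

# Equation (27): nulliparous conditional - also slightly above one half.
pr_base_null = 1 / (2 - 1 / lam)

# Equation (32): parous conditional - approaches zero for large G.
pr_base_parous = lam / N_sim

assert pr_base_ces > 0.5 and pr_base_null > 0.5
assert pr_base_parous < 1e-10
```

The sketch confirms the qualitative story of the Discussion: under either the CES or nulliparous conditional the base reality is marginally favored, but becoming parous would collapse that probability towards zero.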
REFERENCES
Beane, S. R., Davoudi, Z., & Savage, M. J. (2014). Constraints on the universe as a numerical simulation. The European Physical Journal A, 50, 148. doi:10.1140/epja/i2014-14148-0
Bostrom, N. (1999). The doomsday argument is alive and kicking. Mind, 108, 539-551. doi:10.1093/mind/108.431.539
Bostrom, N. (2003). Are We Living in a Computer Simulation? Philosophical Quarterly, 53, 243-255.
Carter, B. (1983). The anthropic principle and its implications for biological evolution. Phil. Trans. Roy. Soc., A310, 347-363. doi:10.1098/rsta.1983.0096
Gott, J. R. (1993). Implications of the Copernican principle for our future prospects. Nature, 363, 315-319. doi:10.1038/363315a0
Korb, K. B. & Oliver, J. J. (1998). A refutation of the doomsday argument. Mind, 107, 403-410. doi:10.1093/mind/107.426.403
Lampton, M. (2020, Feb 29). Doomsday: A Response to Simpson's Second Question. Retrieved from https://arxiv.org/abs/2003.00132
Mitchell, J. B. O. (2020). We are probably not Sims. Science and Christian Belief, 32, 45-62. uri:10023/19794
Poundstone, W. (2019). The Doomsday Calculation: How an Equation that Predicts the Future Is Transforming Everything We Know About Life and the Universe. New York, NY: Little, Brown Spark
Richmond, A. M. (2016). Why Doomsday Arguments are Better than Simulation Arguments. Ratio, 29, 221-238. doi:10.1111/rati.12135
Ringel, Z. & Kovrizhin, D. (2017). Quantized gravitational responses, the sign problem, and quantum complexity. Sci. Adv., 3, e1701758. doi:10.1126/sciadv.1701758
Russell, B. (1921). The Analysis of Mind. London: George Allen & Unwin
Weatherson, B. (2003). Are You a Sim? The Philosophical Quarterly, 53, 425-431.