Emergent deterministic systems∗

Ian T. Durham†
Department of Physics, Saint Anselm College, Manchester, NH 03102
(Dated: July 5, 2018)

According to quantum theory, randomness is a fundamental property of the universe, yet classical physics is mostly deterministic. In this article I show that it is possible for deterministic systems to arise from random ones and discuss the implications of this for the concept of free will.
What does chance ever do for us? —William Paley (1743-1805)
I. CLASSIFICATION OF PROCESS
What role does chance play in the evolution of the universe? Until the development of quantum mechanics, the general consensus was that what we perceived as chance was really just a manifestation of our lack of complete knowledge of a situation. In a letter to Max Born in December of 1926, Einstein wrote of the new quantum mechanics, "The theory says a lot, but does not really bring us any closer to the secret of the 'old one.' I, at any rate, am convinced that He does not throw dice." [6]. Yet all attempts to develop a deterministic alternative to quantum mechanics have thus far failed. At the most fundamental level, the universe appears to be decidedly random. How is it, then, that the ordered and intentional world of our daily lives arises from this randomness? Intuitively, we tend to think of randomness as being synonymous with unpredictability. But the very nature of the term 'unpredictable' implies agency since it implies that an agent is actively making a prediction. I am not interested here in whether or not the universe requires an agent to make sense of it. I am interested in understanding whether it is possible for intentionality to arise naturally from something more fundamental and less ordered.

While we typically think of the universe as being a collection of 'things'—particles, fields, baseballs, elephants—it is the processes that these things participate in that make the universe interesting. A process need not require an active agent. A process is simply a change in the state of something. This definition of process is similar to the concept of a test from operational probabilistic theories (see [2] for a discussion of such theories). Similar to such theories, then, we can define a deterministic process as being one for which the outcome can be predicted with certainty. To put it another way, a deterministic process is one for which there is only a single possible outcome.
∗ This article was submitted under the title 'God's Dice and Einstein's Solids' to the Spring 2017 FQXi essay contest.
† [email protected]
Note: It would seem necessary to define 'state' here, but given the limitations on length we will leave that for another essay.

A random process must then be an unpredictable change. That is, a process for which there is more than one possible outcome is said to be random if all possible outcomes are equally likely to occur. It is important to note that there is a difference between the likelihood of making an accurate prediction of the outcome of a process and one's confidence in that prediction. Confidence can be quantified as a number between 0 (no confidence) and 1 (perfect confidence). Likelihood is just the probability that a given outcome will occur for a given process. The outcomes of a random process are all equally likely to occur and, in such cases, one's confidence in accurately predicting the correct outcome should be zero.

It is worth noting here the difference between determinism and causality. The two concepts are often confused. Indeed, D'Ariano, Manessi, and Perinotti have argued that this confusion has led to misinterpretations of the nature of EPR correlations [4]. Determinism and randomness represent the extremes of predictability; a fully causal theory can accommodate both. To put it another way, in a fully causal theory a random process must still have a cause. Or, as D'Ariano, Manessi, and Perinotti have shown, it is possible to have deterministic processes without causality. The difference is that determinism is related to the outcomes of a given process whereas causality is related to the actual occurrence of that process, regardless of whether it is deterministic or random.

Of course, many real processes are neither random nor deterministic. There may be more than one possible outcome for a process, but those outcomes may not be equally likely to occur. What do we call such processes?
For a two-outcome process whose outcomes have a 51% and 49% likelihood of occurrence respectively, one might be tempted to refer to it as 'nearly random.' On the other hand, if those same likelihoods were 99% and 1% respectively, one might be tempted to say the process was 'nearly deterministic.' But what if they were 80% and 20%, or 60% and 40%? At what point do we stop referring to a process as 'nearly deterministic' or 'nearly random'? We need less arbitrary language. One suggestion would be to refer to such in-between cases as 'probabilistic.' But this is misleading since we can still assign probabilities to the outcomes of random and deterministic processes; they are no less probabilistic than any other process.

A solution presents itself if we consider the aggregate, long-term behavior of such processes. As an example, consider that casinos set the odds on games of craps—a game that is neither random nor deterministic—under the assumption that they will make money on these games in the long run and (crucially) that the amount of money they will make is reliably predictable within some acceptable range of error. So the process of rolling a pair of dice (which is all that craps is) is at least partially deterministic to a casino. But now consider a game with two outcomes, A and B, whose respective probabilities of occurring are 50.5% and 49.5%. Could a casino set up a system by which they could, within some range of error, make a long-term profit on this game, even if that profit is very small? Suppose it costs $100 to play this game and that a player receives their $100 back plus $102 in winnings if outcome B occurs, but nothing if outcome A occurs. Suppose also that, on average, the casino expects 10,000 people to play this game each year. That means that, on average, they will pay out $504,900 a year in winnings (4,950 winners at $102 each) but keep $505,000 a year in forfeited fees (5,050 losers at $100 each), leaving them with $100 in profit (on average). Though this is ridiculously low, the crucial point is that it is not zero.
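The bookkeeping above can be checked with a few lines of arithmetic. This is a minimal sketch of the expected-value calculation; the 4,950/5,050 split is the expected split for 10,000 players, and the interpretation that winners also get their fee back is what makes the stated figures balance.

```python
# Hypothetical two-outcome casino game: outcome A (50.5%) forfeits the
# $100 fee; outcome B (49.5%) returns the fee plus $102 in winnings.
players = 10_000
winners = int(players * 0.495)   # expected number of outcome-B players
losers = players - winners       # expected number of outcome-A players

fees_kept = losers * 100         # fees forfeited by losing players
winnings_paid = winners * 102    # winnings paid out (fees returned separately)
profit = fees_kept - winnings_paid
print(fees_kept, winnings_paid, profit)  # 505000 504900 100
```

The expected profit is tiny but strictly positive, which is all the argument requires.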
As low as it is, the casino can still budget for it and, in the long run, can expect to make a profit on it. In fact, something like this can be done for any process that is not random. Non-random processes are always predictable in the aggregate, though the sample size may need to be exceedingly large. Since deterministic processes are perfectly predictable for every occurrence, it makes sense to refer to processes that are predictable only in the aggregate as partially deterministic, since they do contain a certain deterministic element.

II. A SIMPLE EXAMPLE
The aforementioned game of craps simply involves betting on the outcome of a roll of a pair of dice. The game is as old as dice themselves and serves as a useful example of how some level of partial determinism can arise from randomness. It also provides a straightforward method for introducing a few additional terms. Those wishing to delve more deeply into this subject are encouraged to dive into Refs. [1, 9, 10].

Consider a fair, six-sided die. As a fair die, it is assumed that upon rolling it, all six outcomes are equally likely. In fact, casinos paint the dots on their dice rather than use the usual divots because the divots are not equally distributed and thus throw off the center of mass, which changes the long-term probabilities. While real dice are never truly random, a so-called 'fair die' is a theoretical ideal and is thus random.

Now consider a roll of two fair dice as in a game of craps. Since they are both fair dice, each outcome on each individual die is equally likely. We are also assuming that we can easily distinguish the dice from one another, e.g. perhaps one is blue and one is red. Considered together, then, there are thirty-six possible outcomes—configurations—of a single, simultaneous roll of both. Since we can distinguish between the two dice, if the roll produces a four on the blue die and a three on the red die, this is an entirely different outcome from a three on the blue die and a four on the red one. Each of these configurations is referred to as a microstate.

But in craps, as in other games that use a pair of dice, we are often interested in the sum of the numbers on the faces. Thus we typically consider the roll of a pair of dice as giving us a number between two and twelve. We call this number the macrostate. If we look at each of the thirty-six microstates, we'll see that they can be grouped according to which macrostate they produce.
The number of microstates that will produce a given macrostate is known as the multiplicity and is given the symbol Ω. Note that the multiplicities of the macrostates for the roll of a pair of fair dice are not all equal. There are, for instance, six different combinations that can produce a roll of seven (I gave two of these six above). On the other hand, there is one and only one way to roll a two or a twelve.

The probability of a given roll (i.e. macrostate) is given by the multiplicity of that roll divided by the total multiplicity. So, for example, the probability of rolling a seven is six divided by thirty-six, or one-sixth. Conversely, the probability of rolling a two or a twelve is one-thirty-sixth. Table I lists the microstates for each macrostate of the pair of dice, giving the multiplicity and probability of each. Though we think of this as a single roll of a pair of dice, it is really two simultaneous rolls of individual dice. Each of these individual rolls is a random process, yet when they are considered together as a single roll, that single roll of the pair is partially deterministic. As should be clear from Table I, this behavior is not physical in the sense that the probabilities of the individual macrostates are due to the combinatorics of the problem, or what one might call 'mindless' mathematics. So a pithy counter to Einstein's objection might be that a dice-throwing God still produces a partially predictable result.

(Note: casinos also routinely replace their dice since the sides of dice can wear unevenly. See Ref. [7].)

Macrostate  Microstates (blue, red)                        Ω    Probability
    2       (1,1)                                          1    1/36
    3       (1,2), (2,1)                                   2    2/36 = 1/18
    4       (1,3), (2,2), (3,1)                            3    3/36 = 1/12
    5       (1,4), (2,3), (3,2), (4,1)                     4    4/36 = 1/9
    6       (1,5), (2,4), (3,3), (4,2), (5,1)              5    5/36
    7       (1,6), (2,5), (3,4), (4,3), (5,2), (6,1)       6    6/36 = 1/6
    8       (2,6), (3,5), (4,4), (5,3), (6,2)              5    5/36
    9       (3,6), (4,5), (5,4), (6,3)                     4    4/36 = 1/9
   10       (4,6), (5,5), (6,4)                            3    3/36 = 1/12
   11       (5,6), (6,5)                                   2    2/36 = 1/18
   12       (6,6)                                          1    1/36
                                              Total:      36    1

TABLE I: This table lists the microstates for each macrostate for a roll of a pair of six-sided dice. The multiplicity, Ω, is the number of microstates corresponding to a given macrostate.

There is one objection to this example that is worth considering. The numbers on the dice are entirely arbitrary. That is, we could have instead painted six different animals on the faces of each die. In this case we might be hard-pressed to identify any distinctive macrostates other than pairs, and we could eliminate the pairs by painting different animals on each die. Thus it seems as if the macrostates used in typical die rolls are entirely arbitrary in the sense that their relative import is based on a meaning that we assign to them. The labelling of the sides of the dice is not a fundamental property of the dice themselves. We can get around this problem and improve on our odds by, perhaps ironically, considering a model proposed by Einstein nineteen years before his comment to Born.
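The multiplicities and probabilities in Table I follow from a direct enumeration of the thirty-six microstates; a short sketch:

```python
from itertools import product
from collections import Counter

# Each microstate is an ordered (blue, red) pair, so (3, 4) and (4, 3)
# are distinct microstates; the macrostate is the sum of the two faces.
microstates = list(product(range(1, 7), repeat=2))
multiplicity = Counter(blue + red for blue, red in microstates)

print(len(microstates))                   # 36 microstates in total
print(multiplicity[7])                    # 6 ways to roll a seven
print(multiplicity[2], multiplicity[12])  # one way each for two and twelve

# probability of each macrostate = multiplicity / total multiplicity
probability = {m: count / 36 for m, count in multiplicity.items()}
```

The combinatorics alone fix the probabilities; nothing physical beyond distinguishability enters the count.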
III. EINSTEIN SOLIDS
In 1907 Einstein proposed a model of solids as sets of quantum oscillators. That is, each atom in such a solid is modeled in such a way that it is allowed to oscillate in any one of three independent directions. Thus a solid having N oscillators would consist of N/3 atoms. For a solid with N oscillators and q units of energy, the multiplicity is

Ω(N, q) = \binom{q + N - 1}{q} = \frac{(q + N - 1)!}{q!\,(N - 1)!}.    (1)

Now consider two Einstein solids that are weakly thermally coupled and approximately isolated from the rest of the universe. By weakly thermally coupled, I mean that the exchange of thermal energy between them is much slower than the exchange of thermal energy among the atoms within each solid. This means that over sufficiently short time scales the energies of the individual solids remain essentially fixed. Thus we can refer to the macrostate of the isolated two-solid system as being specified by the individual fixed values of internal energy. (For a further discussion, see Ref. [10].) Let's begin by considering a simple (albeit unrealistic) system. Suppose each of our two solids has three oscillators, i.e. N_A = N_B = 3, and the system has a total of six units of energy that can be divvied up between the oscillators. Suppose that we put all six of these units of energy into solid B. That means that there is only one possible configuration for the oscillators in solid A—they all contain zero energy. Conversely, there are twenty-eight configurations for the oscillators in solid B according to (1). The total number of configurations for the system as a whole is just the product of the two and thus is also twenty-eight.

Suppose that we instead put a single unit of energy into solid A with the rest going to solid B. In this case, there are three possible configurations for solid A since the single unit of energy we've supplied to it could be in any one of the three oscillators. The five remaining units of energy can be distributed in any one of twenty-one ways within solid B.
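Eq. (1) is easy to check numerically; a minimal sketch using Python's binomial coefficient reproduces the counts just quoted:

```python
from math import comb

def omega(n_osc, q):
    """Multiplicity of an Einstein solid, Eq. (1): the number of ways to
    distribute q indistinguishable units of energy among n_osc oscillators."""
    return comb(q + n_osc - 1, q)

print(omega(3, 6))               # 28: all six units in one three-oscillator solid
print(omega(3, 1), omega(3, 5))  # 3 and 21 for the one-unit/five-unit split
```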
But now the total number of configurations for the system is 3 · 21 = 63. Table II summarizes the energy distribution and corresponding multiplicity for this simple system, and Fig. 1 shows a smoothed plot of the total multiplicity, Ω_tot, as a function of the energy q_A contained in solid A.

q_A   Ω_A   q_B   Ω_B   Ω_tot = Ω_A Ω_B
 0     1     6    28     28
 1     3     5    21     63
 2     6     4    15     90
 3    10     3    10    100
 4    15     2     6     90
 5    21     1     3     63
 6    28     0     1     28

TABLE II: Energy distributions and multiplicities for a pair of weakly coupled Einstein solids with N_A = N_B = 3 and q = 6.

FIG. 1: This shows a smoothed plot of Ω_tot as a function of q_A for the data from Table II.

This tells us that the states for which the energy is more evenly balanced between the two solids are more likely to occur because there are more possible ways to distribute the energy in such cases. This is analogous to the example given in the previous section involving a pair of fair dice. There is no intentionality on the part of the system. In addition, the system is considered to be isolated from the rest of the universe and thus there is no environment driving these results. They are simply due to combinatorics. Each individual microstate of the combined system is assumed to be equally probable and thus the process of reaching one of these microstates from any other is completely random. It just happens that more of those microstates correspond to configurations in which the energy is more evenly divided between the two solids. Thus the system can undergo random fluctuations about the mean and still be more likely to be found in a microstate in which the energy is roughly equally divided between the two solids.

FIG. 2: (a) This shows a plot of Ω_tot as a function of q_A for a pair of Einstein solids when N, q ≈ a few hundred. (b) This is the resultant plot for the solids when N, q ≈ a few thousand.

But consider now what happens when we begin to scale the system up to more realistic sizes. Fig. 2a shows a plot of the total multiplicity of the system as a function of the energy in solid A when the total number of oscillators and the total number of energy units is a few hundred. Fig. 2b shows a plot of the same function when the number of oscillators and energy units is a few thousand. The larger our Einstein solids become, the narrower the peak of the multiplicity function. For realistic Einstein solids, the peak is so narrow that only a tiny fraction of macrostates have a reasonable probability of occurring. That is, random fluctuations away from equilibrium are entirely unmeasurable.

It is important to keep in mind that all we have done in Fig. 2 is to scale up the Einstein solids. Each individual microstate remains equally probable and thus the underlying process of moving from one microstate to another is entirely random. Yet, as the system grows larger and larger, it is increasingly likely to be found in only a very small number of macrostates. This means that we can make highly accurate predictions of which macrostates will occur given some initial input data. This is a dramatically scaled-up analogy to a game of craps. Though the fluctuations taking the system from one microstate to another are entirely random, the macrostate is very nearly deterministic. And though I have used a decidedly physical example, the result is simply a consequence of the combinatorial behavior of very large numbers. Thus we have a situation in which a nearly deterministic process can arise from random processes due solely to something that is almost entirely mathematical. In addition, unlike the situation with the dice, we are not arbitrarily assigning meaning to the microstates and macrostates.

An obvious question is whether or not it is possible to achieve perfect determinism with a pair of Einstein solids. Certainly in the limit that N, q → ∞ the multiplicity peak in our example becomes asymptotically thin. The limit in which a system becomes large enough that random fluctuations away from equilibrium become unmeasurable is known as the thermodynamic limit.
In other words, at some point our partially deterministic system becomes indistinguishable from a fully deterministic one. Where that transition occurs may depend on a host of factors, but the reason it is referred to as the thermodynamic limit is precisely because it is where conventional thermodynamic methods of analysis—which are deterministic!—become the most useful way to understand the behavior of a pair of solids that are in thermal contact with one another.

Of course, this is just a single example from one area of physics, but it serves to show that near-perfectly deterministic macroscopic processes can arise from a very large number of random microscopic processes due to the 'mindless' behavior of mathematics.
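The narrowing of the multiplicity peak can be made quantitative. The sketch below is my own illustration (the function names and the ±1% window are choices of mine, not from the text): it computes the fraction of all microstates whose energy split puts q_A within one percent of q/2, and shows that this fraction grows as the solids are scaled up.

```python
from math import comb

def omega(n_osc, q):
    """Einstein-solid multiplicity, Eq. (1)."""
    return comb(q + n_osc - 1, q)

def peak_fraction(n, q, window=0.01):
    """Fraction of the combined system's microstates whose energy split puts
    q_A within +/- window*q of q/2, for two solids of n oscillators each
    sharing q units of energy in total."""
    # Vandermonde identity: summing omega(n, q_a)*omega(n, q - q_a) over all
    # q_a gives omega(2n, q), the total multiplicity of the combined system.
    total = omega(2 * n, q)
    half, w = q // 2, max(1, int(window * q))
    near = sum(omega(n, qa) * omega(n, q - qa)
               for qa in range(half - w, half + w + 1))
    return near / total

small = peak_fraction(100, 200)
large = peak_fraction(1000, 2000)
print(small, large)  # the fraction near the peak grows with system size
```

Scaling both N and q by ten concentrates noticeably more of the microstates inside the same fractional window, which is the narrowing the figures describe.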
IV. IMPLICATIONS FOR FREE WILL
Let us consider a toy universe in which all microscopic processes are random, their outcomes thus being equally probable. The only physical constraints that we will place on this toy universe are to limit the outcomes of each microscopic process to being finite in number and to require that these outcomes be distinguishable from one another. Macroscopic processes in such a toy universe would have varying levels of determinism based on the combinatorics and the nature of the processes themselves. For example, the microstates and macrostates of a pair of six-sided dice are different from the microstates and macrostates of one six-sided die and one eight-sided die. Thus the nature of the dice dictates which processes are allowed in each case (e.g. a roll of fourteen is not possible with a pair of six-sided dice). For our toy universe, we can think of any constraints as being dictated by the initial physical conditions of the universe itself.

It is worth asking, then, what it would mean for a hypothetical 'being' in such a universe to have free will. Free will is generally viewed as one's ability to freely choose between different courses of action. This requires, however, that when presented with a choice, an agent can reliably predict the outcome of some process. If I am, for instance, faced with the choice of carrots or broccoli as a vegetable side for my dinner, the essence of free will is that, free of unpredictable external factors, if I choose to have carrots I can have confidence that I will actually have carrots with my dinner, i.e. the carrots won't randomly and inexplicably turn into a potato the moment they touch my plate. The crucial but subtle difference here is that my choice in this example is between two different processes—the process of physically taking carrots from my refrigerator or the process of physically taking broccoli from my refrigerator—rather than two different outcomes of a single process.
So once I have chosen to carry out one or the other of these processes, I can have confidence that the multiplicity of one outcome of my chosen process is so much greater than the multiplicity of any other outcome that my desired result will actually occur, i.e. the probability of the most likely macrostate not occurring is utterly unmeasurable.

Of course, any beings in our hypothetical toy universe are unequivocally part of that universe and thus an amalgam of random processes themselves. If deterministic macroscopic processes arise from microscopic random ones solely due to the combinatorics of a large number of such microscopic processes, then it is worth asking if free will really does exist. This is certainly a fair question, but it misses the broader point. Regardless of what happens at the most fundamental level, the concept of free will is meant to be applied to sentient beings (which are inherently not fundamental) making conscious choices about the macrostates of large-scale systems. As sentient beings we expect that free will entails our ability to freely make a choice with the confidence that a specific outcome of our chosen process really does occur with a high degree of probability. For that to happen, certain processes must be at least partially deterministic, if not fully so.

This brings up an important distinction. There are really different levels of processes. We can refer to a process associated with a macrostate as a macroprocess. The constituent processes of a macroprocess would then be microprocesses. The macroprocess of simultaneously rolling a single pair of dice is composed of two microprocesses—the independent rolls of two individual dice. So the terminology refers to the level of the system and not necessarily the size of the system or its constituents.
The act of me pulling carrots out of a bin in my refrigerator is a macroprocess that actually consists of trillions of microprocesses involving the neurons in my brain, the electrical signals in my neurological system, the mechanical motion of the refrigerator parts, etc. These in turn are all made up of further constituent processes all the way down to the processes involving the fundamental particles and fields that constitute the material foundation of the entire system.

Free will thus generally involves choices about macroprocesses with varying degrees of confidence. I may be highly confident that the carrots in my refrigerator won't spontaneously turn into potatoes, but I'm a tad less confident that inserting the key into the ignition of my car will turn the car on. Certainly I expect it to turn on most of the time, but it is entirely plausible that something could go wrong and it won't turn on. I'm even less confident when I approach an unfamiliar intersection and don't know which way to go. Depending on the situation, my choice could essentially be entirely random. The key point here is that if all macroprocesses were entirely random, we wouldn't even have the illusion of free will because our choices would be meaningless since they would be based entirely on guesses. So free will requires that most macroprocesses be at least partially deterministic. But, crucially, microprocesses can still be random since their combinatorial behavior can lead to partially deterministic macroprocesses like the rolling of dice or the equilibrium state of two solids in thermal contact.
V. BOUNDARY CONDITIONS
There’s one final objection to this line of argument that should be addressed. It’s clearthat the emergence of determinism and free will in this model is not solely due to thecombinatorics alone. After all, the mathematics refers to something physical. As I saidbefore, the behavior of a six-sided die is different from the behavior of an eight-sided die.So at the most fundamental level there has to be something non-mathematical in order todistinguish, for example, a quark from a lepton or even the number one from the number two.But it is worth asking if the combinatorics itself can produce additional boundary conditionson the system that then further constrain its evolution. In other words, is it possible for asystem’s own internal combinatorics to change the probabilities of future macrostates?In the simple example using dice, no matter how many times we roll them, the com-binatorics alone will not change the probabilities of the macrostates. Certainly the dicecould wear down unevenly over time, but this is an external effect. But consider a pair ofEinstein solids in thermal contact as I described in the previous section. A microprocessfor such a system is the shifting of an energy unit from one oscillator to another. Thismicroprocess is fundamentally random. If we introduce a large number of energy units tosuch a system and assume it has a large number of oscillators, regardless of how those en-ergy units are initially distributed, over time the system will find itself limited to just afew possible microstates. Crucially, these random microprocesses don’t suddenly cease tooccur when the system reaches equilibrium. Energy continues to be passed around whilethe underlying microprocesses remain random, yet fluctuations away from equilibrium even-tually become unmeasurable. This is simply because a few microstates near equilibriumhave an enormously higher probability of occurring than all the other microstates. 
In this sense, the macrostate corresponding to equilibrium has imposed a boundary condition on future macroprocesses purely through combinatorics. So while the evolution toward equilibrium has no effect on the underlying microprocesses, which are presumably fixed by the inescapable laws of physics, it does have an effect on the future evolution of the aggregate macroprocesses for entirely combinatorial reasons.

So while free will does not allow us to alter the laws of physics, it does act as an introduced boundary condition that can allow for a certain amount of environmental forcing at the macro-level, and it seems that it is at least possible for that to happen for purely combinatorial reasons.
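The claim that random microprocesses persist while macro-level fluctuations vanish can be illustrated with a small simulation. This is a sketch under assumed parameters of my own choosing (50 oscillators per solid, 200 energy units, a fixed random seed): energy units hop between randomly chosen oscillators at every step, yet q_A settles near the equal-split value and then fluctuates only tightly around it.

```python
import random

def simulate(n_a=50, n_b=50, q=200, steps=200_000, seed=1):
    """Monte Carlo for two coupled Einstein solids. Each step is a random
    microprocess: move one energy unit from a randomly chosen oscillator to
    another. Returns solid A's energy, q_A, sampled every 1,000 steps."""
    rng = random.Random(seed)
    n = n_a + n_b
    energy = [0] * n
    for _ in range(q):
        energy[rng.randrange(n_a, n)] += 1  # start with all energy in solid B
    history = []
    for step in range(steps):
        if step % 1000 == 0:
            history.append(sum(energy[:n_a]))  # record solid A's energy
        src, dst = rng.randrange(n), rng.randrange(n)
        if energy[src] > 0:
            energy[src] -= 1
            energy[dst] += 1
    return history

hist = simulate()
print(hist[0])   # 0: solid A starts with no energy
print(hist[-1])  # settles near 100, the equal-split equilibrium value
```

The microprocesses never stop and never change character; the drift toward, and confinement near, q_A ≈ q/2 is entirely a matter of how many microstates sit near equilibrium.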
VI. CONCLUSION
There is little in this article that is actually speculative. Admittedly, I am considering highly simplified systems here, but they at least demonstrate that it is possible for something ordered and intentional to arise from the aggregate behavior of a collection of random processes with no external forcing, i.e. due solely to combinatorics. The behavior of such systems also suggests that it is entirely possible for free will to emerge from something far less ordered. In fact, both Eddington and Compton argued that the randomness of quantum mechanics was a necessary condition for free will [3, 5]. On the other hand, Lloyd has argued that even deterministic systems can't predict the results of their decision-making processes ahead of time [8]. Is free will just an illusion? Does it require randomness or does it require determinism? The answers to these questions undoubtedly lie in a deeper understanding of the transition from quantum systems to classical ones. In this essay I have shown that the seeds of such an understanding might be found in simple combinatorics. The mindless laws of mathematics might just be what allows the universe to evolve intentionality. At the very least, it is worth a deeper look.
Acknowledgments
I would like to thank Irene Antonenko for pressing me on the language of partial determinism. I stand by my use of the term, but Irene's comments helped me to clarify why I prefer it. Plus it made for a great after-dinner discussion that was enhanced by good dessert and good wine. I additionally thank our spouses and children who cleaned up around us.

[1] Arieh Ben-Naim. A Farewell to Entropy: Statistical Thermodynamics Based on Information. World Scientific, Singapore, 2008.
[2] G. Chiribella, G. M. D'Ariano, and P. Perinotti. Probabilistic theories with purification. Physical Review A, 81(6):062348, 2010.
[3] Arthur H. Compton. The Freedom of Man. Yale University Press, New Haven, 1935.
[4] G. M. D'Ariano, F. Manessi, and P. Perinotti. Determinism without causality. Physica Scripta, 2014(T163):014013, 2014.
[5] Arthur S. Eddington. The Nature of the Physical World. Cambridge University Press, Cambridge, 1928.
[6] Albert Einstein. Letter to Max Born, 4 December 1926. In Irene Born, editor, The Born-Einstein Letters. Walker and Company, New York, 1971.
[7] E. T. Jaynes. Where do we stand on maximum entropy? In R. D. Levine and M. Tribus, editors, The Maximum Entropy Formalism. MIT Press, Cambridge, MA, 1978.
[8] Seth Lloyd. A Turing test for free will. Philosophical Transactions of the Royal Society A, 370:3597–3610, 2012.
[9] Thomas A. Moore and Daniel V. Schroeder. A different approach to introducing statistical mechanics. American Journal of Physics, 65:25–36, 1997.
[10] Daniel V. Schroeder. An Introduction to Thermal Physics. Addison Wesley Longman, San Francisco, 2000.