The Pollen and the Electron: A Study in Randomness
Priyanka Giri and Tejinder P. Singh
Istituto Nazionale di Fisica Nucleare, Pisa 56127, Italy
Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India
[email protected], [email protected]
ABSTRACT
The random motion of a pollen grain in a glass of water is only apparently so. It results from coarse-graining an underlying deterministic motion - that of the molecules of water colliding with the grain. Not observing degrees of freedom on smaller scales can make deterministic evolution appear indeterministic on larger scales. In this essay we attempt to make the case that quantum indeterminism arises in an analogous manner, from coarse-graining a deterministic (but non-unitary) evolution at the Planck scale. The underlying evolution is described by the theory of trace dynamics, which is a deterministic matrix dynamics from which quantum theory and its indeterminism are emergent. One consequence of the theory is the Karolyhazy uncertainty relation, which implies a universal upper bound on the speed of computing, as noted also by other researchers.
I. THE POLLEN AND THE ELECTRON
When you look at a pollen grain in a glass of water under a microscope, the grain exhibits random movement. This is the famous Brownian motion. You would be forgiven for thinking that this random motion of small particles suspended in a fluid is a law of nature - something beyond the scope of Newtonian mechanics. But of course physicists have found out that this apparent randomness is a consequence of our ignorance. The molecules of water are colliding with the pollen in accordance with Newton's laws, but because of inevitable statistical fluctuations in the number of molecules hitting the grain, the motion of the grain appears stochastic. Randomness is a consequence of coarse-graining, and of not examining the perfectly deterministic motion on molecular resolution scales. The quantitative derivation, from atomic theory, of the observed parameters of Brownian motion fully supports this inference.

Now sample this. A beam of electrons is passing through a Stern-Gerlach apparatus, one at a time. Each electron has been carefully prepared to have its spin aligned, say, at forty-five degrees to the +z direction, and we want to measure the spin of the electron along the ±z axis. This is what the experimentalist finds: some electrons register their spin as +z and some register it as −z. The outcome is unpredictable and random, but the outcomes are found to obey the Born probability distribution. Yet the electron was evolving according to a perfectly deterministic law (the Schrödinger equation, or more precisely, the Dirac equation). Moreover, the Stern-Gerlach apparatus is itself made of elementary particles which obey deterministic quantum mechanics. Where then does the randomness come from, and where do the probabilities come from? Quantum mechanics has no answer to this question. Could it be that, as for the pollen grain in water, the randomness is a consequence of our ignorance of some underlying microscopic dynamics? Or is randomness a fundamental property of the quantum measurement process, not to be questioned any further? Is the quantum mechanical description of nature like our description of water as a thermodynamic fluid, with a deeper deterministic microscopic theory underlying QM, the same way that atomic theory underlies the fluid that is water? Or is QM the exact, ultimate dynamical law of nature? Physicists have struggled with this question ever since quantum theory was discovered.

At the heart of the conundrum is the quantum linear superposition principle, which asserts that a quantum system prepared in a superposition of two or more eigenstates of an observable stays in that superposition for an infinite time - until and unless the quantum system meets a classical measuring apparatus, when the superposition is broken and the quantum system randomly 'collapses' to one or the other of the superposed states. Is this randomness fundamental? Or is it a result of coarse-graining an underlying deterministic theory - the analog of the molecular resolution of the fluid that is water? Here we would do well to remember that quantum superposition has been tested in the laboratory only up to objects made of about 25,000 elementary particles. Does superposition hold for objects larger than this? Maybe yes, maybe no. We don't know. But we know for sure that for macroscopic objects, made of say $10^{18}$ particles or more, such as chairs, tables, stars and galaxies, superposition does not hold.

Even though a chair is made of particles which by themselves obey superposition, the superposition vanishes when a large number of particles are bound together. Strange! And what counts as large? We don't know!
In the 1980s, three Italian physicists, Ghirardi, Rimini and Weber, and an American, Phil Pearle, put forth a beautiful explanation [1, 2] of the above conundrum. They said: let us modify QM slightly. Instead of saying that a superposition of two states of a particle lasts forever, let us assume that it lasts for a very large, but finite, time - say for a time T comparable to the age of the universe, $T \sim 10^{17}$ s. After a mean life-time T, the superposition spontaneously and randomly collapses to one of the many superposed states. This tiny change to QM is enough to take us home. For now, if two particles are entangled together, the superposition collapses if either one of them collapses, taking the other particle with it, so that the superposition life-time is halved to T/2. If three particles are entangled, the life-time is T/3, and so on. If N particles are entangled, the superposition life-time is down to T/N, and if N is as large as $10^{23}$, as in a chair, this life-time is as small as $10^{-6}$ s - too small to be easily observable (the scaling is illustrated in the numerical sketch at the end of this section). Such an elegant solution to what happens in the Stern-Gerlach apparatus! Microscopic superpositions are very long-lived; but because of entanglement and spontaneous collapse, macroscopic superpositions are extremely short-lived.

This so-called theory of spontaneous localisation is currently being tested in laboratories around the world. Dynamical randomness has been introduced into what was earlier a deterministic quantum mechanics. The electron behaves like the pollen in water - only, the random hits are extremely rare for an electron [hence superposition is long-lived]. But if the electron is replaced by a larger object, such as a chair made of many, many particles, the hits become very, very frequent [superposition is extremely short-lived]. But wait a minute. The pollen is being randomly hit by molecules of water. Who is doing the random hitting when the electron meets the Stern-Gerlach measuring apparatus, and who is hitting the chair to keep it classical?! The answer is profound; it takes us to the deepest reaches of space-time, the Planck scale, where lengths are as small as $10^{-33}$ cm and times are as small as $10^{-43}$ s.
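As a quick check on these numbers, here is a minimal sketch of the T/N scaling; the lifetime T and the particle counts are the illustrative order-of-magnitude values used above, not measured parameters.

```python
# Order-of-magnitude check of the spontaneous-collapse lifetime T/N.
# T is the assumed single-particle superposition lifetime from the text.
T = 1e17  # seconds, roughly the age of the universe

for N in (1, 2, 3, 25_000, 1e23):
    print(f"N = {N:>10g} entangled particles -> lifetime ~ {T / N:.1e} s")

# N = 1e23 (a chair-sized object) gives ~1e-6 s: macroscopic
# superpositions decay far too quickly to be observed.
```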
II. ATOMS OF SPACE-TIME-MATTER: PREDICTABILITY REGAINED

We know well that the gravitational effects of bodies are described by Newton's inverse square law of gravitation, or, in the relativistic case, by Einstein's general theory of relativity. But these laws are for classical bodies. What is the gravitational effect of an electron, say when it is in a superposed state, having just passed through the two slits in a double-slit interference experiment? How is gravity to be described when quantum superpositions are present?

A quantum particle in a superposition of states is delocalised; it is wavy and, in a sense, is everywhere. Its gravitational effect is also everywhere. There is then no meaning to distinguishing the particle from its gravitation: the source and the field become one and the same. And if the universe consisted entirely of such quantum particles [no classical bodies present], it would no longer be meaningful to talk of classical space-time. For space is that which is between classical bodies, and time is that which is between classical events.

In such a situation, we talk of 'atoms' of space-time-matter [STM] [3]. An electron together with its gravitation is an STM atom - the STM electron. It is described by a matrix whose elements are Grassmann numbers. Grassmann numbers anti-commute with each other; the square of every Grassmann number is zero. Every STM matrix can be written as the sum of a bosonic matrix and a fermionic matrix. In a bosonic matrix, the elements are even-grade, being made of products of even numbers of Grassmann elements. The bosonic matrix describes the would-be-gravity part of the STM atom. The fermionic matrix has elements which are odd-grade Grassmann, and describes the would-be-matter part of the STM atom, e.g. the electron.

At the Planck scale, the dynamics of these matrix-valued STM atoms is a matrix dynamics [4]. But this dynamics is not quantum theory. Nor is there any longer a classical space-time. Rather, these STM atoms live in a Hilbert space, endowed with an algebra which can be mapped to a non-commutative geometry [as in the programme of Alain Connes and collaborators [5]]. Such a non-commutative algebra comes naturally equipped with a (reversible) time parameter, known as Connes time. The dynamics is simple to picture, as follows. Consider a classical mechanical system described by a set of configuration variables and canonical momenta, and a Lagrangian from which the equations of motion arise. Now raise each real-number-valued dynamical variable to the status of a matrix (equivalently, an operator). The Lagrangian itself becomes a matrix-valued polynomial; take the matrix trace of this polynomial. Define this trace Lagrangian (a c-number) as the new Lagrangian, and its integral over Connes time as the new action. Also, raise each space-time point to the status of an operator - this is the essence of non-commutative geometry: the geometric degrees of freedom no longer commute with each other. A space-time operator together with the matrix-valued matter variable defines the STM atom (a Grassmann matrix), and leads to a natural and elegant action principle. The dynamical degrees of freedom do not commute, and in fact obey arbitrary time-dependent commutation relations. The variation of the action with respect to the matrix variables gives equations of motion for the STM atom, which evolves in Hilbert space with respect to Connes time. Very significantly, the Hamiltonian of the STM atom is not self-adjoint, but has a tiny anti-self-adjoint part. A large collection of STM atoms, together with their dynamics, defines the fundamental universe [6-8]. This dynamics is deterministic and time-reversible. Predictability is regained at the Planck scale!
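To make the 'promote to matrices, then trace' recipe above concrete, here is a minimal numerical sketch for a toy model of our own choosing - a matrix-valued harmonic oscillator. The matrix size, the oscillator Lagrangian, and the variable names are illustrative assumptions; the actual theory works with Grassmann-valued STM matrices evolving in Connes time.

```python
import numpy as np

# Toy illustration of a trace Lagrangian (illustrative model, not the STM theory):
# take the classical oscillator Lagrangian L = qdot^2/2 - omega^2 q^2/2,
# promote q and qdot to matrices, and trace the resulting matrix polynomial.

N = 4                                  # size of the matrix degrees of freedom
omega = 1.0
rng = np.random.default_rng(0)
Q = rng.standard_normal((N, N))        # configuration variable, now a matrix
Qdot = rng.standard_normal((N, N))     # its velocity with respect to the time parameter

L_matrix = 0.5 * (Qdot @ Qdot) - 0.5 * omega**2 * (Q @ Q)  # matrix-valued polynomial
L_trace = np.trace(L_matrix)                               # the c-number trace Lagrangian

# In trace dynamics, the momentum conjugate to Q is the trace derivative of
# L_trace with respect to Qdot; for this simple polynomial it is just Qdot.
P = Qdot
print("trace Lagrangian:", L_trace)
```

The point of the recipe is that varying the trace action with respect to the whole matrix Q yields matrix-valued equations of motion, just as varying an ordinary action with respect to q yields ordinary ones.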
However, in the laboratory we do not observe this Planck-scale matrix dynamics, wherein rapid variations occur over Planck time scales. Rather, we observe a coarse-grained dynamics at much lower energy scales; equivalently, over coarse-grained time intervals much larger than the Planck time. This averaging is in the same spirit in which averaging the molecular dynamics of many, many molecules of water defines the thermodynamic properties of the fluid that is water. Except that now the averaging is not over many STM atoms, but over the many Planck times that occur in defining one coarse-grained Connes time instant or interval. In other words, we want to know the mean motion of an STM atom after the rapid Planck-scale variations have been smoothed out. Beautifully, as long as the anti-self-adjoint part of the STM Hamiltonian can be neglected, this mean dynamics is the same as that given by quantum theory! There is still no space-time, but the commutation relations satisfied by the averaged matrix variables are now those of quantum theory. The averaged STM atom is akin to the pollen, and the underlying rapid variations on the Planck scale are akin to the molecules of water which push the pollen around. Under certain circumstances, which we now describe, these rapid variations become significant and disrupt the mean motion. These rapid variations provide the sought-for random hits: negligible for a single STM atom, but crucial when many STM atoms get entangled.

When a sufficiently large number of STM atoms get entangled, an effective length scale associated with the entangled system goes below the Planck length, the anti-self-adjoint part of the Hamiltonian becomes significant, and the coarse-graining approximation leading to emergent quantum theory breaks down. The individual sub-Planck motion of each STM atom tries to pull the mean-field entangled system its own way. These pulls are inevitably random, because there are so many STM atoms in the entanglement. These random pulls are the equivalent of the molecules of water that push the pollen around. The extremely frequent random hits result in spontaneous localisation and classicality of the fermionic (matter) part, accompanied by the emergence of a classical space-time geometry obeying the laws of classical general relativity. When an electron passes through the Stern-Gerlach apparatus, its exact time of arrival at the apparatus, down to Planck-scale resolution, is crucial. This arrival time decides which STM atom's hit (from the apparatus) comes into play, and determines which state the electron will evolve to. The randomness of the time of arrival of the electron makes a perfectly deterministic (though non-local) underlying dynamics appear random. The electron meeting a measuring apparatus is precisely like the pollen in a glass of water, in so far as determinism and randomness are concerned. But the Planck-scale space-time-matter foam looks extremely different from the classical space-time and quantum matter fields we are accustomed to.

One far-reaching consequence of the STM matrix dynamics is that it predicts space-time to be holographic. The theory predicts [9, 10] that if one were to use a measuring apparatus to measure a length L, there will always be a minimum uncertainty ΔL in this measurement, given by

$(\Delta L)^3 \sim L_P^2\, L \qquad (1)$

where $L_P$ is the Planck length. This defines the smallest possible fundamental volume inside a spatial region of size L, implying that the number of information units grows as $L^3/(\Delta L)^3 \sim (L/L_P)^2$. This indeed is the holographic principle: the amount of information in a region increases not as its volume, but as its area. This same principle also leads us to a derivation of the Bekenstein-Hawking entropy of a black hole (one-fourth the area of the black hole) from the microstates of the STM atoms which constitute the black hole [11]. This holographic inference also implies an upper bound on the ability of a computer to compute! Furthermore, the spontaneous localisation of a sufficiently large number of entangled STM atoms necessarily results in the formation of a black hole - that simplest and most beautiful of classical objects, characterised only by mass, charge, and angular momentum. And if one tries to make a probing device which can go sub-Planckian and experience the deterministic matrix dynamics at play there, the device necessarily becomes a black hole. It cannot communicate knowledge of the deterministic dynamics to the outside universe. It is as if the STM atom has two pristine states: the matter-dominated state (the quantum electron) and the gravity-dominated state (the black hole). The quantum electron and the classical black hole are dual states of each other - the former is the ultimate particle, and the latter is the ultimate computer.
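For a feel for the numbers, the sketch below evaluates Eq. (1) and the resulting holographic bit count; the Planck length and the sample lengths are standard order-of-magnitude inputs.

```python
# Numerical illustration of Eq. (1) and the holographic information bound.
L_P = 1.6e-33  # Planck length in cm (order of magnitude)

def delta_L(L):
    """Minimum length uncertainty from (Delta L)^3 ~ L_P^2 * L (cm)."""
    return (L_P**2 * L) ** (1.0 / 3.0)

def holographic_bits(L):
    """Information units in a region of size L: L^3/(Delta L)^3 ~ (L/L_P)^2."""
    return (L / L_P) ** 2

for L in (1.0, 1.0e2, 4.4e28):  # 1 cm, 1 m, ~observable-universe radius in cm
    print(f"L = {L:.1e} cm -> Delta L ~ {delta_L(L):.1e} cm, "
          f"bits ~ {holographic_bits(L):.1e}")
```

For L of one centimetre this gives about $10^{66}$ bits, the figure quoted in the next section, and for the observable universe about $10^{122}$, the familiar holographic count.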
III. LIMITS TO COMPUTABILITY: BLACK HOLE AS THE ULTIMATE LOW-ENERGY COMPUTER

Analogous to the uncertainty relation for lengths, the STM matrix dynamics also predicts an uncertainty relation for measurements of time. If a device is used to measure a time interval T, there will be a minimum resolution (uncertainty) Δt, given by

$(\Delta t)^3 \sim t_P^2\, T \qquad (2)$

where $t_P$ is the Planck time, defined by $t_P = \sqrt{G\hbar/c^5}$ and numerically equal to about $10^{-43}$ s. This lower limit sets a bound on the speed of a computer: no computer can ever complete one computational step in a time less than Δt. If the computer runs for a time T, the memory space K available for computation is of the order $K \sim T/\Delta t \sim (\Delta t/t_P)^2$. This implies that $K/(\Delta t)^2 \sim t_P^{-2} \sim 10^{86}\ {\rm s}^{-2}$ is a universal constant. One could try to increase a computer's computing power by making it run longer (higher T), but that reduces its computing speed $\nu \equiv 1/\Delta t$; hence this universal bound. No computer can be so long-lasting and so efficient as to beat this bound on $K\nu^2$. For comparison, our laptops perform about $10^{10}$ operations per second. This universal bound has also been derived by other researchers earlier, using semiclassical heuristic arguments [12, 13]. However, ours is the first rigorous derivation stemming from quantum foundations and quantum gravity. There is an upper bound on computability because a chair cannot be in more than one place at the same time!

The Bekenstein-Hawking entropy of a black hole is far, far higher than the Boltzmann entropy of normal systems of comparable mass. This entropy can be equated to the Shannon entropy one associates with information, because the entropy comes from coarse-graining over the microstates of STM atoms. Since a high entropy means a high amount of information hidden away, a black hole is the most efficient information storage device, the number of bits being given by $(L/L_P)^2$, as we saw above, where L is the linear extent of the black hole. If L is one centimetre, this gives about $10^{66}$ storage bits per cubic cm. This would make a black hole the ultimate computer. Applying the above uncertainty relation to a black hole, taking $L \sim GM/c^2$ to be the size of the black hole of mass M, and assuming that the computational speed is determined by $c\,\Delta t = L$, we get that the life-time of the black hole computer is $T \sim G^2 M^3/\hbar c^4 = t_P\,(M/m_P)^3$. Here $m_P$ is the Planck mass. Remarkably, this also happens to be the time-scale over which a black hole disappears as a result of Hawking evaporation! This consistency of arguments establishes a fundamental connection between quantum unpredictability and limits to computability. It is easy to check that the black hole evaporation time satisfies the computability bound $T\nu^3 \sim t_P^{-2}$. Thus a computer with the same qualities as a black hole would be the ultimate low-energy computer.
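As a rough consistency check, the sketch below evaluates these scalings for a black hole computer of one solar mass (our illustrative choice); the constants are rounded SI values.

```python
# Order-of-magnitude check of the computability bound K * nu^2 ~ t_P^{-2}
# for a solar-mass black hole computer (rounded SI values throughout).
t_P = 5.4e-44        # Planck time, s
m_P = 2.2e-8         # Planck mass, kg
G, c = 6.7e-11, 3.0e8

M = 2.0e30                            # one solar mass, kg
dt = G * M / c**3                     # one computational step, from c*dt = L ~ GM/c^2
nu = 1.0 / dt                         # computing speed
T = t_P * (M / m_P) ** 3              # lifetime ~ Hawking evaporation time-scale
K = T / dt                            # available memory space

print(f"step time   dt ~ {dt:.1e} s")
print(f"lifetime     T ~ {T:.1e} s")
print(f"K * nu^2       ~ {K * nu**2:.1e}  (compare t_P^-2 ~ {t_P**(-2):.1e} s^-2)")
```

The product $K\nu^2$ indeed comes out at about $10^{86}\ {\rm s}^{-2}$, saturating the universal bound, independently of the mass chosen.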
In order to treat the black hole as a computer, it has to pass the Turing completeness test: can the system simulate a Turing machine or not? The system should be able to simulate a Turing machine irrespective of runtime and memory use. Black holes, too, can act as a Turing machine under certain limitations, like any other physical computer. The tape for the Turing machine is the black hole's contents itself, and the requirement of an extensible tape can be met by increasing the mass of the black hole. The external observer moves in order to shift the tape, and reads the output via Hawking radiation. Finally, there exists a set of instructions which form a Turing-complete language from these physical components: the state of a position on the tape may be changed by irradiating that particle with light, the head may be moved, and information may be read by the head from the Hawking radiation [14]. (A schematic rendering of this machine is sketched below.)

The above considerations apply at low energies, outside the black hole, and are hence approximate. The picture changes dramatically if we enter the black hole or approach Planck energies, where space-time is lost.
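The sketch below renders this as a minimal Turing machine in code, with comments mapping each component onto the black-hole picture; the rule table and the mapping are our schematic illustration, not a construction taken from [14].

```python
# A minimal Turing machine, with comments mapping each component onto the
# black-hole computer described above (the mapping is schematic).
def run_turing(tape, rules, state="A", head=0, max_steps=100):
    tape = dict(enumerate(tape))          # tape cells ~ the black hole's contents
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape.get(head, 0)        # "reading" ~ observing Hawking radiation
        write, move, state = rules[(state, symbol)]
        tape[head] = write                # "writing" ~ irradiating a tape particle
        head += move                      # "moving the head" ~ the observer moving
    return [tape.get(i, 0) for i in range(min(tape), max(tape) + 1)]

# A toy rule table: a unary incrementer that walks right and appends a 1.
rules = {("A", 1): (1, +1, "A"), ("A", 0): (1, 0, "HALT")}
print(run_turing([1, 1, 1], rules))   # -> [1, 1, 1, 1]
```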
IV. PREDICTABLE QUANTUM COMPUTING AT THE PLANCK SCALE

At the Planck scale, we have a deterministic matrix dynamics of atoms of space-time-matter. These evolve with respect to Connes time τ. If ever a computer were to be made at the Planck scale, these STM atoms would be the entities to make it from. It would be a quantum computer alright, and still a Turing machine, but with a difference. Recall that the matrix dynamics, though deterministic, is non-unitary, and there is no classical space-time. So we can make a quantum computer by superposing states of one or more STM atoms, but this superposition will not last forever. Depending on how many STM atoms are entangled in the superposition, there will be a deterministic decay to one of the states, sooner or later, according to Connes time. The beauty of this quantum computer is that when a measurement is made, the outcome is predictable, neither random nor probabilistic. The predictable nature of such outcomes gets rid of errors that might be associated with the stochastic nature of the outcome (collapse of the wave function) in a conventional quantum computer. And it is decidedly advantageous to have a completely predictable quantum computer, rather than one whose final outcomes are unpredictable because of the notorious quantum measurement problem.

How might one realise such a Planck-scale computer? One way is to send the observer inside the black hole, all the way to the classical black hole space-time singularity. In our matrix dynamics there is no such singularity, it having been replaced by the finite dynamics of STM atoms. But our poor observer would nonetheless be crushed to oblivion in these hostile environs. We may instead conjure up a Maxwell's demon, who watches and manipulates these STM atoms, treats their matrix dynamics as an initial value problem, and quantum-computes with them. The predictable nature of the outcomes makes this a kind of hyper-computation, going beyond the reach of classical and quantum computers as we understand them at present. Sadly though, no such demon can communicate the results of such computations to his human friends outside the black hole. And the reason is illuminating!

While at it, we point out that in our theory there is no such thing as the black hole information loss paradox. The conventional statement of the paradox is that an initial quantum state of a matter field in the vicinity of the black hole evolves unitarily according to quantum theory, and gives rise to thermal Hawking radiation. Complete evaporation of the black hole would convert the initially pure quantum state into the mixed state that Hawking radiation is, thus violating the unitarity of quantum theory. For us, though, this is not how it works. Let us treat as the full system the quantum matter field and the classical black hole. The very process of black hole formation is non-unitary, it having resulted from the spontaneous localisation of an enormous collection of entangled STM atoms. And we know that spontaneous localisation results from non-unitary evolution: the associated length scale to which localisation takes place is given by $L_S = L_{\rm eff}^2/L_P$. Here $L_{\rm eff} = L/N$ is the effective Compton wavelength of an object made of N entangled particles, each having Compton wavelength L. If $L_{\rm eff}$ is less than the Planck length, spontaneous localisation results in a black hole, and this requires that the object be at least as massive as the Planck mass.
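A quick numerical check of this threshold, taking electrons as the constituents (an illustrative choice of ours); the constants are rounded SI values.

```python
# When does spontaneous localisation of N entangled particles make a black hole?
# Criterion from the text: L_eff = L/N < L_P, i.e. total mass N*m > Planck mass.
hbar, c = 1.05e-34, 3.0e8
L_P = 1.6e-35        # Planck length, m
m_P = 2.2e-8         # Planck mass, kg

m_e = 9.1e-31                       # electron mass, kg (illustrative constituent)
L = hbar / (m_e * c)                # single-particle Compton wavelength
N_crit = L / L_P                    # L_eff = L/N drops below L_P at this N

print(f"Compton wavelength of one electron: {L:.1e} m")
print(f"N needed for black hole formation:  {N_crit:.1e}")
print(f"check: N*m = {N_crit * m_e:.1e} kg vs m_P = {m_P:.1e} kg")
```

About $10^{22}$ entangled electrons are needed, and their total mass indeed comes out equal to the Planck mass, as the criterion requires.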
Now, this process, though non-unitary, is deterministic. The information about black hole formation is coded in the anti-self-adjoint part of the fermionic Hamiltonian of the entangled STM atoms. Upon Hawking evaporation, this information is not lost. It is present in the evaporated radiation, but at sub-Planckian length scales. To detect this correlation of entanglements in the Hawking radiation, one would have to probe the radiation at sub-Planck scales, but that would again result in the probe becoming another new black hole!

Nonetheless, in spite of black holes turning up all over the place, it is possible, at least in principle, to make a fully predictable quantum computer using our matrix dynamics in the laboratory. The computer and the apparatus that measures the outcome are the two sub-systems of a combined deterministic system, with the condition that the total mass is less than the Planck mass, so that at all events black hole formation is avoided. To make the quantum computer, a set of entangled STM atoms is employed, with the initial conditions of the matrix dynamics [i.e. the initial values of the matrix components] precisely known. The quantum computation then proceeds just as in a conventional quantum computer, noting that the number of qubits is small enough that the spontaneous localisation lifetime is much longer than the duration of the computation. When the time comes to measure the output (which will be one of the matrix components), the system interacts (deterministically) with a much larger collection of STM atoms (a second quantum system). This step is analogous to an electron arriving at the photographic plate (the measuring apparatus) in a double-slit interference experiment. Except that now the plate is replaced by a large entangled quantum system, with total mass such that the non-unitary evolution becomes significant and spontaneous localisation sets in rapidly, on a measurable time scale. The quantum superposition present in the quantum computer will decay, deterministically, to a predictable outcome, which can be programmed algorithmically, knowing the rules of the matrix dynamics. We have a quantum Turing machine with predictability.

Predictability, or the lack of it, and computability, or the lack of it, are not absolute givens. These properties are determined by the physical laws of nature. We have shown that, fundamentally, nature is deterministic and predictable. Einstein was right about this, even though he was wrong in hoping that the physical world is local. Quantum unpredictability is only a consequence of our ignorance of the world at the Planck scale, much the same way that the random motion of a pollen grain in a glass of water is only apparently unpredictable. These new developments impact how we think about computability, and have implications for the future of computers. Who knows, future developments in physics might lead to a re-thinking of undecidability and uncomputability as well? Mathematical theorems are based on axioms. These axioms often tend to reflect our perceptions of the physical world, and the latter change with our understanding of physical theories.

REFERENCES

[1] Gian Carlo Ghirardi, Alberto Rimini, and Tullio Weber, "Unified dynamics for microscopic and macroscopic systems," Phys. Rev. D 34, 470-491 (1986).
[2] Gian Carlo Ghirardi, Philip Pearle, and Alberto Rimini, "Markov processes in Hilbert space and continuous spontaneous localization of systems of identical particles," Phys. Rev. A 42, 78-89 (1990).
[3] Maithresh Palemkota and Tejinder P. Singh, "Proposal for a new quantum theory of gravity III: Equations for quantum gravity, and the origin of spontaneous localisation," Zeitschrift für Naturforschung A, 143 (2019), DOI:10.1515/zna-2019-0267, arXiv:1908.04309.
[4] Stephen L. Adler, Quantum Theory as an Emergent Phenomenon (Cambridge University Press, Cambridge, 2004).
[5] A. Connes, "Noncommutative geometry 2000," in Visions in Mathematics - GAFA 2000 Special Volume, Part II, edited by N. Alon, J. Bourgain, A. Connes, M. Gromov, and V. Milman (Springer, 2000), p. 481, arXiv:math/0011193.
[6] Tejinder P. Singh, "Spontaneous quantum gravity," arXiv:1912.03266v2 (2019).
[7] Tejinder P. Singh, "From quantum foundations to spontaneous quantum gravity: an overview of the new theory," arXiv:1909.06340 [gr-qc], to appear in Zeitschrift für Naturforschung A (2020).
[8] Tejinder P. Singh, "Nature does not play dice on the Planck scale," arXiv:2005.06427, Int. J. Mod. Phys., to appear (2020).
[9] Tejinder P. Singh, "Proposal for a new quantum theory of gravity V: Karolyhazy uncertainty relation, Planck scale foam, and holography," arXiv:1910.06350 (2019).
[10] Tejinder P. Singh, "Dark energy as a large scale quantum gravitational phenomenon," Mod. Phys. Lett. A 35, 2050195 (2020), arXiv:1911.02955, DOI:10.1142/S0217732320501953.
[11] Maithresh Palemkota and Tejinder P. Singh, "Black hole entropy from trace dynamics and non-commutative geometry," arXiv:1909.02434v2 [gr-qc] (2019).
[12] Y. Jack Ng, "Entropy and gravitation: From black hole computers to dark energy and dark matter," Entropy 21, 1035 (2019).
[13] Seth Lloyd, "Ultimate physical limits to computation," Nature 406, 1047 (2000).
[14] G. R. Andrews III, "Black hole as a model of computation," Results in Physics, 102188 (2019).