Strong emergence in condensed matter physics
Barbara Drossel, Institute of Condensed Matter Physics, TU Darmstadt
September 4, 2019
When I was a physics student, I often had the impression that I did not really understand the material that was presented. The reason was that the presented calculations were claimed to be based on the 'fundamental theory', usually quantum mechanics, but many of the steps that were made did not seem to really follow from the equations of quantum mechanics. These equations are deterministic, linear in the wave function, and invariant under reversal of the direction of time. In contrast, the calculations presented in the classes appeared to involve concepts that are incompatible with these features. Time reversal symmetry was broken when dealing for instance with the scattering of a quantum particle at a potential: the incoming particle is assumed to be not affected by the potential, but the outgoing particle is. Chance is introduced when basing all of statistical physics on probabilities, or when transition probabilities between quantum states are calculated, as for instance in scattering theory. The supposedly 'simple' Hartree-Fock theory, a so-called mean-field theory that is used for calculating approximately the quantum mechanical ground states of many-electron atoms, is nonlinear in the wave function. Furthermore, elements from classical mechanics and quantum mechanics are often mixed, for instance when describing electrons as balls when explaining the origin of the electric resistance, or when calculating the configuration of molecules by assuming that the atomic nuclei have well-defined positions in space. Whether on purpose or unintentionally, many courses and textbooks made us students believe that in principle everything follows from a set of fundamental laws, but that in practice it is inconvenient or infeasible to do the exact calculation, and therefore approximations, plausible assumptions, intuitive models, and phenomenological arguments are made.
Only years later, I slowly began to understand that my problems had not primarily been due to a lack of talent and understanding on my side, but that there are indeed fundamental and interesting issues behind all these questions, some of which are the subject of lively discussions in the philosophy of physics.

Probably the most important factor that made me change my views was becoming a physics professor and teaching these courses myself. In particular the courses on statistical physics and condensed matter theory showed me that even the most advanced textbooks and research articles contain concepts, arguments, and steps that are a far cry from a strict derivation of the phenomena exclusively from a set of basic axioms or mathematically expressed laws. Looking back to my time as a student, I wish I had been taught about the interesting philosophical questions surrounding physics. Because we were lacking this information, I and probably many of my fellow students thought that physics is the most fundamental science, and that by learning quantum physics and particle physics we would learn the laws that rule nature at the most fundamental level. As a consequence of this experience, I have in the meantime adopted the habit of pointing these philosophical issues out to students when presenting the course material or when teaching seminars. When writing down the deterministic, time-symmetric equations of classical mechanics, electrodynamics, or quantum mechanics, I address the question whether this implies that nature is indeed deterministic or time-symmetric. When mentioning probabilities in statistical mechanics, I address the question how this relates to the supposedly more 'fundamental', deterministic microscopic theories. When presenting the various models and methods of condensed-matter theory, I discuss how these models contain a mixture of elements from quantum and classical physics.
Furthermore, I have now for a couple of years discussed the issue of emergence and reduction in a seminar that I teach during the winter semester to master students of physics.

The following pages will explain in a more detailed manner that condensed matter physics cannot be fully reduced to the supposedly 'fundamental' theory, which is the quantum mechanics of 10^23 particles. This means that many properties of condensed-matter systems are strongly emergent. It also means that the macroscopic properties of condensed matter systems have a top-down causal influence on their constituents. In this way my contribution relates to the overall topic of this book and the workshop from which it results, which is top-down causation.

The outline of this paper is as follows: First, I will give a series of examples of condensed-matter systems that show emergent phenomena and that illustrate the issues to be discussed subsequently. Then, I will define the concepts of reduction and emergence, with a focus on the distinction between weak and strong emergence, as they will be used later in the article. Next, using the texts by three Nobel laureates in condensed matter theory, I will show how condensed matter research is done in practice, and I will supplement it with insights from my own field of expertise, which is statistical physics. Based on all this information, we will then obtain a list of reasons for accepting strong emergence in physics. Finally, I will deal with some widespread objections.

Solids, liquids and gases are systems of 10^23 particles that show many properties that the particles themselves don't possess: pressure, temperature, compressibility, electric conductivity, magnetism, specific heat, crystal structure, etc. It is an important goal of statistical physics and condensed matter theory to explain or even predict these properties in terms of the constituent atoms or molecules and their interactions.
And these two fields of physics have indeed been very successful at relating these properties to the microscopic makeup of the respective system.

One reason for this success is that these systems can be discussed without the need to refer explicitly to their context. The listed properties are properties of an equilibrium system. The wider context enters only implicitly, as it determines which objects are present, how they are arranged, and what the environmental variables are, such as temperature or pressure or the applied electrical or magnetic field.

Other condensed matter systems are open systems or driven systems: they can show patterns and structures that depend crucially on their being embedded in a certain context, as they obtain a continuous input of energy and/or matter from their environment and pass energy and matter to their environment in a different form. An important example is thermal convection: when a gas or liquid is heated from below such that it is cooler at the upper surface than at the lower, the gas or liquid can be set into motion in the form of convection rolls, which transport heat efficiently from the warm to the cool surface. Such differences in temperature drive, to a considerable part, the weather and climate on earth. In this case, the temperature differences are due to the sun's radiation heating the earth's surface more at some places than at others. A particularly fascinating example of patterns in open systems are spiral patterns. They are for instance observed in the Belousov-Zhabotinsky reaction, which is an oscillating chemical reaction that cycles through three different reaction products, with the presence of one aiding the production of the next one. When the reaction is put in a petri dish, one can observe spiral patterns. This pattern persists only for a limited time unless reaction products are continually removed and reactants are continually added. The most complex open systems are living systems.
They require a continuous supply of resources from which they obtain their energy, for instance oxygen and food, and they pass the reaction products, such as carbon dioxide and water, to the environment. More importantly, they communicate continuously with their environment, obtaining from it cues about danger and food, and shaping it, for instance by building burrows, farming plant lice, exchanging information with their conspecifics and other species, and by pursuing the production and survival of their offspring.

Driven systems are also investigated by theoretical physicists, employing theories that refer to the parts of the systems and their interactions; however, such theories use a different level of description and don't usually refer to quantum mechanics. In this article, we will mainly focus on the first class of systems, as they appear to be less complex and more easily accessible to a description by a fundamental microscopic theory.

As already indicated, the natural sciences, in particular physics, work in a reductionist way: they aim at explaining the properties of a system in terms of its constituents and their properties, and they are very successful at this endeavor. More precisely, physicists perform a theoretical reduction (and not an ontological reduction), which means that the property that one wants to explain is obtained mathematically by using elements of a microscopic theory that deals with the constituents of the system. Thus, for instance, the pressure of a gas is expressed in terms of the force that the atoms exert on the surface when they are reflected from it, and the temperature in terms of the kinetic energy of the atoms. The electrical resistivity is obtained from a mathematical description of the collisions of electrons with lattice vibrations and crystal defects. The oscillations and waves observed in the Belousov-Zhabotinsky reaction are reproduced from a set of chemical reaction equations for the concentrations of the molecules occurring in the system.
Even beyond the realm of physics proper this type of approach is successful: the occurrence of traffic jams can be calculated based on models for cars that follow simple rules involving preferred velocities and preferred distances, the dynamics of ecosystems from biological populations feeding on each other, and stock market crashes from models of interacting agents.

Emergence in a very general sense is the occurrence of properties in a system that the constituents don't have. In the previous section, examples of such emergent properties were given. So far there is nothing contentious about these general concepts of emergence and reduction. The discussion starts when one wants to assess the extent and nature of this reduction or emergence. Broadly speaking, there are two possible points of view: Either reduction is complete, at least in principle, and therefore emergence is weak; or reduction is incomplete and emergence is strong. Complete reduction means that the property or phenomenon to be described is completely contained in and implied by the microscopic theory. Incomplete reduction means that although the microscopic theory is successfully used, one invokes additional assumptions, approximations, hypotheses, or principles that are not part of the microscopic theory. Concordantly, weak emergence means that although the system has new properties, which are the emergent properties, these are fully accounted for by the microscopic theory. Strong emergence, in contrast, means that there is an irreducible, genuinely new aspect to the properties of the system as a whole that is not contained in or implied by the underlying theory. Strong emergence is closely associated with top-down causation. If higher-level properties are not fully accounted for by the lower-level theory, but if they affect the behavior of the system, then they also affect the way in which the constituents behave.
This means that there is top-down causation in addition to bottom-up causation.

The picture underlying these concepts of emergence and reduction is that of a hierarchy of entities and systems, with the objects of one hierarchical level being the parts of the objects on the next level. Several such hierarchies can be constructed. One hierarchy that leads up to human societies is the following: Elementary particles – atoms – molecules – cells – individuals – societies. Another hierarchy that stays within the world of physics is this one: Elementary particles – atoms – solids and fluids – planets and stars – solar systems – galaxies – universe.

In this article, we will mainly deal with condensed matter systems such as given in the first group of examples above. This means that we focus on the relation between the physics of mesoscopic or macroscopic systems and the relevant microscopic theory, which is a quantum mechanical description of approximately 10^23 atomic nuclei and their surrounding electrons. In terms of the hierarchy of objects, we discuss the relation between atoms and solids.

Condensed matter research in practice
In the following, I will quote three Nobel laureates in condensed matter theory as they explain how research in their field proceeds and how it is related to the supposedly fundamental quantum theory. All of them present strong arguments against a reductionist view. Nevertheless, two of them make surprisingly ambivalent statements concerning the nature of reduction.

Since condensed-matter physics often uses methods from statistical mechanics, and since statistical mechanics is my field of expertise, I will present additional arguments against the reductionist view by explaining how statistical mechanics relates to quantum mechanics and also to classical mechanics (which is for historical reasons still often used as a fundamental microscopic theory when justifying the laws of statistical mechanics).
In his famous paper 'More is different' [1], Anderson begins with the following sentences:

The reductionist hypothesis may still be a topic for controversy among philosophers, but among the great majority of active scientists, I think it is accepted without question. The workings of our minds and bodies, and of all the animate or inanimate matter of which we have any detailed knowledge, are assumed to be controlled by the same set of fundamental laws.

Somewhat further on, he explains:

(...) the reductionist hypothesis does not by any means imply a 'constructionist' one: The ability to reduce everything to simple fundamental laws does not imply the possibility to start from those laws and reconstruct the universe. (...) At each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other.

Then he gives the reason why this is so: It is because of symmetry breaking. The microscopic theory is invariant under spatial translation or rotation and thus for instance forbids the occurrence of nonzero electrical dipole moments or of crystal lattices. This is because in an electrical dipole the negative charges have a different center than the positive charges, which means that a particular direction – that in which the negative charges are shifted – is singled out. Similarly, the atoms on a lattice have preferred positions, and the lattice has main axes that point in specific directions. When a direction is singled out, rotational symmetry, which treats all directions equally, is broken. When lattice positions are singled out, translational symmetry, which treats all positions equally, is broken. Another important case of symmetry breaking is the handedness of many biological molecules.
This is an example that Anderson focuses on: While very small molecules, such as ammonia, can tunnel between the two tetrahedral configurations with a high frequency, and thus have on average zero electrical dipole moment, larger molecules, such as sugar, cannot tunnel during any reasonable time period. Symmetry breaking is very important in condensed matter physics: When the energetically favored state is a symmetry-broken state, the system spontaneously chooses one of these possible states, all of which have the same energy, as is analyzed in depth in the theory of phase transitions. This is relevant to phenomena such as superconductivity, magnetism, and superfluidity. Anderson's conclusion is that because of symmetry breaking the whole becomes different from the sum of its parts.

There is an interesting dichotomy in this article: on the one hand, Anderson emphasizes more than once that he accepts reductionism (in the sense that 'everything obeys the same fundamental laws'); on the other hand he gives great arguments why reductionism does not work. Even more, he admits that there is a logical contradiction between the laws of quantum mechanics and the fact that molecules have symmetry-breaking structures. Since he wrote his article, there has been a lot of progress in this field, and several authors have pointed out that in fact the problems of symmetry breaking and of molecular structure are closely related to the problems of interpreting the quantum mechanical measurement process and of understanding the relation between quantum mechanics and classical mechanics [2, 3, 4]. We will address this problem further below.
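The tunneling argument can be made semi-quantitative with a textbook double-well sketch (the formulas and numbers below are standard quantum-mechanics textbook material, not taken from Anderson's article):

```latex
% Two localized configurations |L>, |R> (e.g. the two pyramidal
% geometries of NH3). The stationary states are the symmetric and
% antisymmetric superpositions, split by the tunneling energy:
\[
  |\psi_\pm\rangle = \frac{1}{\sqrt{2}}\bigl(|L\rangle \pm |R\rangle\bigr),
  \qquad \Delta E = E_- - E_+ .
\]
% A molecule prepared in |L> oscillates to |R> with the tunneling period
\[
  \tau \sim \frac{h}{\Delta E}.
\]
% For ammonia, \Delta E is of order 10^{-4} eV, giving the well-known
% inversion line near 24 GHz; both stationary states have zero average
% dipole moment. For a large chiral molecule such as sugar, \Delta E is
% so tiny that \tau exceeds the age of the universe: the molecule stays
% in one handed configuration, and the symmetry is effectively broken.
```

The conflict Anderson points to is visible here: the exact low-energy eigenstates are always the symmetric superpositions, yet what we observe for large molecules are the localized, symmetry-broken configurations.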
In his often-cited article with Pines, “The Theory of Everything” [5], and also in his popular-science book “A Different Universe” [6], Laughlin expresses a view similar to Anderson's. The authors of [5] label the Schrödinger equation that describes all the electrons and nuclei of a system the “Theory of Everything” for condensed-matter physics and explain:

We know that this equation is correct because it has been solved accurately for small numbers of particles and found to agree in minute detail with experiment. However, it cannot be solved accurately when the number of particles exceeds about 10. No computer (...) that will ever exist can break this barrier because there is a catastrophe of dimension. (...) We have succeeded in reducing all of ordinary physical behavior to a simple, correct Theory of Everything only to discover that it has revealed exactly nothing about many things of great importance.

Just as Anderson, the authors first appear to commit to reductionism, and then explain why it is of no use for many important problems in condensed matter theory. Confusingly, they do not clarify what they mean by “correct” or “accurate”; they cannot possibly mean that the theory is exact, as they write themselves that it neglects several effects (such as the coupling to photons). Apparently they think that these effects are unimportant even when dealing with solids consisting of 10^23 atoms.

Their arguments why the “Theory of Everything” is useless for condensed matter theory include symmetry breaking, but are more general: In his book, Laughlin calls the principles determining many emergent features 'higher-order principles' (HOPs). In the article, he and Pines explain it as follows:

Experiments of this kind (i.e., condensed-matter experiments that allow us to measure the natural constants with extremely high precision) work because there are higher organizing principles in nature that make them work.
The Josephson quantum is exact because of the principle of continuous symmetry breaking. The quantum Hall effect is exact because of localization. (...) Both are transcendent, in that they would continue to be true and to lead to exact results even if the Theory of Everything was changed.

They go on to explain that “for at least some fundamental things in nature the Theory of Everything is irrelevant.” They introduce the concept of a quantum protectorate,

a stable state of matter whose generic low-energy properties are determined by a higher organizing principle and nothing else. (...) The low-energy excited quantum states of these systems are particles in exactly the same sense that the electron in the vacuum of quantum electrodynamics is a particle. (...) The nature of the underlying theory is unknowable until one raises the energy scale sufficiently to escape protection.

In fact, even though they do not mention this, the Theory of Everything, i.e., the many-particle Schrödinger equation, is also such a quantum protectorate, valid at low energies, obtained from general considerations, and independent of more microscopic theories, such as quantum field theories or string theories.

All this raises the question whether it is logically consistent to claim that a system is determined by a microscopic law and to claim simultaneously that this law is irrelevant since higher-order principles govern the behavior of the system. This is in my view the most interesting question raised by the article, but the authors don't address it. We will come back to this question later. While the examples chosen by Anderson were taken mainly from the room-temperature classical world, the examples used by Laughlin and Pines are mainly quantum phenomena. Therefore, the issue now is not primarily the relation between the quantum description and the classical description, but the relation between different quantum descriptions (the microscopic one and the one based on HOPs) of the same system.
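The single-valuedness argument behind the flux quantum that underlies the Josephson constant can be sketched in a few lines (a standard textbook derivation, not quoted from Laughlin and Pines):

```latex
% The superconducting condensate has a macroscopic wave function
% \Psi = |\Psi| e^{i\theta}. Going once around a loop enclosing the
% magnetic flux \Phi, the phase changes by (2e/\hbar)\Phi, because the
% charge carriers are Cooper pairs of charge 2e. Single-valuedness of
% \Psi requires this change to be a multiple of 2\pi:
\[
  \oint \nabla\theta \cdot d\mathbf{l} = \frac{2e}{\hbar}\,\Phi = 2\pi n
  \quad\Longrightarrow\quad
  \Phi = n\,\Phi_0, \qquad
  \Phi_0 = \frac{h}{2e} \approx 2.07 \times 10^{-15}\,\mathrm{Wb}.
\]
% Only the existence of a single-valued condensate phase enters; the
% microscopic details of the material drop out, which is why the result
% is "exact" in the sense of the quotation.
```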
(The Josephson quantum is due to the fact that the magnetic flux surrounded by a superconducting current can only be an integer multiple of a basic flux unit. This is because the magnetic flux affects the phase of the superconducting wave function, but the wave function must again have the same phase after going around the flux line once. The Hall effect gives rise to a transverse electric field in a two-dimensional conductor to which a magnetic field is applied that is perpendicular to the current. The Hall resistance is the ratio of the transverse electrical field to the electrical current, and it takes quantized values when the material is cooled down sufficiently. The explanation of this effect involves considerations about the topology of the wave function.)

In his article “On the nature of research in condensed-state physics” [7], Leggett writes:

No significant advance in the theory of matter in bulk has ever come about through derivation from microscopic principles. (...) I would (...) all the properties of condensed-matter systems are simply consequences of the properties of their atomic-level constituents, but that there is a very good reason not to believe it. (...) Indeed, I would be perfectly happy to share the conventional reductionist prejudice were it not for a single fact (...) which is so overwhelming in its implications that it forces us to challenge even what we might think of as the most basic common sense. This fact is the existence of the quantum measurement paradox. (...) this argument implies that quantum mechanics, of its nature, cannot give a complete account of all the properties of the world at the macroscopic level. (...) It follows that somewhere along the line from the atom to human consciousness quantum mechanics must break down.

I fully agree with his characterization of how condensed matter physics is done in practice and with his diagnosis that there must be limits to the validity of quantum mechanics.
In fact, together with George Ellis, I have written a paper that interprets the quantum measurement process in terms of top-down effects from the classical world on the quantum world [8].
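The measurement paradox that Leggett invokes can be stated formally: applying the unitary Schrödinger dynamics to system plus apparatus yields a macroscopic superposition rather than a single outcome (a schematic, standard rendering of the problem, not a quotation from [7] or [8]):

```latex
% Suppose a measurement interaction works correctly on basis states:
%   |up>   |A_0>  ->  |up>   |A_up>,
%   |down> |A_0>  ->  |down> |A_down>.
% Then, by linearity of the Schrodinger equation, a superposition must
% evolve into
\[
  \bigl(\alpha\,|\!\uparrow\rangle + \beta\,|\!\downarrow\rangle\bigr)\,|A_0\rangle
  \;\longrightarrow\;
  \alpha\,|\!\uparrow\rangle\,|A_\uparrow\rangle
  + \beta\,|\!\downarrow\rangle\,|A_\downarrow\rangle ,
\]
% i.e. into a superposition of macroscopically distinct pointer states --
% whereas every actual experiment yields a single outcome, with
% probabilities |\alpha|^2 and |\beta|^2.
```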
After these three examples from solid-state physics, I want to conclude this section with an example from my own field, which is statistical mechanics. Statistical mechanics does not deal with a particular system, but provides a framework for calculating properties of systems of 10^23 particles at finite temperature, be they solids, liquids, or gases. Condensed matter theory often uses concepts and calculations taken from statistical mechanics, for instance when calculating the specific heat of a system, the phase transition between a paramagnet and a ferromagnet, or the conditions (temperature, magnetic field, or size of electrical current) that destroy superconductivity. This means that one does not use the supposedly 'fundamental' theory but a theory that brings with it new concepts.

The basic concept of statistical mechanics is that of the probability of a state of a system or a subpart of the system. For instance, a fundamental theorem of statistical mechanics, from which almost everything else can be derived, is that in an isolated system in equilibrium all states that are accessible to the system occur with the same probability. Such an equilibrium state has maximum entropy, and therefore this theorem is closely related to the second law of thermodynamics, which states that entropy increases in a closed system until the equilibrium state is reached. Part of the textbooks of statistical mechanics and articles on the foundations of statistical physics aim at deriving these probabilistic rules from a microscopic deterministic theory, either classical mechanics (where the atoms of a gas are treated as little hard balls) or quantum mechanics. A close look at these derivations, however, reveals that they in fact always put in by hand what they want to get out: randomness. The initial state of the system must be a 'random' state among all those that cannot be distinguished from each other when specifying them only with finite precision.
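The fundamental theorem just mentioned can be written down explicitly (the standard microcanonical postulate; textbook material, not specific to this article):

```latex
% Equal a priori probabilities: for an isolated system in equilibrium
% with \Omega(E) accessible microstates at energy E,
\[
  p_i = \frac{1}{\Omega(E)}, \qquad i = 1, \dots, \Omega(E),
\]
% which is the distribution that maximizes the entropy
\[
  S = -k_B \sum_i p_i \ln p_i = k_B \ln \Omega(E).
\]
% Almost all of equilibrium statistical mechanics follows from this one
% probabilistic assumption -- which is itself nowhere contained in the
% deterministic microscopic equations of motion.
```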
The apparent randomness in the future time evolution then follows from this hidden, unperceivable initial randomness. This means that the randomness of statistical mechanics is not reduced to a deterministic 'fundamental' theory, but is only moved back to the initial conditions and hidden there. Of course, a truly deterministic world would not leave us the freedom to say that the precise initial state does not matter because the real one can be any of them. The initial state would be the state it is and not care about our ignorance of finer details. It is amazing that nature conforms to our ignorance and behaves as if there was nothing special to the initial state that would later lead to surprising effects in the time evolution. The natural conclusion, which however goes against the reductionist agenda, would be to say that the mathematical concept of infinite precision underlying the equations of the 'fundamental' theories has no support from empirical science. In fact, when there are no specific top-down effects since the system is isolated, the system becomes maximally indifferent as to the state in which it tends to be. This is what the second law of thermodynamics states. This topic is discussed in more detail in publications by Nicolas Gisin [9, 10] and by myself [11, 12].

Interestingly, there is a close relation between the quantum measurement problem and the problem of relating statistical mechanics to quantum mechanics: quantum measurement and statistical mechanics both involve probabilities and irreversibility. Both deal with systems of a macroscopic number of particles (in the case of quantum measurement this is the measurement apparatus). Concordantly, the theories dealing with the two problems involve similar ideas and types of calculations: they are based on decoherence theory, which explains why quantum mechanical superpositions vanish when a quantum particle interacts with a macroscopic number of other particles.
These calculations – not surprisingly – again need to invoke random initial states. In contrast to theories that 'derive' statistical mechanics from classical mechanics, these random initial states are however not sufficient in order to 'derive' statistical mechanics from quantum mechanics, because the calculations give all possible stochastic trajectories simultaneously instead of picking one of them for each realization. So we arrive again at the point emphasized by A. Leggett: Quantum mechanics is not consistent with the physics of macroscopic, finite-temperature objects. This means that there must be limits of validity to quantum mechanics.

The foregoing quotations and discussions have made clear that a full reduction is never done in practice, and that it is impossible for several reasons. The more trivial reason is limited computing power, not just at the present time but for all future, since the calculation of the time evolution of the quantum state of as few as 1000 particles would require more information than is contained in the universe. This means that the belief in full reduction is a metaphysical belief, as it can never, even in principle, be tested. In contrast, physics is an empirical science rooted in what can be measured and observed.

But beyond this, I think there are good reasons why reductionism even when taken as a metaphysical belief is wrong. As most clearly stated by A. Leggett, there is a logical incompatibility between quantum mechanics, which leads to superpositions of macroscopic objects being in different locations, and the observation that macroscopic objects are localized in space.

It is my impression that in fact all the effective theories and models and approximations made in order to obtain the properties of macroscopic systems involve assumptions and steps that are in contradiction with the supposedly fundamental theory.
For instance, the derivation of the phonon (lattice vibration) spectrum of solids starts by separating the equation for the electrons from that for the nuclei by using the so-called Born-Oppenheimer approximation. This approximation is in fact a mixture of quantum theory and classical theory, as it assumes that the atomic nuclei are localized in space and not in superpositions of different locations. It is also the basis of quantum chemistry and the widely used density functional theory. A nice discussion of this can be found in the book [3]. Similarly, other theories in condensed matter make assumptions and approximations that deviate from the linear, deterministic, microscopic Theory of Everything. It appears to me that they all put in contextual information that is not intrinsic to the Theory of Everything and that contains elements from classical physics and oftentimes also from statistical physics.

And there are many more reasons why full reductionism is wrong, as listed in the following subsections.
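The Born-Oppenheimer step mentioned above can be sketched as follows (a standard textbook formulation):

```latex
% The full molecular wave function depends on the electron coordinates r
% and the nuclear coordinates R. Because nuclei are thousands of times
% heavier than electrons, one factorizes
\[
  \Psi(r, R) \approx \chi(R)\, \psi_{\mathrm{el}}(r; R),
\]
% where the electronic wave function is computed with the nuclei clamped
% at fixed positions R,
\[
  H_{\mathrm{el}}(R)\, \psi_{\mathrm{el}}(r; R)
  = E_{\mathrm{el}}(R)\, \psi_{\mathrm{el}}(r; R),
\]
% and the eigenvalue E_el(R) then serves as a potential-energy surface
% for the nuclear motion \chi(R). Treating R as a well-defined parameter
% is precisely the point where the classical localization of the nuclei
% is put in by hand.
```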
The mental picture that many people have when they think of emergence is that there are first the parts, and then they get together and form the whole. But this picture is only half the truth. These parts would not be there without a larger context that permits their existence and determines their properties. Which types of elementary particles exist in the universe depends on the properties of the quantum vacuum, namely its symmetries and degrees of freedom. Whether these particles can combine to create larger objects is again determined by the context: In the early universe, the temperature and density of the universe determined which types of objects existed in it: quarks and gluons, or nuclei and electrons, or atoms and radiation, or stars and galaxies and planets.

On smaller scales, one observes the same extent of context dependence, and many examples are listed in the book by G. Ellis [13]: The lifetime of a neutron depends on whether it is a free particle or part of a nucleus. In a crystal, the presence of the crystal structure permits the existence of phonons, and the symmetries of the crystal determine their properties. The effective mass of an electron depends on the band structure of the metal in which it is. Whether a substance is liquid or solid depends on the environmental conditions, such as temperature and pressure.
Full reductionism could only be correct if the supposedly fundamental laws were extremely accurate. Otherwise, even minute imprecisions could become magnified in a macroscopic system to the extent that the fundamental theory cannot correctly predict the properties of the macroscopic system. But we know that our present theories are not fully exact: They are idealizations that leave aside many influences that affect a system. Newtonian mechanics, for instance, ignores the effect of friction, or includes it in a simple way, which is neither exact nor derived from a microscopic theory. A large part of thermodynamics is based on local equilibrium, which is not an exact but only an approximate description. Quantum field theory is burdened with exactly the same problems as nonrelativistic quantum mechanics, namely the discrepancy between a unitary, deterministic time evolution applied after preparation of the initial state and before measurement of the final state, and the probabilities and nonlinear expressions featuring in calculations of cross sections and transition rates. Furthermore, it could not yet be harmonized with Einstein's theory of general relativity, which describes gravity.

Batterman argues convincingly that physics theories are asymptotic theories that become exact only in an asymptotic limit where a quantity goes to zero or infinity [14]. Newtonian mechanics is a good example of how the applicability of a theory depends on certain quantities being small or large: The velocity of light must be large compared to the velocities of the considered objects (otherwise one needs to use the theory of special relativity), energies must be so large that quantum effects can be ignored (otherwise one needs quantum mechanics), and distances must be small compared to cosmic distances on which the curvature of space is felt (otherwise one needs general relativity).
In earlier times, there was a widespread belief that Newtonian mechanics is an exact theory, only for physicists to realize in the last century that on all those scales that could not be explored before (such as the very small, the very large, and the very fast) Newtonian mechanics becomes invalid, and that it is a very good approximation, but not exact, on those scales that had been explored. We can expect that our present theories will also turn out to be inappropriate when new parameter ranges, which we could not explore previously, become accessible to experiments. In fact, it appears impossible to have exact, comprehensive microscopic laws that govern everything that happens, and to have at the same time a complete insensitivity to these microscopic laws in systems that are determined by higher-order principles.

5.4 The microscopic world is not deterministic
One of the main shocks caused by quantum mechanics is the insight that microscopic nature is fundamentally indeterministic, as is for instance visible in radioactive decay, where nothing in the state of an atom allows one to predict when it will decay. Only the half-life can be known and calculated from a microscopic theory. The same holds for the quantum-mechanical measurement process, where the experiment gives one of the possible outcomes with a probability that can be calculated using the rules of quantum mechanics, but the process itself is stochastic, with nothing in the initial state determining which of the outcomes will be observed.

In order for full reductionism to hold, the microscopic theory must be deterministic. Only then does the microscopic theory determine everything that happens. Otherwise the microscopic theory can at best give probabilities for the different possible events. When dealing with a system of many particles, one likely outcome of such stochastic dynamics is ergodicity, where the system goes to an equilibrium state that is characterized by a stationary probability distribution. This is what happens, for instance, when a thermodynamic system such as a gas reaches its equilibrium state, which has maximum entropy. The macroscopic state variables characterizing this equilibrium state, such as volume or pressure, are independent of the initial microscopic state and are essentially determined by the constraints imposed from the outside. Thermodynamic systems are in fact a nice example of top-down causation, as they do essentially nothing but adjust to whatever state is imposed on them by manipulations from the outside, such as volume change or energy input.

But even when chance does not lead to ergodicity, top-down causation is involved. When a quantum measurement event happens, the measurement apparatus determines the possible types of measurement outcomes (for instance, whether the z component or the x component of the spin is measured).
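The role of the apparatus in fixing the set of possible outcomes can be spelled out in a few lines. The sketch below (my illustration; the function name is not from the text) applies the Born rule to one and the same spin state measured in two different bases: in the z basis the outcome 'up' is certain, while the choice of the x basis makes two outcomes possible, each with probability 1/2.

```python
import math

def born_probabilities(state, basis):
    """Born rule: the probability of each outcome is |<b|state>|^2 for basis vector b."""
    return [abs(sum(b_i.conjugate() * s_i for b_i, s_i in zip(b, state))) ** 2
            for b in basis]

up_z = (1.0, 0.0)                      # spin prepared 'up' along z
z_basis = [(1.0, 0.0), (0.0, 1.0)]     # apparatus measures the z component
s = 1.0 / math.sqrt(2.0)
x_basis = [(s, s), (s, -s)]            # same state, apparatus measures x instead

print(born_probabilities(up_z, z_basis))  # outcome 'up' certain: probabilities 1 and 0
print(born_probabilities(up_z, x_basis))  # two possible outcomes, probability 1/2 each
```

The state vector alone fixes nothing; which outcomes are possible at all is set by the measurement context, i.e. by the choice of basis.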
When an excited atom goes to its ground state by emitting a photon, it can do so only because the surrounding medium, the quantum vacuum, can take up that photon. When the atom is enclosed in a small cavity that does not support a mode at the photon's wavelength, the photon cannot be emitted. It thus appears that all instances of quantum chance are in fact strongly dependent on top-down causation. A quantum object by itself, when it is carefully isolated from interaction with the rest of the world, evolves according to the 'fundamental', deterministic, and linear quantum-mechanical equations.

Karl Popper made the interesting suggestion that chance at the lower level is necessary for top-down causation from the higher level [15], and he has been criticized for it [16]. However, I think that he is right. Only when the entities at the lower level are not fully controlled by the microscopic laws can they respond to the higher level. It is often argued that random changes are as little susceptible to top-down effects as are deterministic changes. But this is based on the wrong premise that stochasticity is an intrinsic property of the lower-level system, while it arises in fact from the interaction of the lower-level constituents with the larger context.

5.5 Emergent properties are insensitive to microscopic details
As mentioned above in Sec. 4.2, Laughlin has repeatedly emphasized that emergent properties are insensitive to microscopic details. It is the higher-order principles that determine the behavior of the systems he discusses. There are indeed many examples where systems that are microscopically different show the same macroscopic behavior and are described by the same mathematical theory. For instance, the phase transition from a paramagnet to a ferromagnet below the Curie temperature is described by the same mathematics as the Higgs mechanism that successfully explains how particles obtain their mass. Furthermore, within one type of system, for instance a ferromagnet made of iron, the mathematical description in the form of macroscopic or effective variables is independent of the precise microscopic arrangement of atoms and defects. There exist many microscopic realizations of the same macroscopic state. This is also emphasized in the book by George Ellis when he mentions the concept of 'multiple realizability' of a macroscopic state by microscopic states. When, additionally, the time evolution can be expressed in terms of the macroscopic variables alone, the causal processes can be described at the level of the macrostate. A natural conclusion is that in such cases causal processes do indeed occur at the level of the macrostates and that the microstates merely adjust to the constraints imposed by the macrostate.
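Multiple realizability can be quantified for a toy magnet. The sketch below (my own illustration, not a model from the text) counts how many configurations of N Ising spins realize a given total magnetization M = n_up − n_down: the fully magnetized macrostate has a single microscopic realization, whereas M = 0 for N = 100 spins is realized by roughly 10^29 distinct microstates.

```python
from math import comb

def multiplicity(n_spins, magnetization):
    """Number of Ising spin configurations with the given total magnetization.

    For spins s_i = +/-1, M = n_up - n_down, hence n_up = (n_spins + M) / 2,
    and the number of microstates is the binomial coefficient C(n_spins, n_up).
    """
    n_up, remainder = divmod(n_spins + magnetization, 2)
    if remainder != 0 or not 0 <= n_up <= n_spins:
        return 0  # magnetization not realizable with this number of spins
    return comb(n_spins, n_up)

for M in (100, 50, 0):
    print(f"M = {M:3d}: {multiplicity(100, M):.3e} microstates")
```

The macroscopic variable M thus carries far less information than the microstate, which is exactly why many different microstates can stand behind one and the same macroscopic description.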
So far, we have mainly addressed equilibrium systems, which can be cut off from their environment without losing their properties. In fact, this is an approximate statement, as no system can be completely cut off from its environment; the best one can do is to put a system into a closed box or lab with no directed input or output of energy or matter. The typical solid-state systems discussed by the above-cited Nobel laureates remain what they are under such conditions, at least on time scales relevant for experiments. In the very long run, they will be changed by surface reactions, by radioactivity, and ultimately by the sun turning into a red giant and burning everything on earth. In contrast to these relatively stable systems, open (or dissipative) systems require an ongoing input and output of energy and/or matter in order to remain what they are. We have given in Section 2 the examples of convection patterns, oscillating reactions, and living organisms. For these systems, the idea that they are determined by their parts and the interactions of their parts is completely wrong, as they are what they are only in contact with their environment: convection patterns require an input of heat at one end and a cooling surface at the other end; oscillating reactions can only be sustained when certain reactants are continually supplied and certain reaction products removed; living beings need to breathe and feed, and they respond in a complex way to cues from their environment. Since all these systems exist only by being continually sustained by their environment, they cannot be understood merely in terms of their parts and the interactions of their parts. The emergent features of these systems are therefore clear-cut cases of top-down causation.
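The dependence of dissipative structures on continual throughput can be caricatured by the simplest possible rate equation (entirely my illustration, not a model from the text): a quantity x that decays on its own and is held at a nonzero steady state only as long as an external supply keeps feeding it.

```python
def evolve(x, inflow, decay=0.2, dt=0.1, steps=400):
    """Euler integration of dx/dt = inflow - decay * x."""
    for _ in range(steps):
        x += (inflow - decay * x) * dt
    return x

# With a steady supply, any initial value relaxes to inflow / decay = 5.0 ...
print(evolve(x=0.0, inflow=1.0))
# ... while without the supply the very same system decays toward zero.
print(evolve(x=5.0, inflow=0.0))
```

Nothing hangs on the specific numbers; the point is that the stationary state exists only relative to the environment that maintains the inflow.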
Answers to objections
When the issue of strong emergence in physics is discussed, a variety of objections is raised; these shall be dealt with in the following.
Quite a few scientists hope that even if our present microscopic theories are not yet the final, correct theories, the progress of physics will ultimately lead to such final theories. If such theories are ever found, they must achieve a lot: they must solve the quantum measurement problem, and they must also establish a relation between general relativity and quantum physics. In the light of the arguments presented in this article, it seems impossible that such a theory exists. The top-down effects of the larger, macroscopic context on the microscopic constituents cannot be captured by a purely microscopic theory.

Some people argue that even if the ultimate theory might be unknowable, nature can nevertheless be governed by basic laws which are known to us only in approximate versions. This is a valid philosophical position, but it appears to be rooted more in a metaphysical commitment than in empirical evidence. This commitment is to physicalism and to causal closure. Both are very restrictive assumptions that can be doubted on philosophical grounds. The strongest arguments against physicalism are based on human consciousness and are brought forward by authors such as Brigitte Falkenburg [17], Thomas Nagel [18], or Markus Gabriel [19]. All the arguments brought forward by George Ellis in favor of top-down causation are also arguments against causal closure. These arguments show that material systems are susceptible even to non-material influences such as goals, ideas, or man-made conventions.
In the 19th and early 20th centuries, the British emergentists argued in favor of strong emergence based on chemistry and biology [20, 16]. They could not imagine that chemistry and biology obey the laws of physics, and they assumed that complex objects are subject to different laws. But after the successes of quantum chemistry at explaining the periodic table and calculating molecular structures, and the success of biology at reducing inheritance to the properties of the DNA molecule, these emergentist positions fell into discredit. By analogy, those who hold strong emergentist views are told that the progress of science will probably prove them wrong: if we cannot explain a macroscopic feature in terms of a microscopic theory today, it might become possible in the future.

However, the arguments presented in this article are not arguments from ignorance, but from the very nature of the systems under study. Even the claimed reduction of chemistry or biology to (quantum) physics is incomplete. It is a partial reduction that invokes a collection of models and arguments that includes features from the quantum world as well as from the classical world, as explained in Sec. 5.1. In particular, life is so dependent on its environment that the reductionist enterprise is doomed to failure on grounds of principle.

6.3 There are fully reductionist explanations for the quantum-classical transition and the second law of thermodynamics
This is an expert discussion that goes into the details of the theories offered. As I have tried to convey above, all these theories need to invoke additional concepts beyond the 'fundamental' Theory of Everything. In one form or another, they all rely on some type of randomness of initial states or environmental states. Furthermore, since quantum mechanics is a linear theory, one cannot avoid the resulting superpositions of macroscopic states. Pointing out that these superpositions can look like a classical combination of the different possible outcomes with their associated probabilities does not fully solve the problem, as nature realizes in each instance only one of the possible outcomes. Of course, there are interpretations of quantum mechanics that deal with this issue (such as the many-worlds interpretation, the statistical interpretation, consistent histories, or the relational interpretation), but all of them give up the goal of science of accounting for an objective, observer-independent reality with its contingent, particular history. I am not willing to abandon this goal, and I think that abandoning it hampers the progress of science and distracts from the open problems that await solutions.
The idea behind this objection is that the context that produces the top-down effects is itself composed of atoms and can be described by a microscopic theory. Instead of having the description in terms of a system and a context, one could describe both system and context microscopically.

However, apart from being impossible in practice, there are several reasons why this is not possible in principle: first, there is no isolated system, since every system emits thermal radiation to the environment, which in turn passes it on to open space. Second, every system is exposed to the influence of gravitational forces, which cannot be shielded by any means. Third, having a closed system leads again to all the problems related to interpreting quantum mechanics.

Some authors hold that the universe as a whole is a closed system and can therefore be described by a microscopic theory for all its particles and their interactions. However, the driving force behind everything that happens in the universe is the expansion of the universe, starting from a very special initial state. Neither the initial state nor the expansion results from the interaction of the particles, and therefore the claim that the universe is determined by its parts and their interactions is wrong.
To conclude, I see many reasons to reject the view that the world of physics is causally closed, with everything being determined bottom-up by fundamental microscopic laws. As stated in the book by George Ellis: the lower (microscopic) level enables everything and underlies everything, but does not determine everything. The higher hierarchical levels have an important say in what happens in nature. Anthony Leggett writes in his above-cited article that the non-reductionist view is a minority view among professional physicists. In fact, I am not so sure about this. Clearly, the majority of people who write and talk on such foundational topics have a more-or-less reductionist view. However, when I talk to colleagues, it appears to me that many of them are aware that the world described by physics is not a monolithic block that is controlled by a small set of rules. Maybe those who do not hold a simple, clear view but think that physics is more complex usually don't write papers on this topic. This is one of the reasons why I decided to write this paper.
References

[1] Philip W. Anderson. More is different. Science, 177(4047):393–396, 1972.
[2] Hans Primas. Chemistry, Quantum Mechanics and Reductionism: Perspectives in Theoretical Chemistry, volume 24. Springer-Verlag, Berlin Heidelberg, 2013.
[3] Sergio Chibbaro, Lamberto Rondoni, and Angelo Vulpiani. Reductionism, Emergence and Levels of Reality, Ch. 6. Springer, 2014.
[4] Edit Matyus. Pre-Born-Oppenheimer molecular structure theory. arXiv preprint arXiv:1801.05885, 2018.
[5] Robert B. Laughlin and David Pines. The theory of everything. Proceedings of the National Academy of Sciences of the United States of America, pages 28–31, 2000.
[6] Robert B. Laughlin. A Different Universe: Reinventing Physics from the Bottom Down. Basic Books, 2008.
[7] Anthony J. Leggett. On the nature of research in condensed-state physics. Foundations of Physics, 22(2):221–233, 1992.
[8] Barbara Drossel and George Ellis. Contextual wavefunction collapse: An integrated theory of quantum measurement. New Journal of Physics, 20(11):113025, 2018.
[9] Nicolas Gisin. Time really passes, science can't deny that. In R. Renner and S. Stupar, editors, Time in Physics, pages 1–15. Birkhäuser, 2017.
[10] Nicolas Gisin. Indeterminism in physics, classical chaos and Bohmian mechanics. Are real numbers really real? arXiv preprint arXiv:1803.06824, 2018.
[11] Barbara Drossel. On the relation between the second law of thermodynamics and classical and quantum mechanics. In Why More Is Different, pages 41–54. Springer, 2015.
[12] Barbara Drossel. Ten reasons why a thermalized system cannot be described by a many-particle wave function. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 58:12–21, 2017.
[13] George Ellis. How Can Physics Underlie the Mind? Top-Down Causation in the Human Context. Springer, Heidelberg, 2016.
[14] Robert W. Batterman. The Devil in the Details: Asymptotic Reasoning in Explanation, Reduction, and Emergence. Oxford University Press, 2001.
[15] Karl R. Popper and John C. Eccles. The Self and Its Brain: An Argument for Interactionism. Springer, Berlin Heidelberg, 1977.
[16] Timothy O'Connor and Hong Yu Wong. Emergent properties. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, summer 2015 edition, 2015.
[17] Brigitte Falkenburg. Mythos Determinismus: Wieviel erklärt uns die Hirnforschung? Springer-Verlag, Berlin Heidelberg, 2012.
[18] Thomas Nagel. Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False. Oxford University Press, 2012.
[19] Markus Gabriel. I am Not a Brain: Philosophy of Mind for the 21st Century. John Wiley & Sons, 2017.
[20] Brian McLaughlin et al. The rise and fall of British emergentism.