Regularizing (away) vacuum energy
Adam Koberinski∗†

Forthcoming in Foundations of Physics
Abstract
In this paper I formulate Minimal Requirements for Candidate Predictions in quantum field theories, inspired by viewing the standard model as an effective field theory. I then survey standard effective field theory regularization procedures, to see if the vacuum expectation value of energy density (⟨ρ⟩) is a quantity that meets these requirements. The verdict is negative, leading to the conclusion that ⟨ρ⟩ is not a physically significant quantity in the standard model. Rigorous extensions of flat space quantum field theory eliminate ⟨ρ⟩ from their conceptual framework, indicating that it lacks physical significance in the framework of quantum field theory more broadly. This result has consequences for problems in cosmology and quantum gravity, as it suggests that the correct solution to the cosmological constant problem involves a revision of the vacuum concept within quantum field theory.

1 Introduction

The cosmological constant problem has been a major focus of physicists working on theories of quantum gravity since at least the mid-1980s. The problem originates with unpublished remarks by Pauli, while interest in the problem increased in the 1980s due to inflation. Weinberg (1989) famously laid out the state of the field in the late 1980s, and used anthropic considerations to place bounds on the possible values of a cosmological constant in the Einstein field equations. The problem arises in a semiclassical merging of quantum field theory (QFT) and general relativity, where the stress-energy tensor for classical matter is replaced by an expectation value of the stress-energy tensor predicted by a particular model of QFT. When one does this, the vacuum expectation values of energy densities for each field have the same form as a cosmological constant term (i.e., a constant multiple of the metric), and so should contribute to the observed cosmological constant. However, when one takes a standard “prediction” of the combined vacuum energy densities from a model of QFT, the result is dozens of orders of magnitude larger than what is observed. Candidate solutions to the problem attempt to introduce new physics to reconcile the semiclassical prediction with observation; the predominant view in the physics literature is that an acceptable candidate for a theory of quantum gravity must solve the cosmological constant problem. Though many toy models have been proposed, there is no agreed upon solution pointing the way to the correct theory of quantum gravity.

The stubborn persistence of the cosmological constant problem provides motivation for a more detailed philosophical analysis of its assumptions. Assuming the “old-fashioned” view of renormalization, Koberinski (2017) breaks down the steps required to formulate the problem, and criticizes the justification behind each step. One of these steps involves the assumption that models of QFT predict the vacuum expectation value of energy density, ⟨ρ⟩. The prediction is taken to indicate that ⟨ρ⟩ is a physically significant quantity in the standard model. However, the problem changes shape when one accounts for the fact that the standard model is widely believed to be an effective field theory (EFT), with a built-in energy scale at which it breaks down. The EFT approach to QFTs makes sense of the old requirement of renormalizability, and uses the renormalization group equations to understand renormalization non-perturbatively.
∗ Department of Philosophy, University of Waterloo, Waterloo, ON N2L 3G1, Canada
† [email protected]
[Footnote: For recent philosophical discussions of EFTs, see Wallace (2018), Williams (2018), J. D. Fraser (2018), Ruetsche (2020), and Koberinski and Smeenk (2021).]
As is well known, QFTs require renormalization in order to generate finite predictions. Renormalization consists of two steps: first, one introduces regulators to replace infinite quantities with quantities depending on an arbitrary parameter. The regulator μ must be such that (i) the regularized terms are rendered finite for all finite values of μ, and (ii) the original divergent term is recovered in the limit μ → ∞. Next, one redefines some set of couplings such that the physically relevant value is independent of the regulator. Then the regulator is smoothly removed and the renormalized quantity remains finite. We say a model in QFT is renormalizable if all of its S-matrix elements can be made finite with a finite number of renormalized parameters. Even in a renormalizable model, vacuum energy density can only be regularized, but not fully renormalized. Since vacuum energy density is not a renormalizable quantity and plays no role in the empirical success of the standard model, Koberinski (2017) argued that one should not treat any regulator-dependent value as a valid candidate prediction.

If, instead of predicting a value for ⟨ρ⟩, we simply expect the standard model to accommodate it as empirical input, the failure of naturalness prevents this weakened desideratum. In quantum electrodynamics (QED), for example, the electron mass and charge are renormalized to make the theory predictive. The theory takes these quantities as empirical inputs and therefore does not predict their values. Nevertheless, mass and charge are physically significant quantities in QED, necessary to the empirical success of the theory as a whole. Unfortunately, ⟨ρ⟩ cannot be input as an empirical parameter in the same way, due to its radiative instability order by order in perturbation theory. Further, since it plays no role in the empirical success of the standard model, there is little reason for ⟨ρ⟩ to play a central role analogous to mass and charge. Thus, if QFTs don’t predict its value, it is best to understand vacuum energy density as outside their domain, and therefore not physically significant to QFT.

In light of the EFT view of the standard model, full renormalizability loses importance. If the standard model is an EFT, then (under the standard interpretation) it comes equipped with a physically significant cutoff scale and an infinite set of coupling constants consistent with the symmetries of the fields. The new couplings with mass dimension greater than four (in four dimensional spacetime) will be nonrenormalizable, but will have coupling constants that are suppressed by the momentum cutoff: αᵢ = gᵢ/μⁿ. The explicit presence of the regulator in these terms is not a problem, since the regulator μ is much larger than the energy scales for which the effective theory is used. The renormalization group flow indicates that, at energies E ≪ μ, only the renormalizable terms have any appreciable effect. However, at higher energies, one may indeed see small deviations from the purely renormalizable terms, and these may be due to higher-order terms. Therefore, suitably regularized, nonrenormalizable terms can be physically significant when suppressed appropriately by a regulator. Renormalizability is no longer a requirement, so long as the effects of nonrenormalizable terms become negligible at low energies.

If a suitably regularized vacuum energy density meets the requirements of a prediction in the EFT framework, then perhaps one is justified in claiming that the standard model predicts its value.
There exist several regularization schemes for QFTs, and in general these will not agree on the algebraic form for any quantities until the renormalization procedure has been completed. Inspired by the EFT approach, and under the view that regulators are arbitrary, a suitable weakening of the requirement of renormalizability must satisfy the following requirements:
Minimal Requirements for Candidate Predictions:
In order for a quantity within a model of QFT to count as a candidate prediction of some corresponding physical quantity, it must be the case that: (1) the quantity is largely insensitive to the regularization procedure; and (2) it is largely insensitive to changes to the value of the regulator.
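Before turning to the motivations for these requirements, a toy example (my own schematic illustration, not taken from the paper's sources) may help distinguish a merely regularized quantity from a renormalized one. Consider the logarithmically divergent integral

$$I(m) = \int_0^\infty \frac{dk}{k+m}.$$

A momentum cutoff Λ gives

$$I_\Lambda(m) = \ln\frac{\Lambda+m}{m} = \ln\Lambda - \ln m + O(m/\Lambda),$$

while an analytic regulator (a one-dimensional analogue of dimensional regularization) gives

$$I_\epsilon(m) = \int_0^\infty \frac{k^{-\epsilon}}{k+m}\,dk = \frac{\pi\,m^{-\epsilon}}{\sin\pi\epsilon} = \frac{1}{\epsilon} - \ln m + O(\epsilon).$$

The two regularized values violate both requirements: they differ in algebraic form (1/ε versus ln Λ), and each is sensitive to its regulator. But a renormalized combination such as the difference I(m₁) − I(m₂) = ln(m₂/m₁) agrees in both schemes and is independent of Λ and ε, satisfying both requirements. The trouble with ⟨ρ⟩, on the argument developed below, is that no analogous regulator-independent combination is available nonperturbatively.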
These requirements are motivated as follows. Violation of (1) would entail that different regularization schemes might be physically meaningful in that they encode different ways of parameterizing/forgetting our ignorance of high-energy physics. ⟨ρ⟩ violates both minimal requirements, and this is best understood in the context of EFTs.

[Footnote: Koberinski and Smeenk (2021) provide a more sustained argument that the cosmological constant problem signals a failure of naturalness for vacuum energy, in QFT and in general relativity as an EFT. The solution proposed there is to embrace new heuristics in theory construction, and to accept the limitations of the EFT framework for understanding fundamental physics.]

[Footnote: Using precision tests of the standard model, one may find deviations from the predictions made using only the renormalizable terms. Examples of possible experimental tests include the anomalous magnetic moment of the electron or muon (Aoyama et al. 2012; Koberinski and Smeenk 2020; Bennett et al. 2004) as well as the fine structure of positronium and muonium (Gurung et al. 2020). In all of these cases, small deviations from the predictions made using the renormalizable standard model may be accounted for with higher-order couplings, suppressed by the physical cutoff scale.]

Physically significant quantities in a theory must be consistently described by that theory; if the standard model cannot provide a univocal, reasonable candidate prediction for the expectation value of vacuum energy density, then that failure is evidence that ⟨ρ⟩ is not physically significant in the standard model. Borrowing a common example of classical fluid mechanics from Wallace (2018), we know that EFTs cannot predict all possible quantities relevant to the low-energy, macroscopic physics. In fluid mechanics, the formation of droplets and shock waves depends on the microphysical details of the fluid. We cannot use the effective theory of fluid mechanics to predict such behaviour, as the separability of scales breaks down. The underlying microphysical theory is then needed. Droplet formation and shock waves are physically real phenomena described by the microphysics, though fluid mechanics fails to describe them. I claim that the vacuum energy density ⟨ρ⟩ is a similar quantity that falls outside the domain of QFT. Vacuum energy may be a physically real phenomenon, and some future theory may describe it, but it is beyond the scope of our best QFTs. The EFT framework helps to make this point more salient, because EFTs are explicitly meant to be limited in scope of applicability. The failure of ⟨ρ⟩ to satisfy either Minimal Requirement excludes it as a candidate for physical significance in QFT. Thus we should think of the cosmological constant problem as highlighting one limitation of our current best EFT. Since we are currently ignorant of the underlying microphysical theory to which the standard model is effective, there is little we can say about vacuum energy at present. In a separate paper (Koberinski and Smeenk 2021) I provide more general arguments that would lead one to a similar conclusion, and extend them to the semiclassical merging of QFT and general relativity. My goal here is to show that, from within QFT as an EFT, ⟨ρ⟩ fails to meet the Minimal Requirements for a candidate prediction, and vacuum energy is therefore ill-defined until the future microphysical theory is known.

Though this conclusion is easiest to see within the EFT framework, the argument extends to QFT more broadly.
Koberinski (2017) provides arguments for this conclusion in the context of the standard model as a fully renormalizable standalone QFT, and in Section 3 I argue that more rigorous extensions of QFT eliminate ⟨ρ⟩ from their conceptual framework, thereby supporting the conclusion that vacuum energy falls outside the domain of QFT, in any of its guises.

The strategy for the remainder of the paper is as follows. I provide a conceptual outline of two major regularization and renormalization procedures that one might apply to extract a finite prediction of ⟨ρ⟩ from models of QFT, and discuss ways in which vacuum energy is removed in more rigorous local formulations of QFT. In Sec. 2 I consider the mainstream approaches to regularizing the standard model: lattice regularization and dimensional regularization. In Sec. 3 I consider some more mathematically rigorous approaches to QFT, and the ways that regularization and renormalization are treated there. In each case, I arrive at a candidate value of ⟨ρ⟩ derived using that regularization scheme. Finally, in Sec. 4, I compare the results to see if they satisfy the above requirements. As I will show below, purely regularized values of ⟨ρ⟩ satisfy neither Minimal Requirement, and we have no reason to accept a one-loop renormalized quantity as a candidate prediction either. Further, rigorous extensions of QFT that aim to provide a local description of fields remove the quantity ⟨ρ⟩ entirely, suggesting that vacuum energy falls outside the scope of QFT and any merger of QFT and general relativity that emphasizes local covariance.

[Footnote: By physical significance of vacuum energy density, I mean the inference from a vacuum expectation value of an energy density term within a model of QFT to a real physical quantity onto which that value maps. One can believe that there is some real physical quantity of a suitably averaged value of vacuum energy density, to which our best physical theories don't accurately map (cf. Schneider 2020). The arguments in this paper undermine taking values from QFT to map onto the world; they say nothing about whether vacuum energy density exists. Undermining the physical significance of vacuum energy density for QFTs means that we should not trust our best QFTs to accurately capture the relevant physics. Continuing the process discussed in Saunders (2002), a further revision of the vacuum concept in QFT may be required, or perhaps even a full theory of quantum gravity.]

2 Regularizing ⟨ρ⟩

Standard cutoff regularization schemes in QFT require the inclusion of two momentum cutoffs: a lower bound to regulate the infrared divergences, and an upper bound to regulate the ultraviolet divergences. In position space, this is equivalent to defining the theory on a four-dimensional lattice in a box. Under the orthodox reading of EFT, the upper bound gains physical significance as the scale at which the effective theory breaks down. This view has recently been criticized (Rosaler and Harlander 2019), but is the dominant view of particle physicists and is becoming more mainstream amongst philosophers (Wallace 2011; Williams 2017; J. D. Fraser 2017). Below (Sec. 2.1) I will outline the textbook approach to cutoff regularization in more detail, and discuss the modifications made to this formalism by the EFT view.

Historically, dimensional regularization was the favoured scheme for renormalizing Yang-Mills gauge models of QFT, like the electroweak model and quantum chromodynamics.
Though it has received less philosophical attention due to its more formal nature, dimensional regularization is a powerful tool, and one that maintains Lorentz invariance. If one hopes to have a regularized candidate prediction of the vacuum energy density from the standard model, it should obey the correct equation of state that is required by the cosmological constant. Dimensional regularization gives this equation of state and Lorentz invariance, and the one-loop renormalized value ⟨ρ̃_dim⟩ (Eq. (15)) calculated using dimensional regularization thus provides the best claim to a prediction of vacuum energy density from within the standard model. Thus, if any orthodox quantity serves as a candidate prediction for vacuum energy density, it is ⟨ρ̃_dim⟩. However, the instability of a one-loop renormalized vacuum energy density under radiative corrections indicates that naturalness fails here, and that vacuum energy may be sensitive to the details of high-energy physics.

2.1 Cutoff and lattice regularization

For simplicity, I will illustrate the regularization techniques using a free scalar field theory, whose action is

$$S[\phi, J] = -\int d^4x \left( \frac{1}{2}\eta^{\mu\nu}\,\partial_\mu\phi(x)\,\partial_\nu\phi(x) + \frac{1}{2}m^2\phi^2(x) + J(x)\phi(x) \right), \qquad (1)$$

with η^{μν} the Minkowski metric (here written with a (−,+,+,+) signature), and the expression inside the integral is the Lagrangian density 𝓛 for the model, plus source term J(x)φ(x). One can define a particular model of QFT with a built-in set of cutoffs, or one can impose cutoffs on individual expressions as the need arises. The former accords more closely with the EFT view, while the latter was standard in the early history of quantum electrodynamics, and remains standard in most introductory texts. Under the latter view, cutoffs are imposed in order to regulate divergences, and are removed from the renormalized theory. We start with the latter approach to illustrate the algebraic form for expectation values of energy density and pressure.

In the case of calculating the energy density associated with the vacuum state, we are looking for the vacuum expectation value of the Hamiltonian density. For the free scalar model, this is

$$\langle\rho\rangle = \langle 0|\,\mathcal{H}\,|0\rangle = \frac{1}{2}\langle 0|\left( (\partial_t\phi)^2 + \delta^{ij}\partial_i\phi\,\partial_j\phi + m^2\phi^2 \right)|0\rangle. \qquad (2)$$

[Footnote: The lower bound may be interpreted as encoding the fact that QFTs are only used in local regions of spacetime. Imposing some set of boundary conditions for long distances just means that we don't expect the model to apply in all of spacetime.]

[Footnote: For a more detailed analysis of the differences between the two approaches to renormalization, see Williams (2018) and Rivat (2019). The latter argues that EFTs are best understood strictly under cutoff regularization. However, as I show below for the vacuum energy density, many features of QFTs are most easily understood under dimensional regularization.]

Expanding the field φ in terms of creation and annihilation operators, one can calculate this to be (cf. Martin 2012, Sec. IV.A, Eq. 68)

$$\langle\rho\rangle = \frac{1}{2(2\pi)^3}\int d^3k\,\omega_k, \qquad (3)$$

which diverges as k⁴ for large k. Similarly, the pressure associated with the vacuum energy is

$$\langle p\rangle = \frac{1}{6(2\pi)^3}\int d^3k\,\frac{k^2}{\omega_k}. \qquad (4)$$

This is where one can regularize by introducing a momentum cutoff μ, above which one no longer integrates. Doing so, one obtains the following expressions for the energy density and pressure:

$$\langle\rho\rangle = \frac{\mu^4}{16\pi^2}\left[ \sqrt{1+\frac{m^2}{\mu^2}}\left(1+\frac{m^2}{2\mu^2}\right) - \frac{m^4}{2\mu^4}\ln\left( \frac{\mu}{m} + \frac{\mu}{m}\sqrt{1+\frac{m^2}{\mu^2}} \right) \right], \qquad (5)$$

$$\langle p\rangle = \frac{\mu^4}{48\pi^2}\left[ \sqrt{1+\frac{m^2}{\mu^2}}\left(1-\frac{3m^2}{2\mu^2}\right) + \frac{3m^4}{2\mu^4}\ln\left( \frac{\mu}{m} + \frac{\mu}{m}\sqrt{1+\frac{m^2}{\mu^2}} \right) \right]. \qquad (6)$$

There are two things to note here. First, to leading order, both regularized terms depend on the cutoff scale to the fourth power.
This regularization is therefore highly sensitive to what one takes as the cutoff scale, violating Minimal Requirement (2). Under the old approach, one could renormalize ⟨ρ⟩ by introducing counterterms to remove any μ-dependence. Unfortunately, the renormalized term does not carry over in a straightforward way to a field theory with interactions. Though one could simply define ⟨ρ_physical⟩ by subtracting the regulator-dependent terms, in an interacting λφ⁴ theory the coupling between vacuum and gravity will contain contributions proportional to λ, λ², λ³, and so on. If one defines ⟨ρ_physical⟩ to be independent of the cutoff scale at order λ, then equally large (∼ μ⁴) contributions spoil this cancellation at order λ², and so on for higher orders. So the value of ⟨ρ⟩ in Eq. (5) cannot be fully renormalized, and as it stands depends too sensitively on the (supposedly arbitrary) cutoff scale to count as a prediction.
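To make this sensitivity concrete, the following minimal Python sketch (my own illustration; the mass and cutoff values are arbitrary choices, not values from the text) evaluates Eqs. (5) and (6) for nearby choices of μ. Because ⟨ρ⟩ ∼ μ⁴ at leading order, a 1% shift in the cutoff shifts the "prediction" by roughly 4%, and doubling the cutoff multiplies it by roughly 16; the ratio ⟨p⟩/⟨ρ⟩ also comes out near +1/3 rather than the −1 required of a Lorentz-invariant vacuum, previewing the second point below.

```python
import numpy as np

def rho(mu, m):
    # Cutoff-regularized vacuum energy density, Eq. (5)
    root = np.sqrt(1.0 + m**2 / mu**2)
    log = np.log(mu / m + (mu / m) * root)
    return mu**4 / (16 * np.pi**2) * (root * (1 + m**2 / (2 * mu**2))
                                      - m**4 / (2 * mu**4) * log)

def p(mu, m):
    # Cutoff-regularized vacuum pressure, Eq. (6)
    root = np.sqrt(1.0 + m**2 / mu**2)
    log = np.log(mu / m + (mu / m) * root)
    return mu**4 / (48 * np.pi**2) * (root * (1 - 3 * m**2 / (2 * mu**2))
                                      + 3 * m**4 / (2 * mu**4) * log)

m = 125.0  # an illustrative scalar mass in GeV (hypothetical choice)
for mu in (1.0e16, 1.01e16, 2.0e16):  # nearby cutoff choices in GeV
    print(f"mu = {mu:.3g} GeV: rho = {rho(mu, m):.3e} GeV^4, "
          f"p/rho = {p(mu, m) / rho(mu, m):+.3f}")
```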
Second, notice that the ratio ⟨p⟩/⟨ρ⟩ ≠ −1, as one would expect from a Lorentz-invariant vacuum. This is because the cutoff procedure is itself not Lorentz-invariant. In order to obtain a vacuum energy density that respects the Lorentz symmetry and reproduces the equation of state required by a cosmological constant term, one must subtract the leading order μ⁴ terms in each, which is only justified in the context of modified minimal subtraction schemes using dimensional regularization.

The above discussion is framed in the old-fashioned context of ad hoc regularization. What changes when we think of QFTs as EFTs, where the cutoff plays a more direct role? In the EFT framework, a QFT is defined with a built-in UV cutoff. To make the overall theory finite, an IR regulator is often used, though this may be smoothly removed at the end of the calculation to return to a continuum theory. I start with both regulators, which effectively places the field theory on a Euclidean lattice, converting the integrals in the action and over field configurations into discrete sums. For 4D lattice spacing a, placing the model in a hypercube of length L, the generating functional becomes

$$Z[J] = \int^{\mu} \mathcal{D}\phi\,\exp\left( i\int d^4x\,[\mathcal{L}(\phi(x)) + J(x)\phi(x)] \right) \qquad (7)$$

$$\equiv \int \prod_{l=1}^{N} d\phi_l\,\exp\left( i a^4 \sum_{l=1}^{N} [\mathcal{L}(\phi_l) + J_l\phi_l] \right), \qquad (8)$$

where N = (L/a)⁴ and μ = 2π/a. The quantities a and L are built-in ultraviolet and infrared regulators. Once a set of fields is specified, along with the expected symmetries of the model, the Lagrangian is defined to include all terms involving the chosen fields and respecting the symmetries; this means that the Lagrangian is likely to be a formally infinite sum of terms, each multiplied by its own coupling constant. As initially stated, this would be a major problem; though the path integral has been IR and UV regulated, we now have an infinite number of terms in the Lagrangian. There is no a priori reason to expect that the bare coupling parameters decrease for higher-order field contributions, and thus no indication of an appropriate truncation of terms in the Lagrangian.

However, one uses the renormalization group transformations to rewrite the generating functional in terms of a new, lower ultraviolet cutoff μ′ = μ − δμ. One separates the integral over field configurations, $\int^{\mu}\mathcal{D}\phi \to \int^{\mu'}\mathcal{D}\phi_{\mu'} \int_{\delta\mu}\mathcal{D}\phi_{\delta\mu}$, and integrates out the field modes φ_δμ. The amazing feature of the renormalization group is that, when one does this, the new expression for the Lagrangian retains the same form. All of the effects of the field modes above the new cutoff can be absorbed into a redefinition of the coupling constants in the Lagrangian. Since coupling constants will be dimensionful quantities (the Lagrangian density has units of [energy]⁴, and scalar fields have dimensions of energy), redefinitions of couplings involve powers of the new cutoff scale. If the cutoff scale is large compared to energy levels of interest for the effective theory, then higher-order terms in the Lagrangian will be suppressed by the new coupling constants gᵢ → gᵢ/(μ′)ⁿ. In the limit where energy scales of interest are vanishingly small compared to the cutoff, all terms with high powers of fields and their derivatives will be suppressed by inverse powers of the cutoff.
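The suppression claim can be made explicit with standard EFT power counting (a textbook statement, included here for concreteness rather than drawn from the paper). For an operator 𝓞ᵢ of mass dimension Δᵢ in the Lagrangian, the coupling gᵢ must have mass dimension 4 − Δᵢ; writing it in terms of a dimensionless coefficient cᵢ and the cutoff,

$$g_i = \frac{c_i}{\mu^{\Delta_i - 4}}, \qquad c_i \sim O(1),$$

a process at characteristic energy E supplies the compensating dimensions, so the term's contribution to dimensionless amplitudes scales as

$$c_i \left(\frac{E}{\mu}\right)^{\Delta_i - 4}.$$

Terms with Δᵢ > 4 (irrelevant, nonrenormalizable) are suppressed by powers of E/μ ≪ 1, terms with Δᵢ = 4 (marginal) are unsuppressed, and terms with Δᵢ < 4 (relevant) grow in importance at low energies. The vacuum energy term, multiplying the identity operator with Δ = 0, is the extreme relevant case: it scales as (μ/E)⁴, which is the power-counting root of the trouble discussed below.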
Though there is much more to be said about the renormalization group and EFT, there are two major points relevant to the discussion of regularizing vacuum energy. First, one defines a model in EFT with built-in regulators. Renormalization is no longer a primary focus, since the renormalization group techniques indicate the irrelevance of most nonrenormalizable terms. Since regulators are present in the definition of the theory, one needn't worry about regulators appearing in predictions. As long as the predictions do not depend sensitively on the precise value of the cutoff—since the value of the physically meaningful cutoff is unknown until a future successor theory is developed—its presence is not a problem in the EFT framework. Thus, the EFT framework motivates Minimal Requirement (2) discussed in the Introduction. However, the vacuum energy is still a problem, since it depends sensitively on the cutoff—as mentioned above, ⟨ρ⟩ ∼ μ⁴. The problem of renormalization changes dramatically under the EFT view, since the presence of μ in Eq. (5) is not in itself a problem. The momentum cutoff is standardly taken to have physical significance for the future successor theory; there is therefore no reason to renormalize by subtracting the μ⁴ term, and so even an illusory insensitivity to μ is lost.

Second, by defining models of QFT with a built-in lattice scale, issues of Lorentz invariance may lose importance. If the lattice is to be physically significant, then Lorentz invariance of EFTs only holds approximately. Accordingly, one would not expect the vacuum energy density to be exactly Lorentz invariant, and so the concern regarding the wrong equation of state from Eqs. (5) and (6) is less pressing. However, the failure of exact Lorentz invariance would undermine the motivation to subtract off only the μ⁴ term for a one-loop renormalization, and it would be much harder to input the vacuum energy density into the Einstein field equations. If straightforwardly input into the Einstein field equations as is, one would get an entirely different equation of state for the cosmological constant. Given that the EFT framework is predicated on the idea that physics at disparate energy scales separates, it would be curious if a consequence of that framework was that small scale violations of Lorentz invariance implied qualitative changes to physics on cosmological scales. In any case, failure of Lorentz invariance would undermine the standard motivations for the cosmological constant problem, though the presence of an enormous vacuum energy density for the standard model would remain.

[Footnote: The fact that Lorentz invariance is lost if the lattice structure of effective field theories is taken literally should have observable consequences. Incredibly sensitive tests have failed to detect violation of Lorentz invariance at small scales (Mattingly 2005). Though outside the scope of this paper, one might argue that a literal interpretation of the lattice is therefore unmotivated from the point of view of both QFTs and general relativity.]

2.2 Dimensional regularization

Dimensional regularization has historically played an important role in the development of the standard model. ’t Hooft and Veltman (1972) first proved that Yang-Mills gauge models are renormalizable by developing and employing dimensional regularization. The method is often more powerful, since the symmetries of a model—both gauge symmetries and spacetime symmetries—remain intact. It allows for an easier identification of divergences than the momentum cutoff approach, and naturally suggests a minimal subtraction (or, alternatively, modified minimal subtraction) method of renormalization. Finally, this method also removes infrared divergences associated with massless fields without introducing a further regulator. The disadvantage is that a physical interpretation for the regulator is rather opaque; the method is more clearly formal.
In the case of the vacuum energy density one aims to include its expectation value in the Einstein field equations. It is therefore important to ensure that the Lorentz symmetry of the expression is maintained—since it is this feature of ⟨ρ⟩ that justifies its interpretation as a contribution to the cosmological constant. Dimensional regularization is best suited for this purpose. I will outline the regularization technique for vacuum energy for a scalar field. As Martin (2012, Sec. VII) demonstrates, the calculations for fermions and gauge bosons proceed in a similar fashion, though the leading multiplicative coefficients (of O(1)) differ.

The integral for energy density in Eq. (3), in D-dimensional spacetime, becomes

$$\langle\rho\rangle = \frac{\mu^{4-D}}{2(2\pi)^{D-1}}\int d^{D-1}k\,\omega_k \qquad (9)$$

$$= \frac{\mu^{4-D}}{2(2\pi)^{D-1}}\int_0^\infty dk \int d^{D-2}\Omega\;k^{D-2}\,\omega_k, \qquad (10)$$

where $d^{D-2}\Omega$ is the volume element of the (D − 2)-sphere, and μ is an arbitrary scale factor such that the equation has the right unit dimensions. Using the fact that the general solution of angular integrals can be expressed in terms of gamma functions, the solution to this integral is

$$\langle\rho\rangle = \frac{\mu^4}{2(4\pi)^{(D-1)/2}}\,\frac{\Gamma(-D/2)}{\Gamma(-1/2)}\left(\frac{m}{\mu}\right)^D. \qquad (11)$$

Performing the same operation for the pressure, one obtains

$$\langle p\rangle = \frac{\mu^{4-D}}{2(D-1)(2\pi)^{D-1}}\int d^{D-1}k\,\frac{k^2}{\omega_k} \qquad (12)$$

$$= \frac{\mu^4}{4(4\pi)^{(D-1)/2}}\,\frac{\Gamma(-D/2)}{\Gamma(1/2)}\left(\frac{m}{\mu}\right)^D. \qquad (13)$$

Since Γ(−1/2) = −2Γ(1/2), these expressions yield ⟨p⟩/⟨ρ⟩ = −1.
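The step from Eqs. (11) and (13) to the expansions below turns on the behaviour of Γ(−D/2) near D = 4; since the text relies on it, I spell out the standard expansion here for convenience (it is not original to the paper). Setting D = 4 − ε and using Γ(z + 1) = zΓ(z) repeatedly,

$$\Gamma\!\left(-\frac{D}{2}\right) = \Gamma\!\left(-2+\frac{\epsilon}{2}\right) = \frac{\Gamma(1+\epsilon/2)}{\left(-2+\frac{\epsilon}{2}\right)\left(-1+\frac{\epsilon}{2}\right)\frac{\epsilon}{2}} = \frac{1}{\epsilon} + \frac{3}{4} - \frac{\gamma}{2} + O(\epsilon),$$

while the remaining factors contribute $(m/\mu)^{D} = (m/\mu)^4\left[1 - \epsilon\ln(m/\mu) + O(\epsilon^2)\right]$ and $(4\pi)^{-(D-1)/2} = (4\pi)^{-3/2}\left[1 + \frac{\epsilon}{2}\ln 4\pi + O(\epsilon^2)\right]$. Multiplying these out reproduces the pole and the finite terms of Eq. (14).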
If one expands the gamma functions in the above expressions, and sets D = 4 − ε, then the regularized ⟨ρ⟩ and a one-loop renormalized expression ⟨ρ̃_dim⟩ are

$$\langle\rho\rangle \approx -\frac{m^4}{64\pi^2}\left( \frac{2}{\epsilon} + \frac{3}{2} - \gamma - \ln\left[\frac{m^2}{4\pi\mu^2}\right] \right) + \cdots \qquad (14)$$

$$\langle\tilde\rho_{\mathrm{dim}}\rangle = \frac{m^4}{64\pi^2}\ln\left(\frac{m^2}{\mu^2}\right), \qquad (15)$$

where γ ≈ 0.577 is the Euler–Mascheroni constant. This expression actually agrees (up to constants of O(1)) with the leading order logarithmic term predicted using the momentum cutoff approach in Eq. (5), after subtraction of the μ⁴ term. Martin (2012) notes that “it is well-known that the dimensional regularization scheme removes the power law terms” (p. 13), so this is not a surprising result. Like in the case of Yang-Mills gauge models, dimensional regularization leaves the underlying symmetries of the model intact, and leads to a correct regularization that respects those symmetries. We see that, instead of a functional dependence on the fourth power of the regulator, the dimensionally regularized vacuum energy density of a field depends on the fourth power of the mass of that field. This means that massless fields (photons, gluons) do not contribute to the dimensionally regularized or renormalized vacuum energy, at least to leading order.

[Footnote: This is only a disadvantage if one expects a regulator to be physically significant. If regularization is treated simply as a procedure for taming divergences, then the regulators need not have a physical significance. Further, if the analogy between lattice regularization in condensed matter physics and particle physics is misleading, then the physical interpretation that lattice regularization provides may actually lead to an unjustified physical interpretation (cf. D. Fraser (2018) and D. Fraser and Koberinski (2016)).]

[Footnote: I use μ as an arbitrary scale factor here because it appears in the formal expression for ⟨ρ̃_dim⟩ in the same way that the (arbitrary) momentum cutoff appears in the lattice regularized expression. The fact that these scales have different meanings supports my argument that these terms differ significantly. The same term for the regulator is used simply to aid algebraic comparison.]

[Footnote: This is a first-order renormalized calculation. As Martin (2012, Sec. VI) highlights, this prediction is largely unchanged under a Gaussian approximation to an interaction term (i.e., to one loop). Since the expression remains the same, I refer to Eq. (15) as a one-loop renormalized term.]

It turns out that fermion fields and boson fields share this functional dependence, though each contains a numerical factor nᵢ to multiply ⟨ρ_ren⟩.
For the Higgs scalar, n_H = 1; for fermions, n_F = −4; for bosons, n_B = 3. Martin (2012, Sec. IX, Eq. (516)) determines the vacuum energy density coming from vacuum fluctuations (ignoring early universe phase transitions) to be

$$\langle\rho_{\mathrm{SM}}\rangle = \sum_i \langle\tilde\rho_{\mathrm{dim}}\rangle_i \approx -2\times 10^{8}\ \mathrm{GeV}^4, \qquad (16)$$

assuming a scale factor μ ≈ 3 × 10⁻²⁵ GeV, though the prediction is relatively insensitive to the exact value of μ. This therefore seems like an impressive renormalization and prediction of the vacuum energy from the standard model. Since modified minimal subtraction is a natural procedure for dimensional regularization, the renormalization method is also justified. However, this term is renormalized to one loop; radiative instability will spoil renormalization at higher orders, and thus naturalness fails here as it does for lattice regularization. In general, the contributions from next-to-leading order for ⟨ρ̃_dim⟩ will be large enough to spoil the renormalization performed at leading order. The functional form of Eq. (15) hides the high sensitivity to the regulator that appears at higher orders.

If we treat the standard model as an EFT, we may be justified in trusting predictions of some quantities only up to one loop. As an example, the Fermi theory of weak interactions is now known to be an effective approximation to the electroweak model, valid for energies far less than the mass of the W and Z bosons. The Fermi theory is well-behaved up to one loop, but is nonrenormalizable and badly divergent beyond this scale. The difference with the vacuum energy density is that ⟨ρ⟩ displays the same types of nonrenormalizable divergence at every order, while more severe divergences occur in the Fermi theory only at higher order than the one-loop terms.

[Footnote: This example is discussed in more detail in Sec. 4.]

The proper focus of our attention should therefore be the regularized term (Eq. (14)). As should be obvious by inspection, this value displays a sensitive dependence on the regulator ε, and differs markedly from the lattice regularized quantity (Eq. (5)). Thus ⟨ρ⟩ fails to satisfy either Minimal Requirement under orthodox approaches. One might argue that this failure is worse in the EFT framework, since EFTs are explicitly constructed to exclude contributions from certain energy scales. In the next section, I use more rigorous extensions of standard QFT to show that, even outside of the EFT framework, one should not expect QFTs to describe vacuum energy.
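Before leaving the orthodox schemes, a quick numerical sketch can make the magnitude in Eq. (16) concrete. This is my own illustration, not Martin's calculation: the particle masses are rounded standard values, the field list is truncated to the heaviest fields, and the multiplicity bookkeeping is simplified, so only the order of magnitude is meaningful.

```python
import numpy as np

def rho_dim(n, m, mu):
    # One-loop renormalized vacuum energy of a single field, Eq. (15),
    # weighted by its multiplicity/statistics factor n_i.
    return n * m**4 / (64 * np.pi**2) * np.log(m**2 / mu**2)

mu = 3e-25  # GeV, the scale factor assumed in the text
fields = [  # (name, n_i, mass in GeV); the heaviest fields dominate
    ("Higgs", 1, 125.0),
    ("Z", 3, 91.2),
    ("W+", 3, 80.4),
    ("W-", 3, 80.4),
    ("top", -4, 173.0),
    ("bottom", -4, 4.18),
    ("tau", -4, 1.78),
]
total = sum(rho_dim(n, m, mu) for _, n, m in fields)
print(f"rho_SM ~ {total:.1e} GeV^4")
# Dominated by the (negative) top quark term; comes out in the
# -1e8 to -1e9 GeV^4 range, the order of magnitude quoted in Eq. (16).
```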
3 Rigorous approaches to QFT

Outside of the mainstream work in QFT and particle physics, there has been persistent effort to place the QFT formalism on more secure mathematical footing. One major goal of this work is to be clear about the validity of assumptions and algebraic manipulations standardly employed in particle physics. Point-splitting procedures are used to track more carefully the ways in which quantum fields—as operator-valued distributions—are multiplied together at coincident points. The project of doing QFT on curved spacetimes likewise demands a re-examination of the assumptions that go into constructing QFTs in Minkowski spacetime. In this section I discuss the Epstein-Glaser point-splitting procedure as a candidate regularization scheme, and consider the modifications needed to put QFT on curved spacetimes, a project largely pursued by Hollands and Wald. The modifications necessary indicate that Minkowski spacetime is particularly special, and that significant alterations to QFT may be needed even for a semiclassical merging with general relativity. If one hopes for an extension of QFT beyond the EFT framework, approaches like these are a likely first step. We see in both approaches that the vacuum energy concept does not arise, indicating that ⟨ρ⟩ is not a meaningful concept in QFT as a whole.

3.1 Epstein-Glaser point splitting

Point splitting and other local approaches to regularization stem from Wilson’s (1969) early work on the operator product expansion, which is a formalism for defining products of operator-valued distributions
at coincident points. Since we are concerned here with short distance behaviour of fields, the work in this tradition uses the position space representation of quantum fields. In ordinary approaches to QFT, distributions are not carefully handled, and this leads to divergences in products of operators at the same point. Wilson originally proposed an ansatz that two operators A and B defined at coincident points should be described by

$$A(x)B(x) = \lim_{\chi\to 0} A(x+\chi/2)\,B(x-\chi/2) = \lim_{\chi\to 0} \sum_{i=1}^{n} c_i(\chi,x)\,C_i(x) + D(x,\chi), \qquad (17)$$

with Cᵢ(x), D(x,χ) local operators without divergences, and cᵢ(χ,x) coefficients that diverge in the limit χ → 0. The original operator product is then replaced with the regularized product

$$\left[ A(x+\chi/2)\,B(x-\chi/2) - \sum_{i=1}^{n} c_i(\chi,x)\,C_i(x) \right] \Big/\, c_n(\chi,x), \qquad (18)$$

which goes to zero as χ goes to zero.

Further work on the general properties of products of distributions—as mathematical physicists came to understand that quantum fields are operator-valued distributions—led to the Epstein-Glaser approach to regularizing and renormalizing QFTs. The conceptual move here involves switching focus from products of observables in neighbouring points to the products of fields at coincident points. Epstein and Glaser (1973) proved—through more careful analysis of the properties of the S-matrix—that a renormalized perturbation theory could still obey microcausality and unitarity. Though a more mathematically technical and indirect regularization method, this approach tames many UV divergences present in QFT, and therefore accomplishes renormalization in a similar way. Essentially, the n-point functions must be appropriately smeared with test functions f(x₁, . . . , xₙ) ≡ f(x). Infrared divergences are dealt with by carefully removing the test functions in observable quantities; one takes the adiabatic limit f(x) → 1. The construction proceeds recursively: one defines the n-point distributions Tₙ(x₁, . . . , xₙ) when the set {Tₘ | 1 ≤ m ≤ n − 1} are known. These n-point distributions are related to the perturbative construction of the S-matrix as follows:

$$S(f) = 1 + \sum_{n=1}^{\infty} \frac{1}{n!} \int d^4x_1 \cdots d^4x_n\; T_n(x_1,\ldots,x_n)\, f(x_1)\cdots f(x_n), \qquad (19)$$

where f is a complex-valued test function, and where the limit f → 1 is taken at the end. Suppose f₁ and f₂ have disjoint supports in time,

$$\mathrm{supp}\, f_1 \subset \{x \in M \mid x^0 \in (-\infty, r)\}, \qquad \mathrm{supp}\, f_2 \subset \{x \in M \mid x^0 \in (r, \infty)\}. \qquad (20)$$

Then the causality condition is the requirement that S(f₁ + f₂) = S(f₂)S(f₁). The Tₙ(x₁, . . . , xₙ)—operator-valued distributions—are constructed by induction. One simplifies the procedure by decomposing the Tₙ into (normal-ordered) free fields and complex number-valued distributions:

$$T_n(x_1,\ldots,x_n) = \sum_k\, :\!\prod_j \bar\psi(x_j)\; t_n^k(x_1,\ldots,x_n) \prod_l \psi(x_l)\!:\; :\!\prod_m A(x_m)\!:\,, \qquad (21)$$

where t_n^k is the momentum space numerical distribution. Now the problem switches from defining an appropriate splitting procedure for the Tₙ to the simpler problem of defining a splitting procedure for the t_n^k. The usual procedure—in standard versions of interacting QFT—involves splitting with a series of Θ functions for each xᵢ ∈ {xₙ}, but this is discontinuous at xᵢ = 0. If t_n^k is singular for some xᵢ = 0, then the product is not well defined, and UV divergences appear. Instead, one introduces the concept of a scaling dimension ω, signalling the degree of divergence for the distribution. This scaling dimension carries over to momentum space representations as well.

[Footnote: The treatment of point-splitting in this section follows the presentation in Scharf (1995, Ch. 3).]

For QED in momentum space, distributions properly split have a series of free parameters, being defined only up to a polynomial of rank ω. The “regularized” distributions therefore take the form

$$t(p) = t'(p) + \sum_{|a|=0}^{\omega} C_a\, p^a \qquad (22)$$

after splitting, where t′(p) is defined by the causality condition. The free parameters {C_a} can be fixed by an appropriate choice of regulator on the distribution, and this is why, for all practical purposes, the causality condition is a more mathematically rigorous way to introduce regulators into the theory.
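A simple one-dimensional analogue (my own illustration, in the spirit of Scharf's presentation rather than taken from it) shows why splitting leaves exactly this kind of finite ambiguity. The function 1/x defines a distribution only on test functions vanishing at the origin; a standard extension to all test functions is the principal value,

$$\left\langle \mathrm{PV}\frac{1}{x},\, f \right\rangle = \int_0^\infty \frac{f(x) - f(-x)}{x}\,dx,$$

but any other extension differs from it by a term supported at the singular point, Cδ(x), with C a free constant. The splitting procedure of the text works the same way: it yields a well-defined distribution, unique only up to local terms whose number is fixed by the degree of divergence ω, exactly as in Eq. (22) above.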
Though no UV divergent terms appear within this formalism, one still has to introduce arbitrary parameters to regularize the otherwise ill-defined distributions. Regarding the Minkowski vacuum energy, one can see from Eq. (21) that the distributions are expanded in terms of normal-ordered free fields, which implies a vanishing vacuum energy density, regardless of the particular choice of renormalization of the distributions. The normal ordering here may be thought of as removing ⟨ρ⟩ by fiat. In light of its irrelevance to flat space calculations in QFT, and its apparent sensitivity to high-energy physics, we should not be surprised that a rigorous construction of QFT would consciously exclude vacuum energy.

[Footnote: It is possible that ω will not be an integer for some distributions, though this does not occur in QED. When ω is not an integer, the polynomial will be rank ω′, the largest integer that is less than ω.]

3.2 QFT in curved spacetime

Instead of altering the conceptual foundations of general relativity to fit particle physics, some physicists have instead attempted to formulate QFTs on a classical curved spacetime background. This provides a different “first step” to unifying the two disciplines. One advantage to this approach is that it is on much more sound mathematical footing than standard treatments of QFT. The clarity that comes with mathematical rigour helps for understanding the nature of assumptions that are needed for defining products of quantum fields. In particular, careful attention should be paid to the splitting procedures used for defining time-ordered products of operators. The downfall of such rigour, however, is that realistic interactions cannot yet be formulated fully as models of the axioms. A mix of methodology is therefore the clearest way forward.

As discussed in the previous section, point-splitting procedures have been successfully employed in the construction of quantum electrodynamics, and more local modifications are currently used for generalizing QFT to generically curved spacetimes. Many people are working on defining QFTs in curved spacetimes, but the most demanding requirements of locality come from the work of Hollands and Wald (2001; 2002; 2008; 2010). A key procedure in their construction of local, covariant time-ordered products is a modified version of the Epstein-Glaser point splitting prescription.

The Epstein-Glaser approach to defining operator-valued distributions is more local than the standard momentum space cutoff approaches, in that it can be done in small neighbourhoods of coordinates in position space. Hollands and Wald note, however, that

“the Epstein-Glaser method is not local in a strong enough sense for our purposes, since we need to ensure that the renormalized time ordered products will be local, covariant fields. A key step in the Epstein-Glaser regularization procedure is the introduction of certain ‘cutoff functions’ of compact support in the ‘relative coordinates’ that equal 1 in a neighborhood of [coincident points. . . These] will not depend only on the metric in an arbitrary small neighborhood of p and, thus, will not depend locally and covariantly on the metric in the sense required by condition t1 [of locality and general covariance]. There does not appear to be any straightforward way of modifying the Epstein-Glaser regularization procedure so that the resulting extension [. . . ] will satisfy property t1. In particular, serious convergence difficulties arise if one attempts to shrink the support of the cutoff functions” (Hollands and Wald 2002, p. 322).

Since they aim to define quantum fields on generic globally hyperbolic spacetimes, Hollands and Wald aim to respect the restrictions imposed by the general covariance of general relativity, and therefore to define time-ordered products in a local and covariant manner.

4 Comparing regularized values of ⟨ρ⟩

Since vacuum energy is not fully renormalizable, the “old-fashioned” view of QFTs—as only well-defined if renormalizable—would lead one to believe that the vacuum expectation value of energy is an ill-defined concept in this framework. But with the interpretation of the standard model as an EFT, full renormalizability is no longer a strict requirement. Using a Euclidean lattice formulation of a particular model of QFT with a momentum regulator (cf. Section 2.1), nonrenormalizable terms in the Lagrangian are suppressed by powers of the cutoff. If the cutoff is taken to be a physically meaningful quantity, then there is an accompanying physical interpretation that, at energy scales far below the cutoff, nonrenormalizable terms will be heavily suppressed and therefore of little relevance. These arguments are based on the renormalization group analysis of irrelevant terms in the Lagrangian; marginal terms are the ones found to play a role at all energy scales, while relevant terms grow in relative importance at low energies.

Unfortunately for the standard EFT view, the vacuum energy is one of two seemingly physically significant relevant terms in the standard model Lagrangian. The EFT approach licences taking nonrenormalizable terms to be physically significant, but vacuum energy does not fit into the standard physical interpretation, since it is not suppressed by powers of the cutoff. By insisting that the vacuum energy is physically significant, this problem of nonrenormalizability is one part of the cosmological constant problem. In response, one can reject the assumption that the vacuum energy as predicted by the standard model is physically meaningful, or one can weaken the demand of renormalizability to understand what QFTs tell us about the value of the vacuum energy.

[Footnote: Technically, old demands of renormalizability were imposed on the S-matrix of a model of QFT, believed to encode all physically meaningful content of scattering amplitudes and other dynamics (Dyson 1949; ’t Hooft and Veltman 1972). The QFTs comprising the standard model of particle physics are all renormalizable, despite the fact that the vacuum energy for each is nonrenormalizable. If one demands renormalizability of a model in terms of its S-matrix, additional nonrenormalizable structure that can be extracted from the action should be thought of as ill-defined surplus structure, about which the theory remains silent.]

[Footnote: The other, of course, being the Higgs mass. In that case the physical significance is undeniable, since the Higgs boson has been discovered, and has mass about 125 GeV (CMS collaboration et al. 2019). The physical significance of vacuum energy is a bit less direct, and is subject to criticism. Aside from the criticism raised in this paper, see Bianchi and Rovelli (2010).]

I have adopted this latter approach in this paper. By dropping the requirement of renormalizability, we are left with either regularized, or one-loop renormalized quantities describing vacuum energy density. In the Introduction, I claimed that the two Minimal Requirements for a quantity to count as a candidate prediction are the following.
Minimal Requirements for Candidate Predictions:
In order for a quantity within a model of QFT to count as a candidate prediction of some corresponding physical quantity, it must be the case that: (1) the quantity is largely insensitive to the regularization procedure; and (2) it is largely insensitive to changes to the value of the regulator.
Since regularization procedures in QFT are somewhat arbitrary, and usually the regulator disappears from the final prediction of a physical quantity, one might expect that full independence of the regularization technique be required. This seems like too strict a condition, however, when one considers that regularization changes the form of a model of QFT. Different changes will lead to different regulators, and full renormalization is required to make these different approaches agree. Under the standard EFT view, one can think of the different regularization schemes as different ways of parameterizing our ignorance of high-energy physics. One can only trust the predictions of an EFT when these differences wash out, which happens when the Minimal Requirements are satisfied.

For the orthodox regularization schemes discussed in Section 2, a purely regularized vacuum energy density fails to meet either of the Minimal Requirements. The lattice regularized expression depends on the large-momentum cutoff μ as ⟨ρ⟩ ∼ μ⁴, while the dimensionally regularized term depends on the small deviation from four dimensions ε as 1/ε. Small changes to these regulators will lead to large changes in ⟨ρ⟩. Further, the expressions in Eqs. (5) and (14) are quite different, so the value of ⟨ρ⟩ is sensitive to the regularization procedure. The two vacua described under these procedures even differ in their equation of state.

If one rejects requirement (1), and takes the one-loop renormalized value of ⟨ρ_SM⟩ as a first order prediction, then one has a candidate prediction for vacuum energy density that can be used to motivate a cosmological constant problem. However, there are two issues here. First, renormalized quantities in QFTs aren’t taken as predictions of some physical quantity. After renormalization, the physical value is measured from experiment and input into the theory. In this sense, Eq. (16) would not count as a prediction of vacuum energy density, but would be tuned to give the measured value. The instability of ⟨ρ⟩ under radiative corrections makes this tuning impossible perturbatively; so the failure of naturalness prevents a consistent tuning. Second, this prediction is not straightforwardly compatible with EFT, which I have taken to justify the search for a nonrenormalizable candidate prediction of ⟨ρ⟩.

To see this, consider the case of the Fermi model of weak interactions. This is a model in which four fermions—a proton, neutron, electron, and muon—all interact at a point. This model is not fully renormalizable, but it is one-loop renormalizable. Physicists used this model to make predictions at the one-loop level, even though higher order terms were known to diverge. The success of the Fermi model can be explained by noting that it is an effective theory of the electroweak model. Nonrenormalizable terms that appear above the one-loop level are due to the absence in the Fermi model of the W boson to mediate the four-fermion interaction. These divergent terms end up being irrelevant under renormalization group flow, so the mass scale set by the W boson (M_W ≈ 80 GeV) lies far above the energies m_F at which the Fermi model was applied, m_F ≪ M_W. If vacuum energy behaved analogously, contributions from physics near the cutoff would likewise be irrelevant to the low-energy value of ⟨ρ_SM⟩. This is one way of expressing the requirement that vacuum energy be natural.
However, ⟨ρ⟩ is relevant under renormalization group flow, and should depend quartically on a cutoff supplied by a theory to which the standard model is effective. Given that the quantity ⟨ρ⟩ is so sensitive to the value of the regulator beyond one loop, I take this to disqualify it as a candidate prediction. From within the standard model, we have reason to believe that ⟨ρ⟩ depends sensitively on the details of high-energy physics, and therefore falls outside the scope of EFT. Even if one rejects the Minimal Requirements and takes ⟨ρ_SM⟩ as a candidate prediction, when factoring in all fundamental fields in the standard model, the value ⟨ρ_SM⟩ is approximately 55 orders of magnitude too large (comparing |⟨ρ_SM⟩| ∼ 10⁸ GeV⁴ from Eq. (16) with an observed value of order 10⁻⁴⁷ GeV⁴). While much smaller than the often quoted 120 orders of magnitude, this is still a remarkably bad prediction. Given its independence from all predictions within orthodox QFT, one should therefore be skeptical of such a prediction (cf. Koberinski and Smeenk (2021) for further discussion).

If standard EFT methods do not provide a candidate prediction of ⟨ρ⟩, should we expect more rigorous extensions of QFT to incorporate vacuum energy? Normal ordering procedures—including the Epstein-Glaser approach—define all vacuum expectation values to vanish, so in a sense these approaches “renormalize” the vacuum energy density to zero. Normal ordering is typically defined for free fields, and as we have seen for orthodox approaches, the presence of interactions can spoil renormalizability. The Epstein-Glaser point splitting approach treats regularization and renormalization in a very different way, and relates UV divergences to ill-defined products of distributions at singular points. By carefully splitting distributions, one avoids divergent integrals. However, there is still freedom in the definition of these distributions, and this amounts to renormalization in a similar manner: free parameters in the theory must be fixed by experiment. These numerical distributions are then used to define operator-valued distributions, which include normal-ordered free fields. So in this formalism, normal ordering is directly connected to meaningful time-ordered products (equivalently, n-point functions), and so Epstein-Glaser point splitting leads to a vanishing vacuum expectation value of all quantities, energy density included.

Finally, the Hollands and Wald approach to QFTs in curved spacetime significantly alters and extends the core concepts of perturbative QFT on Minkowski spacetime. Their approach to merging QFT and general relativity is to reformulate the principles of QFT to be compatible with the spacetime structure of generic globally hyperbolic solutions to the Einstein field equations. For QFTs on curved spacetimes, analogs to Lorentz covariance and global frequency splitting—general covariance and the microlocal spectrum condition—change the mathematical formalism significantly. Even more significantly, vacuum states are generically ill-defined, and so vacuum expectation values cannot be the primary building blocks of n-point functions. Hollands and Wald (2008) have suggested that the operator product expansion coefficients could be used to define a model of QFT. In highly symmetric cases, one may recover a vacuum state as a derived concept; it would then make sense to discuss vacuum energy densities, but this would be highly dependent on the particular spacetime chosen.
A Lorentz-invariant vacuum energy density is not a generic feature of local covariant QFT, and there is no guarantee that the Minkowski prediction in this radically different formalism would agree with one of the orthodox schemes. These extensions of the standard QFT framework support the conclusion that QFTs (considered as EFTs or otherwise) do not properly include vacuum energy density.

5 No prediction of ⟨ρ⟩ from the standard model

Does QFT in general—or the standard model in particular—predict a vacuum expectation value of energy density? According to the Minimal Requirements motivated by viewing the standard model as an EFT, it does not. We have seen that under the orthodox approaches to regularization, vacuum energy density varies significantly with the choice of regularization scheme—lattice regularization or dimensional regularization—and the “predicted” value of ⟨ρ⟩ is sensitively dependent on the value of the regulator. If we reject Requirement (1), then one might be in a position to pick the dimensionally regularized quantity as a candidate prediction. In order to do so, one must first acknowledge that ⟨ρ⟩ falls outside the domain of typical quantities in EFTs. One of the remarkable features of thinking of the standard model as an EFT is that “the details of physics below the cutoff have almost no empirical consequences for large-scale physics” (Wallace 2018, p. 10, emphasis original). By rejecting Requirement (1), we are admitting that, for some physically meaningful quantities in the EFT, the choice of regularization scheme—of how to parameterize ignorance of high-energy physics—makes a considerable difference to the predicted value of that quantity within QFT. Moreover, this would also amount to claiming that dimensional regularization is the correct way to do so in this instance. Instead, one should acknowledge that the sensitivity to regularization scheme is a sign that the quantity falls outside the scope of the EFT.

If one still insists on prioritizing dimensional regularization, then one must renormalize the vacuum energy density at one loop in order to satisfy Requirement (2). Though the value ⟨ρ_SM⟩ = −2 × 10⁸ GeV⁴ appears insensitive to the regulator (Requirement (2)), this is only because high sensitivities at higher orders are hidden by brute truncation. The quantity is not perturbatively renormalizable, and new sensitivities to the regulator ε will appear at each order. Further, there is no principled reason to pick any given order at which to renormalize. Since the divergences are of the same character at each order, and since the regulator makes the same order of contributions at each order, the only principled choice is to renormalize nonperturbatively. Since this cannot be done with the vacuum energy density, there is no reason to renormalize perturbatively at any particular order. If renormalization at, e.g., the one-loop level yielded a sensible prediction, then there might be a post-hoc justification. But since ⟨ρ_SM⟩ is still so far off from the observed value, this seems like an unjustified relaxation of the Minimal Requirements, and indicates that the quantity ⟨ρ_SM⟩ lacks physical significance.

I argue that both Minimal Requirements are needed for a quantity to count as a candidate prediction of some corresponding physical quantity under the EFT framework. This is a hallmark of all other predictions of QFTs, and is not satisfied in the case of vacuum energy density.
Since there is no direct evidence necessitating a physically significant vacuum energy density in QFTs, I do not think we have grounds for a candidate prediction.

[Footnote: Cf. Koberinski (2017) for an argument that the Casimir effect and Lamb shift do not license the inference to a constant vacuum expectation value of energy.]

Under the standard view, vacuum energy density should be treated as analogous to droplet formation in fluid mechanics: outside the scope of the EFT, and requiring the details of the high-energy theory in order to make sense. Just as we don’t expect fluid mechanics to provide the details of droplet formation, we should not expect the standard model to predict the value of vacuum energy density. To be clear, I have not argued that the concept of vacuum energy density is meaningless; it is simply outside the scope of EFT. An alternative approach is to extend and modify QFT to better fit with the principles of general relativity, as outlined in Sec. 3.2. In particular, the concept of the vacuum will likely require significant revision. The absence of ⟨ρ⟩ from local extensions of QFT mentioned in Sec. 3 suggests further that vacuum energy is not a proper part of the physical content of QFT. The cosmological constant problem should be understood as indicating some inconsistency in merging Minkowski QFTs with general relativity at the level of EFTs (Koberinski and Smeenk 2021). In particular, the presence of a large effective cosmological constant undermines the initial assumption that Minkowski spacetime is a good approximation to the more realistic curved spacetime. The work of Hollands and Wald highlights how much of the formalism may need to change if one wants to make QFTs conceptually compatible with the general covariance and locality of general relativity. Perhaps the resulting conceptual clarity will serve to clear up the concept of vacuum energy density as well.

The cosmological constant problem does require some sort of (dis)solution. By investigating the foundations of QFT, it is increasingly clear that at least part of the problem lies in accepting that the standard model provides a candidate prediction of ⟨ρ⟩.

Acknowledgements
I am grateful to Chris Smeenk, Robert Brandenberger, Doreen Fraser, and the UCI Philosophy of Physics Research group for helpful feedback on early drafts of this paper, as well as the comments from two anonymous reviewers. This work was supported by the Social Sciences and Humanities Research Council of Canada, and the John Templeton Foundation Grant 61048,
New Directions in Philosophy of Cosmology. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation.

References

’t Hooft, Gerard and Martinus Veltman (1972). “Regularization and renormalization of gauge fields”. In: Nuclear Physics B
44, pp. 189–213.
Aoyama, Tatsumi et al. (2012). “Tenth-order QED contribution to the electron g−2 and an improved value of the fine structure constant”. In: Physical Review Letters 109, 111807.
Bennett, G. W. et al. (2004). “Measurement of the negative muon anomalous magnetic moment to 0.7 ppm”. In: Physical Review Letters 92, 161802.
Bianchi, Eugenio and Carlo Rovelli (2010). “Why all these prejudices against a constant?” In: arXiv preprint arXiv:1002.3966.
CMS collaboration et al. (2019). A measurement of the Higgs boson mass in the diphoton decay channel. Tech. rep. CMS-PAS-HIG-19-004. CERN, Geneva. url: https://cds.cern.ch.
Dyson, Freeman J. (1949). “The S matrix in quantum electrodynamics”. In: Physical Review 75, pp. 1736–1755.
Epstein, Henri and Vladimir Glaser (1973). “The role of locality in perturbation theory”. In: Annales de l’IHP Physique théorique. Vol. 19. 3, pp. 211–295.
Fraser, Doreen (2018). “The development of renormalization group methods for particle physics: Formal analogies between classical statistical mechanics and quantum field theory”. In: PhilSci-Archive Preprint.
Fraser, Doreen and Adam Koberinski (2016). “The Higgs mechanism and superconductivity: A case study of formal analogies”. In: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 55, pp. 72–91.
Fraser, James Duncan (2017). “The Real Problem with Perturbative Quantum Field Theory”. In: The British Journal for the Philosophy of Science.
— (2018). “Renormalization and the formulation of scientific realism”. In: Philosophy of Science.
Gurung, L. et al. (2020). “Precision Microwave Spectroscopy of Positronium n = 2 Fine Structure”. In: Physical Review Letters 125 (7), p. 073002. doi: 10.1103/PhysRevLett.125.073002. url: https://link.aps.org/doi/10.1103/PhysRevLett.125.073002.
Hollands, Stefan and Robert M. Wald (2001). “Local Wick Polynomials and Time Ordered Products of Quantum Fields in Curved Spacetime”. In: Communications in Mathematical Physics 223, pp. 289–326.
— (2002). “Existence of Local Covariant Time Ordered Products of Quantum Fields in Curved Spacetime”. In: Communications in Mathematical Physics 231, pp. 309–345.
— (2008). “Quantum field theory in curved spacetime, the operator product expansion, and dark energy”. In: International Journal of Modern Physics D.
— (2010). “Axiomatic Quantum Field Theory in Curved Spacetime”. In: Communications in Mathematical Physics 293, pp. 85–125.
Koberinski, Adam (2017). “Problems with the cosmological constant problem”. In: Philosophy Beyond Spacetime. Ed. by Christian Wüthrich, Baptiste Le Bihan, and Nick Huggett. Oxford University Press. Forthcoming, http://philsci-archive.pitt.edu/14244/.
Koberinski, Adam and Chris Smeenk (2020). “Q.E.D., QED”. In: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 71, pp. 1–13. issn: 1355-2198. doi: https://doi.org/10.1016/j.shpsb.2020.03.003.
— (2021). “Effective field theory and the failure of naturalness in the cosmological constant problem”. In: under review in Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics.
Martin, Jerome (2012). “Everything you always wanted to know about the cosmological constant problem (but were afraid to ask)”. In: Comptes Rendus Physique 13, pp. 566–665.
Mattingly, David (2005). “Modern Tests of Lorentz Invariance”. In: Living Reviews in Relativity 8, 5.
Rivat, Sébastien (2019). “Renormalization scrutinized”. In: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 68, pp. 23–39. issn: 1355-2198. doi: https://doi.org/10.1016/j.shpsb.2019.04.006.
Rosaler, Joshua and Robert Harlander (2019). “Naturalness, Wilsonian renormalization, and ‘fundamental parameters’ in quantum field theory”. In: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 66, pp. 118–134.
Ruetsche, Laura (2020). “Perturbing realism”. In: Scientific Realism and the Quantum. Ed. by Steven French and Juha Saatsi, p. 293.
Saunders, Simon (2002). “Is the Zero-Point Energy Real?” In: Ontological Aspects of Quantum Field Theory, pp. 313–343.
Scharf, Günter (1995). Finite Quantum Electrodynamics: The Causal Approach. 2nd ed. Springer.
Schneider, Mike D. (2020). “What’s the problem with the cosmological constant?” In: Philosophy of Science 87.1, pp. 1–20.
Wallace, David (2011). “Taking particle physics seriously: A critique of the algebraic approach to quantum field theory”. In: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 42, pp. 116–125.
— (2018). “The quantum theory of fields”. In: forthcoming in The Routledge Companion to the Philosophy of Physics. Ed. by Eleanor Knox and Alistair Wilson. Routledge. url: http://philsci-archive.pitt.edu/15296/.
Weinberg, Steven (1989). “The cosmological constant problem”. In: Reviews of Modern Physics 61, pp. 1–23.
Williams, Porter (2017). “Scientific realism made effective”. In: The British Journal for the Philosophy of Science.
— (2018). “Renormalization Group Methods”. In: forthcoming in The Routledge Companion to the Philosophy of Physics. Ed. by Eleanor Knox and Alistair Wilson. Routledge.
Wilson, Kenneth G. (1969). “Non-Lagrangian models of current algebra”. In: Physical Review 179, pp. 1499–1512.