Complex Action, Prearrangement for Future and Higgs Broadening
arXiv [physics.gen-ph], Nov
Holger Bech Nielsen*) and Masao Ninomiya**)
Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, Copenhagen Ø, Denmark
Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan
PACS numbers: 12.90.+b, 14.80.Cp and 11.10.-z
We develop a rather general formalism for the Feynman path integral in the case where the action is allowed to be complex. The major point is that the effect of the imaginary part of the action is (mainly) to determine which solution to the equations of motion gets realized. We can therefore look at the model as a unification of the equations of motion and the "initial conditions". We have already earlier argued that some features matching well with cosmology come out of the model.
A Hamiltonian formalism is put forward, but it still has to have an extra factor in the probability of a certain measurement result involving the time after the measurement time.
A special effect to be discussed is a broadening of the width of the Higgs particle. We crudely reach a changed Breit-Wigner formula that is a normalized square root of the originally expected one.

§1. Introduction
We have already in a series of articles studied a model in which the initial state of the Universe is described by a probability density P in phase space, which can be - and is - assumed to depend on what happens along the associated solution at all times, in a formally time translational invariant manner. We shall here repeat and expand on the claim that allowing the action to be complex is rather to be considered as dropping an assumption than as making a new one. In fact we could look at the Feynman path integral:

∫ e^{(i/ℏ) S[path]} Dpath.    (1)

Then we notice that whether the action S[path] is real, as usually assumed, or complex, as in the present article, the integrand e^{(i/ℏ)S[path]} of the Feynman path integral is complex anyway. Let us then argue that, thinking of the Feynman path integral as the fundamental representation of quantum mechanics, it is the integrand e^{(i/ℏ)S[path]}, rather than S[path] itself (which is just its logarithm), that is the most fundamental. In this light it looks rather strange to impose the reality condition that S[path] should be real. If anything, one would have thought it

*) On leave of absence to CERN until 31 May, 2008.
**) Also working at Okayama Institute for Quantum Physics, Kyoyama-cho 1-9, Okayama-city 700-0015, Japan.
would be more natural to impose reality on the full integrand e^{(i/ℏ)S[path]}, an idea that of course would not work at all phenomenologically.
But the model in which there is no reality restriction on the integrand e^{(i/ℏ)S[path]} at all - and thus also no reality restriction on S[path] - could be quite natural, and it is, we could say, the goal of the present article to look for implications of such an in a sense simpler model than the usual "action-being-real" picture. That is to say, we shall imagine the action S[path] to be indeed complex,

S[path] = S_R[path] + i S_I[path].    (2)

The natural - but not strongly grounded - assumption would then be that both the real part S_R[path] and the imaginary part of the action can - for instance in the Standard Model - be written as four dimensional integrals,

S_R[path] = ∫ L_R(x) d⁴x,    S_I[path] = ∫ L_I(x) d⁴x,    (3)

where the complex Lagrangian density

L(x) = L_R(x) + i L_I(x)    (4)

was split up into the real part L_R and the imaginary part L_I, each of which is assumed to be of the same form as the usual Standard Model Lagrangian density. However, the coefficients of the various terms could be different for the real and imaginary parts. We could say that the fields - the gauge fields A^a_μ(x), the fermion fields ψ^(f)_α(x) and the Higgs field φ_HIGGS(x) - obey the same reality conditions as usual (in the Standard Model), so that the action is only made complex by letting the coefficients −1/(4g_a²), Z^(f), m_H², Z_HIGGS and λ in the Lagrangian density

L(x) = Σ_a (−1/(4g_a²)) F^a_{μν}(x) F^{aμν}(x) + Z^(f) ψ̄^(f) D̸ ψ^(f) + Z_HIGGS |D_μ φ_HIGGS(x)|² − m_H² |φ_HIGGS(x)|² − λ |φ_HIGGS(x)|⁴    (5)

be complex. For instance we imagine the Higgs mass square coefficient to split up into a real and an imaginary part,

m_H² = m_{HR}² + i m_{HI}².    (6)

In the following section 2 we shall argue that in the classical approximation, as usually extracted from the Feynman path integral, it is only the real part of the action S_R[path] that matters. In the following section 3 we then review that the role of the imaginary part S_I[path] is to give the probability density P ∝ e^{−(2/ℏ) S_I[path]} for a certain solution path being realized, and we shall explain how the imaginary part S_I takes over the role of the boundary conditions, so that we can indeed work without separately imposed initial conditions. §
2. Classical approximation only uses S_R[path]

Since the usual theory works well without any imaginary part S_I in the action, we must for good phenomenology have, in first approximation, that this imaginary part is quite hidden. Here we shall now show that as far as the classical approximation to the model is concerned, the effect of S_I[path] is indeed negligible and the equations of motion take the almost usual form

δS_R = 0,    (7)

except that it is the real part S_R, rather than the full action S = S_R + i S_I, which determines the classical equations of motion.
The argument for the relevance of only the real part S_R in the classical equations of motion is rather simple if one remembers how the classical equations of motion are derived from the path integral in the usual case with its purely real action. The argument really runs like this: when δS_R ≠ 0, the real part of the action S_R varies approximately linearly under variation of the path (in the space of all the paths) in a neighbourhood. This, however, means that the factor e^{(i/ℏ)S_R} in the Feynman path integrand oscillates in sign, or rather in phase, so that - unless the further factor e^{−S_I/ℏ} varies extremely fast - the contributions with the factor e^{(i/ℏ)S_R} deviating in sign (by just a minus, say) will roughly cancel out. So locally we have an essential cancelling out of neighbouring contributions in any neighbourhood where δS_R ≠ 0. We can say that this cancellation gets formally very perfect in the limit where the coefficient 1/ℏ in front of S_R in the exponent goes to infinity. This is actually the type of argument used in the usual case with only S_R present: we can even count on the linear term in the Taylor expansion

S_R[path + ∆path] = S_R[path] + ∆path · (δS_R/δpath) + (1/2)(∆path)² · (δ²S_R/δpath²) + · · ·    (8)

dominating the phase rotation for ℏ small, when we look at a region of the order of the phase-rotation "wave length".
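The cancellation argument above can be illustrated numerically. The following sketch is not from the paper; the toy action S_R(x) = x² (stationary at x = 0), the value of ℏ, and the two integration windows are all illustrative choices. It compares the size of the contribution to ∫ e^{(i/ℏ)S_R} near the stationary point with that of a window where δS_R ≠ 0 and the phases cancel:

```python
import numpy as np

# Toy real action S_R(x) = x^2, stationary at x = 0, with a small hbar (illustrative).
hbar = 0.01

def contribution(a, b, n=200_000):
    """|integral over [a, b] of exp(i * S_R(x) / hbar) dx| on a fine grid."""
    x = np.linspace(a, b, n)
    dx = x[1] - x[0]
    integrand = np.exp(1j * x**2 / hbar)
    return abs(np.sum(integrand) * dx)

near = contribution(-0.5, 0.5)   # neighbourhood of the stationary point: phases collaborate
far = contribution(2.0, 3.0)     # region where dS_R/dx != 0: rapid phase oscillation cancels

print(near, far)                  # near is larger than far by more than an order of magnitude
```

The "far" window is suppressed to order ℏ by the oscillation, exactly the mechanism that singles out δS_R = 0 paths as ℏ → 0.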
If indeed S_I also varies at the same rate, the argument strictly speaking breaks down even for ℏ small. We may, however, argue that in order to find a highly contributing path we shall search in a region not so far from a minimum of S_I, and thus S_I will vary relatively slowly - but we may also be interested in S_R near an extremum, so this does not really mean that we can count on the rates of variation being so different. We may better make the argument that in practice the variation of S_I is not so strong compared to the variation of the real part S_R by referring to the fact that in early articles on our model we have argued that the contributions to S_I from the present cosmological era are especially low compared to the more normal-sized ones from the very early big bang era. The point indeed was that in present times we are mainly concerned with massless particles - for which the eigentimes are always zero - or non-relativistic conserved particles, for which the eigentimes are approximately the coordinate time and in the end given just by the lifetime of the universe. Since the density of particles, and thereby the interaction, is also today low compared to early cosmological big bang times, the contributions today to the imaginary part S_I would mainly come from the passages of the particles from one interaction to the next, and thus - as we know for the real part, due to Lorentz invariance requirements - be proportional to the eigentimes:

S_I contribution ∝ τ_eigen.    (9)

Since these eigentimes, as we just said, are rather trivial - zero or constant - in the present era, we expect by far the most important variations of the imaginary action S_I[path] to come from variations of the path in the Big Bang times rather than in our times.
Thus, essentially, when we discuss variations of the path with respect to variable variations in our times, we expect δS_I to be small and the cancellation to occur unless δS_R|_{our time} = 0.
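As a tiny numerical illustration of eq. (9) and of why present-era eigentimes are "trivial" (the time interval and speeds below are arbitrary, not from the paper): the eigentime of a massless particle vanishes identically, while that of a non-relativistic particle is essentially the coordinate time itself, leaving almost nothing for δS_I to vary with:

```python
import math

def eigentime(T, beta):
    """Proper time accumulated over coordinate time T at constant speed beta = v/c."""
    return T * math.sqrt(1.0 - beta**2)

T = 1.0e10                          # a long coordinate-time interval (arbitrary units)
tau_photon = eigentime(T, 1.0)      # massless particle: no eigentime at all
tau_slow = eigentime(T, 1.0e-5)    # non-relativistic conserved particle

print(tau_photon, tau_slow / T)     # 0, and a ratio indistinguishable from 1
```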
So we believe we have good arguments that the classical equations of motion, using only the real part S_R, come out even though fundamentally the action is complex, S = S_R + i S_I.
As long as we look at regions in real path-space it is, however, clear that it is S_R that gives the sign oscillation and thus the cancellation effect wherever δS_R ≄ 0. This in turn means that we only obtain appreciable contributions to the Feynman path integral from the neighbourhoods of paths with the property

δS_R = 0.    (10)

This is thus a derivation of the classical equations of motion as an equation to be obeyed by those paths in the neighbourhood of which an appreciable contribution can arise. Only in the neighbourhoods of the solutions to the classical equations of motion δS_R = 0 do the different neighbouring contributions to the Feynman path integral act in a collaborative manner so that a big result appears.
This result, suggesting that it is mainly S_R that determines the classical equations of motion, is of course rather crucial for our whole idea, because it means that in the model - in spite of the presence of S_I - we can hope that it is only S_R that determines the equations of motion. §
3. Classical meaning of S_I

Even after we have decided that there are such sign-oscillation cancellations that all contributions to the Feynman path integral (assuming Feynman path integrals as the fundamental physics) not obeying the classical equations of motion δS_R[path] = 0 completely cancel out, there is still a huge set of classically allowed paths obeying these equations of motion. The paths in neighbourhoods - in some crude or principal sense of an order-ℏ expansion - around the classical solutions (to δS_R = 0) are not killed by the cancellations, and they still have the possibility of being important for the description by our model. Now each classical solution, say

clsol = some path    (11)

obeying

δS_R[clsol] = 0,    (12)

gives, like any other path, rise to an S_I value S_I[clsol].
Even without being as specific as we were in last year's Bled proceedings on this model, but just arguing from what everybody will accept about the Feynman path integral interpretation, we could say: clearly the contribution to the Feynman path integral from the neighbourhood of a specific classical solution must contain a factor

∫_{NEIGHBOURHOOD OF clsol ONLY} e^{(i/ℏ)S} Dpath ∝ e^{−(1/ℏ) S_I[clsol]}.    (13)

Since we all accept a loose statement like "the probability is given by numerically squaring the Feynman path integral (contribution)", we may accept as almost unavoidable - whatever the exact interpretation scheme assumed - that the probability for the classical solution clsol being the realized one must be proportional to

P[clsol] ∝ |e^{−(1/ℏ) S_I[clsol]}|² = e^{−(2/ℏ) S_I[clsol]}.
(14)

This probability density over the phase space of initial conditions, P[clsol], is exactly what we called P also in the earlier works on our model, where we sought to be more general by not talking about P[clsol] being e^{−(2/ℏ)S_I[clsol]} but just treating it as a general probability weight, the behavior of which could then be discussed separately.
Let us stress that if you do not say anything about the functional behavior of the probability, then the formalism with P in our earlier works is so general that it can hardly even be wrong. Of course, if you write it as P[clsol] = e^{−(2/ℏ)S_I[clsol]} and do not assume anything about S_I, it remains so general that it can hardly be wrong, because we have just defined

S_I[clsol] = −(ℏ/2) log(P[clsol]).    (15)
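A minimal numerical sketch of this selection mechanism, taking the weight P[clsol] ∝ e^{−(2/ℏ)S_I[clsol]}. The candidate S_I values and the value of ℏ are arbitrary illustrations; subtracting the minimum before exponentiating is only for numerical stability and does not change the normalized probabilities:

```python
import numpy as np

hbar = 0.05                                       # illustrative; the physical hbar is far smaller
S_I = np.array([1.00, 1.02, 1.10, 1.50, 3.00])    # imaginary actions of candidate classical solutions

# P[clsol] proportional to e^{-(2/hbar) S_I[clsol]}; shift by the minimum for stability.
w = np.exp(-2.0 * (S_I - S_I.min()) / hbar)
P = w / w.sum()

print(P)   # almost all the weight sits on the solution with the most negative S_I
```

Even tiny differences in S_I translate into huge probability ratios once divided by the small ℏ, which is the sense in which the model "almost deterministically" picks the minimal-S_I solution.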
However, if we begin to assume that, in analogy to the real part of the action S_R, the imaginary part is also an integral over time,

S_I[path] = ∫ L_I(t; path) dt,    (16)

of some Lagrangian L_I in a time translational invariant way, or has the even more specific form of a space-time integral, then we do make nontrivial assumptions about P = e^{−(2/ℏ)S_I}. Usually we would say that we already know from well-known (physical) experience, further formalized in the second law of thermodynamics, that S_I[clsol] is only allowed to depend on what goes on along the path clsol at the initial moment of time t = t_initial. This "initial time" is imagined to be the time of the Big Bang singularity - if such a singularity indeed existed. If there were no such initial time (as we suggested in one of the papers in the series on our imaginary action), then in the usual theory one might not really know what to do. Perhaps one can use the Hartle-Hawking no-boundary model, but that would effectively look much like a Big Bang start.
But our present article's motivating arguments are:
1) An imaginary action is almost a milder assumption than assuming it to be zero, S_I = 0.
2) To assume that the essential logarithm of the probability P, namely S_I, should depend only on what goes on at one very special moment of time t = t_initial sounds almost time non-translational invariant. (Here Hartle-Hawking no-boundary may escape elegantly, though.)

3.1. The classical picture in our model resumed
Let us briefly summarize and put in perspective our classical approximation for our imaginary action model:
1) We argued that the classical equations of motion are given by the real part of the action alone, δS_R = 0, so that the imaginary part S_I is not relevant for them at all; thus in first approximation it is not so serious classically whether you assume that S_I is there or not.
2) We argued that the main role of the imaginary part S_I[clsol] of the action is to give a probability distribution over the "phase space" of the set of classical solutions (it has a natural symplectic structure and, if restricted to a certain time t, is simply the phase space):

P[clsol] = "normalization" · e^{−(2/ℏ) S_I[clsol]}.    (17)

Since ℏ is small, this formula for the density presumably very strongly drives the "true" solution to almost the one with the minimal - in the sense of the most negative - S_I[clsol]. (But really huge amounts of classical solutions with bigger S_I could statistically take over.)
3) We argued that in the present era - long after the Big Bang - the effects of S_I are at least somewhat suppressed, because now we mainly have non-relativistic conserved particles or massless particles, and not much interaction compared to early big bang times.
The model thus implies prearrangement effects of S_I, because the classical solution realized will be one with a presumably numerically exceptionally large negative S_I, so as to make P ∝ e^{−(2/ℏ)S_I} large. Really we could say that it is as if the universe were governed by a leader seeking as his goal to make the imaginary part S_I minimal.

3.2. Is it possible that we did not discover these prearrangements?
One reason - and it is an important one - is that the processes in our era involve mainly conserved non-relativistic particles or totally massless ones (photons), so that the eigentimes which give rise to S_I-contributions become rather trivial. But if that were really all, then this "leader" of the development of the universe would make great efforts to either prevent or strongly favour the various relativistic particle accelerators. But one could wonder how we could have discovered whether a certain type of accelerator were disfavoured, because it would be very difficult to know how many of them should have been built if there were no S_I-effects. Such decisions as to which accelerators to build happen as a function of essentially a series of logical arguments - and thus presumably are given by the equations of motion δS_R = 0. But then there will be no clear sign that anything was disfavoured or favoured. It might be very interesting to look for any effect of "influence from the future" if one let the running or building of some relativistic particle accelerator depend on a card game or a quantum random number generator. If it were, say, disfavoured - by leading to a positive S_I-contribution - to run an accelerator of the type in question, then the cards would be prearranged so that the card pulled would mean that one should not run the accelerator. By the decision "don't run" being given by the cards statistically too many times, one might discover such an S_I-effect.
It could in principle also be discovered by us noticing surprisingly bad luck for accelerators of the disfavoured type. But this is not easy, because the unlucky accidents could go very far back in time: a society long in the past that would have had better chances, talents or interest for building relativistic high energy accelerators could have gone extinct. But it would be hard for us to evaluate which extinct societies in the past had the better potential for making high energy accelerators later on.
So it could be difficult to notice such S_I-effects even if they manage to keep a certain type of collision down, both in experimental apparatuses and in cosmic rays.
It is only if the bad luck for an accelerator were induced as late as was seemingly the case with the S.S.C. collider in Texas - which would have been larger than the L.H.C. but was stopped after one quarter of the tunnel had been built - that it becomes noticeable. This was a case of such remarkably bad luck that we may (almost) take it as evidence for some S_I-like effect, and that some of the particles to be produced - say Higgses - or destroyed - say baryon number - made up something unwanted when one seeks to minimize S_I.
3.3. Do we expect card game experiments to give results?
Before going to quantum mechanics, let us for a moment estimate how much is needed for a card game or quantum random number decision - on, say, the switching on of a relativistic accelerator - to be expected to influence, backwards in time, the a priori random number (the card pull or the quantum random number) generated. The imaginary action will, in both cases - accelerator switched on or not switched on - possibly get much bigger contributions from the future. These future contributions are from our point of view extremely difficult to calculate; alone e.g. the complicated psychological and political consequences of a certain run of the accelerator, on if and how much it will be switched on later, would be exceedingly difficult to estimate. So in practice we must suppose that after a certain card-game-determined switch on or off there will come a future with an in practice random S_I-contribution, S_I future, depending on the switch on or off in a random way. So unless for some reason the contribution from the switch on or switch off time is bigger than or comparable to the fluctuations with the switch on or off, ∆_{on/off} S_I future, i.e. unless

S_I|_{accelerator contribution} ≳ ∆_{on/off} S_I future,    (18)

we will not see any effects of S_I in such an experiment. Now a very crude first orientation consists in estimating that the space-time region over which the switch on or off can influence the future is the whole forward light cone starting from the accelerator decision site.
Even if the sensitivity of S_I to most of the consequences the on/off decision may have by accident in this light cone is appreciably lower than the sensitivity to the particles in the accelerator, there is a huge factor in space-time volume to compete against in order that (18) shall be fulfilled.
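A toy numerical check of how this volume factor enters the comparison (18). Assuming, purely for illustration, that S_I future is a sum of many independent random contributions, one per cell of the forward light cone, the fluctuation grows only like the square root of the number of contributions, i.e. of the space-time volume:

```python
import numpy as np

rng = np.random.default_rng(0)

def future_fluctuation(n_cells, n_trials=10_000):
    """Std. dev. of a toy S_I_future built as a sum of n_cells independent random pieces."""
    totals = rng.standard_normal((n_trials, n_cells)).sum(axis=1)
    return totals.std()

small = future_fluctuation(100)   # small light-cone volume (arbitrary units)
large = future_fluctuation(400)   # four times the space-time volume

print(large / small)              # close to sqrt(4) = 2, not 4
```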
This is a big factor even when we take into account that the light-cone contribution is random, so that it is the square

(∆_{on/off} S_I future)² ∝ Vol(light cone)    (19)

that is proportional to the forward light cone space-time volume, while the fluctuation itself,

∆_{on/off} S_I future ∝ √Vol(light cone),    (20)

goes rather like the square root.

3.4.
Hypothetical case of no future influence
If, however, we were thinking of the very unusual case where two different random number decisions had no difference in their future consequences at all, then of course we would have no fluctuations in S_I from the future, and thus ∆_{on/off} S_I future = 0. In such an unrealistic case, with all tracks of the decision being immediately totally hidden, there is no way in our model for the effect from the accelerator on/off time to be drowned in the future contribution fluctuations. §
4. Quantum effects
Really, the special case mentioned at the end of the last section - of a decision being quantum random, say, but being forever hidden, so that it cannot influence the future and thereby the future contribution S_I future = ∫_{now}^{∞} L_I dt - is the one you have in typical quantum mechanical experiments. In a typical quantum experiment, for instance, one starts by preparing a certain unstable particle and then later measures the energy of the decay products from the decay.
We could in this experiment look at the actual lifetime of the unstable particle, t_actual, as a quantum random number - a quantum random number decision of the actual lifetime of just the particle in question. But if one now measures the energy of the decay products - the conjugate variable to the actual time t_actual - it is impossible that the actual time t_actual shall ever be known. So here we have precisely a case of a decision which is kept absolutely secret. But that then means that the future cannot know anything about the actual lifetime t_actual, and S_I future can have no t_actual dependence. Thus in this case the fluctuation

∆_{t_actual} S_I future = 0    (21)

of S_I future due to the variation of t_actual must be zero. Thus, in this case of such a hidden decision, there is no way for the S_I contribution from the existence time of the unstable particle, S_I|_{from t_actual}, which is presumably proportional to t_actual,

S_I|_{from t_actual} = Γ_I t_actual,    (22)

to be dominated out by the future contribution S_I future. So if the coefficient here called Γ_I, giving the S_I-contribution S_I|_{from t_actual}, is truly in some sense large - it enters the probability weight divided by the small ℏ - then there should be strong effects of S_I in this case, or rather effects that cannot be excused as being just accidental.
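The following sketch (toy units with ℏ = 1; the values of Γ and Γ_I are illustrative assumptions, not predictions) shows how reweighting the usual exponential decay-time distribution by the probability factor e^{−2 Γ_I t_actual} from eq. (22) systematically shortens the realized lifetime - the shortening that, via the Heisenberg uncertainty relation, should show up as a broadened line shape:

```python
import numpy as np

# Toy units: hbar = 1. Usual decay rate Gamma; extra imaginary-action coefficient Gamma_I.
Gamma = 1.0
Gamma_I = 0.5          # coefficient of the S_I contribution Gamma_I * t_actual (illustrative)

t = np.linspace(0.0, 50.0, 500_001)
dt = t[1] - t[0]

p_usual = Gamma * np.exp(-Gamma * t)      # usual decay-time distribution
w = np.exp(-2.0 * Gamma_I * t)            # probability reweighting e^{-2 S_I} with hbar = 1
p_model = p_usual * w
p_model /= np.sum(p_model) * dt           # renormalize the reweighted distribution

mean_usual = np.sum(t * p_usual) * dt     # 1 / Gamma
mean_model = np.sum(t * p_model) * dt     # 1 / (Gamma + 2 * Gamma_I): shortened lifetime

print(mean_usual, mean_model)
```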
Really, the philosophy of our model, which we are driving at, is that the effects of S_I are indeed huge, but they come in by prearrangement, so that whatever happens seems to us the likely and natural consequence of what already happened earlier. Thus the fact that certain particles or certain happenings are indeed strongly prevented by such prearrangements is not noticed by us.
In the case of an actual decay time t_actual which - similarly to the slit passed in the double slit experiment discussed by Bohr and Einstein - has no correlation, neither with times prior to the experiment nor with times later than the experiment, there is nothing that can overwrite/dominate the effect out.
So we say that in such a never measured but quantum-random-number-chosen variable as t_actual, the S_I-effect should show up. But now of course there is a priori the difficulty that if precisely the actual lifetime t_actual is not measured, then how do we know whether it were systematically made shorter in our model than in the real action model? Well, since we do not measure it - if we did, we would make the effect be overshadowed by accidental effects from the future - we cannot plot its distribution and check that it is more strongly peaked towards zero than the theoretical decay rate calculation would say it should be. We can, however, use the Heisenberg uncertainty principle, and should in our model find that the Breit-Wigner distribution of the decay product energy (= invariant mass) has been broadened compared to a real action theory. Since we shall suggest that it is likely that especially the Higgs particle will show very big S_I-effects, it is especially the Higgs Breit-Wigner we suspect to be significantly broadened.

4.1. Quantum experiment formulation
The typical quantum experiment which we should seek to describe in our model is of the type where one prepares some state |i⟩ - in the just discussed case an unstable particle, a Higgs e.g. - and then measures an outcome |f⟩, which in the case we suggested would be the decay products - b b̄ jets, say - with a given energy, or better, invariant mass. When one has prepared a state |i⟩ it means that one is scientifically sure that one got just that for the subsystem of the universe considered. Thus whether reaching that state were very suppressed or favoured by the S_I-effects no longer matters, because we know we got it (|i⟩) already. We should therefore, so to speak, normalize the chance for having gotten |i⟩ to unity, even if this would not be what one would have theoretically calculated in our model. One should keep in mind that, since our model is in principle also a model for the realized solution or the initial state conditions, one could ideally calculate the probability that at the moment of the start of the experiment, say t_i, the Universe is indeed in the state |i⟩ (or that the subsystem of the Universe relevant for the experiment is in a state |i⟩). In practice, of course, such calculations are not possible - except perhaps, and even that is optimistic, for some cosmological questions such as the Hubble expansion or the energy density in the universe.

4.2. Practical quantum calculation, ignoring outside regions in time
In the typical quantum experiment - as already alluded to - we have the system first in a state |i⟩ at t = t_i, say, and then later at t = t_f observe it in |f⟩. Then one would, using the usual (meaning real action) Feynman path integral formulation, say that the time development transition amplitude from |i⟩ at t_i to |f⟩ at t_f is given as

⟨f|U|i⟩ = ∫ e^{(i/ℏ)S[path]} ⟨f|path(t_f)⟩ ⟨path(t_i)|i⟩ Dpath,    (23)

where ⟨path(t_i)|i⟩ is the wave function of the state |i⟩ expressed in terms of the field configuration value path(t_i) taken by the path at time t_i, and ⟨f|path(t_f)⟩ in the same way is the wave function of the state |f⟩ expressed by the value of the path at time t_f. The paths functionally integrated over in (23) are in fact only paths describing a thinkable time development in the time interval [t_i, t_f].
We can easily say that in our model we now insert our complex S[path] instead of the purely real one of the usual theory. But that is not in principle the full story in our model, for a couple of reasons:
If we constructed from (23) all the amplitudes obtained by inserting a complete set of |f⟩ states, say |f_j⟩, j = 1, 2, ···, with ⟨f_j|f_k⟩ = δ_jk, instead of |f⟩, then

Σ_j |⟨f_j|U|i⟩|² = { 1, in the usual theory; not 1, usually, in our theory }    (24)

- we would not in our model get unity such as one gets in the usual real action theory. This is of course one of the consequences of our development matrix U (essentially the S-matrix) not being unitary.
However, we have in our model taken a rather timeless perspective, and we especially take it as given from the outset that the world exists at all times t. So we cannot accept that the probability for the universe existing at a later time should be anything else than unity. So we must take the point of view that when we have seen that we truly got |i⟩, then the development matrix U (essentially the S-matrix) can only tell us about the relative probabilities for the various final states |f⟩ we might ask about, but the probabilities for a complete set must be normalized to unity. This argumentation at first suggests the usual expression |⟨f|U|i⟩|² to be normalized to

P(|f_j⟩ | |i⟩) = |⟨f_j|U|i⟩|² / ||U|i⟩||²,    (25)

so that we ensure

Σ_j P(|f_j⟩ | |i⟩) = 1.    (26)

However, this expression is not exactly the prediction of our model - although presumably a good approximation to it. The point is that in our model we even have influence from the future contribution to S_I. We already suggested that these contributions S_I future would typically vary strongly - but in most cases not in a way we have any means to know - and so we really expect that one of the possible measurement results |f_j⟩ will indeed be strongly favoured, by giving rise to the most negative S_I future|_{|f_j⟩}. Since we, however, do not know how to calculate which |f_j⟩, j = 1, 2, ···, gives the minimal S_I future|_{|f_j⟩}, we would in practice make the statistical model of putting the factor exp{−(2/ℏ) S_I future|_{|f_j⟩}} in the probability

P(|f_j⟩ | |i⟩)_{our model} = (|⟨f_j|U|i⟩|² / ||U|i⟩||²) · exp{−(2/ℏ) S_I future|_{|f_j⟩}}    (27)

equal to a constant 1. Only in the case where we would decide to use the result j of the measurement to e.g. decide whether or not to start some very high-S_I-producing accelerator - as the S.S.C. presumably would have been - would we expect that we should use (27) rather than simply (25). But already (25) is interesting and unusual, because it for instance contains the Higgs broadening effect, which we suggest one should look for at the L.H.C. and the Tevatron. We shall go into this in the later sections.
§
5. Quantum Hamiltonian formalism
Let us, however, first recall a bit of last year's Bled talk on this subject and give a crude idea about how one might write Feynman diagrams for the evaluation of our expression (25).
First, it is rather easy to see that the usual (i.e. with real action) way of deriving the Hamiltonian development in time carries over practically unchanged, just by saying that now the coefficients in the Lagrangian or Lagrangian density are to be considered complex rather than just real. The transition from the Feynman path integral to a wave function and Hamiltonian description is, however - whether the Lagrangian is real or not - connected with constructing a measure D in the space of field or variable values at a given time.
Of course the Hamiltonian H derived from the complex action, organized to obey, say,

(d/dt_f) U(t_f, t_i) = −(i/ℏ) H U(t_f, t_i),    (28)

will not be Hermitian. That is of course exactly what is connected with U not being unitary.
When talking about the wave function and Hamiltonian formulation, we presumably have the duty to bring up that, according to last year's proceedings, we take a slightly unusual point of view with respect to how we apply the Feynman path integral. Namely, usually one only uses the Feynman path integral as a mathematical technique for solving the Schrödinger equation. We, however, use in our model, as discussed last year, the Feynman path integral as the fundamental presentation of the model; Hamiltonian or other formulations should be derived from our slightly unusual definition of the theory in terms of the Feynman path integral(s).

5.1. Our "fundamental" interpretation
Our slightly modified interpretation of the Feynman path integral is based on the already stressed observation that the imaginary part S_I chooses the initial state conditions, or the solution to the equations of motion actually to be realized. This, then, means that normal boundary conditions become essentially unimportant, and that it is thus most elegant to sum over all possible boundary conditions, so that the imaginary part S_I, so to speak, can be totally free to effectively choose the boundary conditions it would like. Even if one puts in some boundary conditions by hand, there only has to be a quite moderate wave function overlap with the initial condition which "S_I prefers", and that will be the one given the dominant weight, even if the moderate overlap is quite small. The S_I in fact goes into the exponent with the big number 1/ℏ as a factor, and might easily blow a small overlap up to a big part of the Feynman path integral.
We therefore proposed as our starting point in last year's proceedings that the probability for the path at some time t passing through a certain range I of variables, so that

path(t) ∈ I,    (29)

is

P(path(t) ∈ I) = [ Σ_BOUNDARIES |∫ e^{(i/ℏ)S[path]} χ[path] Dpath|² ] / [ Σ_BOUNDARIES |∫ e^{(i/ℏ)S[path]} Dpath|² ],    (30)

where the projection functional is

χ[path] = { 1 for path(t) ∈ I;  0 for path(t) ∉ I }.    (31)

Here of course path(t) stands for the set of values of the fields (or the variables q_k in the case of a general analytical mechanical system) at the time t on the path. The "BOUNDARIES" summed over stand for the boundaries at t → −∞ and t → +∞, or whatever the boundaries of time may be.
The special point of our model is that the BOUNDARIES are in first approximation not relevant, because S_I takes over. The details of how to sum over them are thus also not important.
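A toy discretization of (30) can make the mechanism concrete. Everything below is illustrative: one two-valued variable at four times, arbitrary real-action coefficients, ℏ = 1, and an S_I that simply penalizes being in state 1 at the intermediate time t₁. Summing over all boundary values, the S_I factor alone decides which alternative at t₁ carries essentially all the probability - no boundary condition is imposed:

```python
import cmath
import itertools

# Toy system: one two-valued degree of freedom at four times; hbar = 1 (all values illustrative).
def S_R(path):
    """An arbitrary real action, just to give the paths nontrivial phases."""
    s0, s1, s2, s3 = path
    return 0.7 * s0 + 1.1 * s1 + 1.3 * s2 + 0.4 * s3

def S_I(path):
    """Imaginary action penalizing state 1 at the intermediate time t1."""
    return 5.0 if path[1] == 1 else 0.0

def prob_middle_state(k):
    """Eq. (30): P(path(t1) = k), summing over all boundary values (s0, s3)."""
    num = den = 0.0
    for s0, s3 in itertools.product((0, 1), repeat=2):
        paths = [(s0, s1, s2, s3) for s1, s2 in itertools.product((0, 1), repeat=2)]
        amp_all = sum(cmath.exp(1j * S_R(p) - S_I(p)) for p in paths)
        amp_chi = sum(cmath.exp(1j * S_R(p) - S_I(p)) for p in paths if p[1] == k)
        num += abs(amp_chi)**2
        den += abs(amp_all)**2
    return num / den

print(prob_middle_state(0), prob_middle_state(1))   # nearly 1 and nearly 0
```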
A part of last year's formalism was to write the whole functional integral used in (30) as an inner product of one factor $|A(t)\rangle$ from the past of some time $t$ and one factor $|B(t)\rangle$ from the future of time $t$:

$$ \langle B(t)|A(t)\rangle = \int e^{\frac{i}{\hbar}S[path]}\,\mathcal{D}path. \qquad (32) $$

We have then defined the two Hilbert-space vectors (describing the whole Universe) by means of path integrals over paths running respectively from the beginning of times (say $t\to-\infty$) up to the considered time $t$,

$$ \langle q|A(t)\rangle = \int_{\substack{path \text{ on } [-\infty,\,t] \\ \text{ending with } path(t)=q}} e^{\frac{i}{\hbar}S_{-\infty,t}[path]}\,\mathcal{D}path, \qquad (33) $$

and over paths from $t$ to the "end of times" (say $t\to+\infty$),

$$ \langle B(t)|q\rangle = \int_{\substack{path \text{ on } [t,\,+\infty] \\ \text{beginning with } path(t)=q}} e^{\frac{i}{\hbar}S_{t,+\infty}[path]}\,\mathcal{D}path. \qquad (34) $$

Here of course

$$ S_{-\infty,t}[path] = \int_{-\infty}^{t} L(path(\tilde t))\,d\tilde t \qquad (35) $$

and

$$ S_{t,+\infty}[path] = \int_{t}^{+\infty} L(path(\tilde t))\,d\tilde t. \qquad (36) $$

In order that these Hilbert-space vectors be well defined, one would usually have to specify the boundary conditions at the beginning and end of times, $-\infty$ and $+\infty$. But because of the imaginary part $S_I$ assumed in the present work, it will be extremely difficult to change the results for $|A(t)\rangle$ and $|B(t)\rangle$ by more than overall factors by modifying the boundary conditions at $-\infty$ and $+\infty$ respectively. In this sense we can say that the Hilbert vectors $|A(t)\rangle$ and $\langle B(t)|$ are approximately defined without specifying the boundary conditions. Remember, it was the main idea that $S_I$ takes over the role of the boundary conditions, i.e. $S_I$ rather than the boundary conditions chooses the initial state conditions. With such a philosophy of $S_I$ fixing the initial state conditions, we might be tempted to interpret $|A(t)\rangle$ as the wave function of the whole Universe at time $t$, derived from a calculation using the initial state conditions given somehow by $S_I$. More interesting than the in-practice inaccessible wave function $|A(t)\rangle$ for the whole Universe would be a wave function for a part of the Universe, say a few particles in the laboratory. We might then imagine that, when we have prepared a state $|\psi(t)\rangle$ at time $t$ for some such subsystem of the Universe, it should correspond to the state vector $|A(t)\rangle$ factorizing like

$$ |A(t)\rangle = |\psi(t)\rangle \otimes |rest\,A(t)\rangle. \qquad (37) $$

However, this will in general not be quite true.
Rather, we must usually admit that whether we get a well-defined state $|\psi(t)\rangle$ for the particles in the laboratory will also come to depend on $|B(t)\rangle$ and not only on $|A(t)\rangle$. It is true that, in order that our model shall not be immediately killed, the $S_I$-dependence in some era prior to our own (presumably the Big Bang times) must have been much more significant in choosing the right classical solution (and thereby also, approximately, the to-be-realized quantum initial state) than that of the present and future eras. Thus in this approximation $|A(t)\rangle$ represents the development, from the initial state selected by the (supposedly dominant) Big Bang time $S_I$-contributions, until time $t$. But although this first approximation gives that $|A(t)\rangle$ should represent the whole development, there are at least some observations that must depend strongly on $|B(t)\rangle$ also. These are the random results which, according to the usual quantum-mechanical "measurement theory", come out only statistically predicted. If the $|A(t)\rangle$ state develops into a state with, say, equal probability for two eigenvalues of some dynamical variable, then $|A(t)\rangle$ cannot tell us which of the two values is realized. It can, however, in our formalism still depend on $|B(t)\rangle$. §
6. Our interpretation assumption(s)
In order to see how $|B(t)\rangle$ comes in, we recall from last year our interpretation assumptions, quantum mechanically. Let us express the interpretation of our model by giving the expression for the probability for a set of dynamical variables to have, at a certain time $t$, values inside a certain range $I$ (a certain interval $I$). The answer to such a question is what one would usually identify with the expectation value of the projection operator $\mathbf{P}$ projecting on the space spanned by the eigenspaces (in the Hilbert space) corresponding to the eigenvalues in the range $I$. In the usual theory the probability for finding the state $|\psi(t)\rangle$ to give the dynamical variables in $I$ would be

$$ P(I) = \frac{\langle\psi(t)|\mathbf{P}|\psi(t)\rangle}{\langle\psi(t)|\psi(t)\rangle}, \qquad (38) $$

where the denominator $\langle\psi(t)|\psi(t)\rangle$ is not needed if $|\psi(t)\rangle$ is already normalized. But now suppose we also knew about some measurements being done later than the time $t$. Let us for simplicity imagine that one, for some simple system (a particle), managed to measure a complete set of variables for it. Then one would know the quantum state in which this particle ended up; say we call it $|\phi_{END}\rangle$. Then we would be tempted to say that now, with this end-up knowledge, the probability for the particle having its dynamical variables in the range $I$ at time $t$ would be

$$ P(I) = \frac{|\langle\phi_{END}(t)|\mathbf{P}|\psi(t)\rangle|^2}{\langle\phi_{END}(t)|\phi_{END}(t)\rangle\,\langle\psi(t)|\psi(t)\rangle}, \qquad (39) $$

where $|\phi_{END}(t)\rangle$ means the end state developed back (in time) to time $t$. But the question for which we here wrote a suggestive answer was presumably not a good question, because one would usually require that, if we ask whether the variables are in the interval $I$, one should measure whether they are there or not. Such a measurement will, however, typically interfere with the particle, so that to later extrapolate the end state (if one could at all find it) back to time $t$, i.e. to
find $|\phi_{END}(t)\rangle$, sounds impossible. You might of course ignore the requirement of really measuring whether the system (particle) at time $t$ is in the interval $I$, and just say that (39) could be true anyway; but then it is not of much value to know $P(I)$ from (39) if it is indeed untestable in the situation. You might though ask if expression (39) could at least be taken to be true in the cases where it can be tested. There are some obvious consistency checks connecting it to measurable questions: you could at least sum over a complete set of $|\phi_{END}\rangle$ states and check that one gets the measurable (38) back. The from-the-measurement-point-of-view not so meaningful formula (39) has, we could claim, in its Feynman-path-integral form a little more beauty than the more meaningful (38), because in (39) we can say that we stick in the projection operator $\mathbf{P}$ just at the moment of time $t$, but otherwise use the full Feynman path integral from the starting to the final time:

$$ P(I)\Big|_{\text{from (39)}} = \frac{\left|\langle\phi_{END}|\int e^{\frac{i}{\hbar}S_{t_s,t_f}[path]}\,\mathbf{P}\big|_{\text{insert at }t}\,\mathcal{D}path\,|\psi\rangle\right|^2}{\left|\langle\phi_{END}|\int e^{\frac{i}{\hbar}S_{t_s,t_f}[path]}\,\mathcal{D}path\,|\psi\rangle\right|^2}. \qquad (40) $$

Here the paths are meant to be defined on the time interval from $t_s$, where one has the starting state $|\psi\rangle = |\psi(t_s)\rangle$, to the end time $t_f$, at which one sees the final state $|\phi_{END}\rangle$ for the system. One then uses in formula (40) the Feynman path integral to solve the Schrödinger equation, developing $|\psi(t_s)\rangle$ forward and $|\phi_{END}\rangle$ backward in time. The reason that we discuss such difficult-to-associate-with-experiment formulas as (39) and the equivalent (40) is that it is this type of expression we postulated to be the
starting interpretation of our model. In fact we postulated (40), but without putting in any boundary conditions $|\psi(t_s)\rangle$ and $|\phi_{END}\rangle$, and letting $t_s\to-\infty$ and $t_f\to+\infty$. It is of course natural in our model to avoid putting in boundary conditions, since, as we have said repeatedly, the imaginary action does the job instead. Without these boundary conditions specified by $|\psi\rangle$ and $|\phi_{END}\rangle$, or with the boundary conditions summed over (which will make only little difference once we have $S_I$), the formulas come to look even more elegant: our model is postulated to predict for the probability for the variables at time $t$ passing the range $I$

$$ P(I) = \frac{\sum_{\mathrm{BOUNDARIES}}\left|\int e^{\frac{i}{\hbar}S[path]}\,\mathbf{P}\big|_{\text{at }t}\,\mathcal{D}path\right|^2}{\sum_{\mathrm{BOUNDARIES}}\left|\int e^{\frac{i}{\hbar}S[path]}\,\mathcal{D}path\right|^2}. \qquad (41) $$

As just said, the summing over the boundary states BOUNDARIES is expected to be of only very little significance, insofar as $S_I$ should make some boundaries dominate so much that, as soon as a bit of the dominant one is present in a random boundary, it takes over. The expression (41) is practically the only sensible proposal for interpreting a model in which the Feynman path integral is postulated to be the fundamental physics. If we for instance think of $I$ as a range of dynamical variables of the types used in describing the paths, then what else could we do to find the contribution (to the probability or to whatever) than chopping out those paths which at time $t$ have $path(t)\in I$? But such a selection of the paths going at time $t$ through the interval $I$ corresponds of course exactly to inserting at time $t$ the projection operator $\mathbf{P}$ corresponding to a subset of the variables used to describe the paths.
In quantum mechanics one always has to numerically square the "amplitude", which is what one gets at first from the Feynman path integral, so there is really not much possibility for an interpretation other than ours, once one has settled on extracting the interpretation out of a Feynman path integral with paths describing thinkable developments in configuration (say $q$) space through all times. Once having settled on such an interpretation for the configuration-space variables (supposed here to be used in the Feynman path description) by formula (41), and having in mind that at least crudely $|A(t)\rangle$ is the wave function of the Universe, it is hard to see that we could make any other transition to the postulate for the probability for an interval $I$ involving also conjugate momenta than simply to put in the projection operator $\mathbf{P}$ anyway. The formula for the probability of passage of the range $I$, formula (41), for which we have now argued that it is the only sensible and natural one to get from a Feynman path integral using all times, is in terms of our $|A(t)\rangle$ and $|B(t)\rangle$ written as

$$ P(I) = \frac{\sum_{\mathrm{BOUNDARIES}} |\langle B(t)|\mathbf{P}|A(t)\rangle|^2}{\sum_{\mathrm{BOUNDARIES}} |\langle B(t)|A(t)\rangle|^2}. \qquad (42) $$

§
7. How to derive quantum mechanics under normal conditions
This formula (42), although nice from the aesthetics of our Feynman-path-based model, is terribly complicated if you would use it straight away. In order that our model should have a chance to be phenomenologically viable, it is absolutely necessary that we can suggest a good approximation (scheme) in which it leads to usual quantum mechanics with its measurement "theory", with the usual only-statistical predictions. It is easily seen from (42) that what we really need in order to obtain the usual (and measurement-wise meaningful) expression (38) is the approximation

$$ |B(t)\rangle\langle B(t)| \propto \mathbf{1}, \qquad (43) $$

where $\mathbf{1}$ is the unit operator in the Hilbert space.

7.1. The argument for the usual statistics in quantum mechanics
This approximation (43) is, however, not so difficult to give a good argument for, from the following assumptions, which are quite expected to be true in practice in our model:

1) Although the $S_I$-variations that give rise to the selection of the to-be-realized solutions to the classical equations of motion are supposed to be much smaller in future times (i.e. later than $t$) than on the past side of $t$, they are the only ones that can take over the boundary effects on the future side and thus determine $|B(t)\rangle$. However, especially since the future $S_I$ terms are relatively weak, it may be necessary to look to an enormously far future to find the contributions at all. Thus one has to integrate the equations of motion $\delta S_R = 0$ over enormously long times to get back from the "future" to time $t$ with the knowledge of the state $|B(t)\rangle$, which is the one favoured by the $S_I$-contributions of the future.

2) Next we assume that the equations of motion in this future era are effectively sufficiently ergodic that, over the huge time spans over which they are to be integrated, the point at time $t$ in phase space corresponding to the future $S_I$-contributions becomes approximately randomly distributed over the in-practice useful phase space.

From these assumptions we then want to say that, in the classical approximation, $|B(t)\rangle$ will be a wave packet around any point in phase space with a phase-space-constant probability density. That is how a snapshot of an ergodically developing model looks at a random time after or before the one at which its state was fixed. When we take the weighted probabilities of all the possible values of $|B(t)\rangle\langle B(t)|$, we conclude from this ergodicity argument that the density matrix to insert in place of $|B(t)\rangle\langle B(t)|$ is indeed proportional to the unit operator; i.e. we do get from the "ergodicity" the approximation (43). If we insert (43) into our postulated formula (42), we indeed obtain

$$ P(I) = \frac{\langle A(t)|\mathbf{P}|A(t)\rangle}{\langle A(t)|A(t)\rangle}, \qquad (44) $$
which is (38) but with $|A(t)\rangle$ inserted for the initial wave function. Hereby we could claim to have derived, from assumptions or approximations, the usual quantum-mechanical probability interpretation. That we can get this correspondence is of course crucial for the viability of our model. Let us remark that, since $|B(t)\rangle\langle B(t)|\propto\mathbf{1}$ is only an approximation, the probability $P(I)$ for the interval $I$ being passed at time $t$ depends in principle, via $|B(t)\rangle$, on the future potential events to be avoided or favoured. The argument that $|B(t)\rangle\langle B(t)|$ is effectively proportional to unity by "ergodicity" is the same as the classical one: even if there are some adjustments for the future, they look to us as random except in very special cases. §
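The replacement (43), and the resulting reduction of (42) to (44), can be illustrated numerically. In the sketch below (an invented toy check, not from the paper: the dimension, states, and projection are all made up), $|B\rangle$ is drawn as a Haar-random state standing in for the "ergodically randomized" future state; averaging the numerator and denominator of (42) over many draws reproduces the ordinary expectation-value formula (44).

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4                                        # toy Hilbert-space dimension

# Fixed (unnormalized) state |A> and a projection P onto the first 2 basis states.
A = rng.normal(size=d) + 1j * rng.normal(size=d)
P = np.diag([1.0, 1.0, 0.0, 0.0])

# Haar-random unit vectors: normalized complex Gaussians, one per row.
N = 200_000
B = rng.normal(size=(N, d)) + 1j * rng.normal(size=(N, d))
B /= np.linalg.norm(B, axis=1, keepdims=True)

num = np.mean(np.abs(B.conj() @ (P @ A)) ** 2)   # average of |<B|P|A>|^2
den = np.mean(np.abs(B.conj() @ A) ** 2)         # average of |<B|A>|^2

p_ergodic = num / den                                 # eq. (42) with randomized B
p_usual = (np.vdot(A, P @ A) / np.vdot(A, A)).real    # eq. (44)
```

The agreement rests on the Haar-average identity $E[|\langle B|x\rangle|^2] = \|x\|^2/d$, which is exactly the statement $E[|B\rangle\langle B|] = \mathbf{1}/d$ of (43).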
8. Returning to Hamiltonian development, now of $|A(t)\rangle$

It is obvious that one can use the non-Hermitean Hamiltonian $H$, derived formally from the complex Lagrangian $L = L_R + iL_I$ associated with our complex action $S = S_R + iS_I$, to give the time development of $|A(t)\rangle$:

$$ i\frac{d}{dt}|A(t)\rangle = H|A(t)\rangle. \qquad (45) $$

The analogous development (Schrödinger) equation for $|B(t)\rangle$ deviates only by a sign of the time and the Hermitean conjugation in going from ket to bra,

$$ i\frac{d}{dt}|B(t)\rangle = H^{\dagger}|B(t)\rangle, \qquad (46) $$

so that

$$ i\frac{d}{dt}\langle B(t)| = -\langle B(t)|H, \qquad (47) $$

and we thus get

$$ \frac{d}{dt}\langle B(t)|A(t)\rangle = 0. \qquad (48) $$

8.1. S-matrix-like expressions
Realistic S-matrix scattering only goes on in a small part of the Universe, so that one should really imagine $|A(t)\rangle$ factorized into a Cartesian product like (37); but for simplicity let us (at first) leave this splitting out of our presentation. That means that we simply assume that, by some scientific argumentation, one has come to the conclusion that one knows the $|A(t)\rangle$ state vector at the initial time $t_i$ of the experiment to be

$$ |A(t_i)\rangle = |i\rangle. \qquad (49) $$

Then looking for the final state $|f\rangle$ at the somewhat later time $t_f$ may be represented by looking if the paths followed pass through a state corresponding to the projection operator

$$ \mathbf{P} = |f\rangle\langle f|. \qquad (50) $$

This is a special case of the $\mathbf{P}$ projecting on an interval $I$ of dynamical quantities discussed above, since typically some variables are measured to be inside small ranges, which we could call $I$. Inserting (50) for $\mathbf{P}$ into our fundamental postulate (42), we obtain

$$ P(|f\rangle) = \frac{\sum_{\mathrm{BOUNDARIES}} |\langle B(t_f)|f\rangle|^2\,|\langle f|A(t_f)\rangle|^2}{\sum_{\mathrm{BOUNDARIES}} |\langle B(t_f)|A(t_f)\rangle|^2}. \qquad (51) $$

By insertion of the time development of (49),

$$ |A(t_f)\rangle = U|A(t_i)\rangle = U|i\rangle, \qquad (52) $$

where $U$ is the time development operator from $t_i$ to $t_f$, we get further (ignoring the unimportant sums over boundaries)

$$ P(|f\rangle) = \frac{|\langle B(t_f)|f\rangle|^2\,|\langle f|U|i\rangle|^2}{|\langle B(t_f)|U|i\rangle|^2}. \qquad (53) $$

If we allow ourselves to insert here the statistical approximation (43), we reduce this to

$$ P(|f\rangle) = \frac{|\langle f|U|i\rangle|^2}{\|U|i\rangle\|^2}, \qquad (54) $$

using the normalization of $|f\rangle$ already assumed (otherwise $\mathbf{P}=|f\rangle\langle f|$ would not have been a properly normalized projection operator). Since we here already assumed $|A(t_i)\rangle=|i\rangle$, i.e. (49), this equation (54) is precisely the earlier (25). So our postulate (42) leads, under use of the approximation (43), to the quite sensible equation (25).

8.2. Talk about Feynman diagrams
Since our model contains the usual theory as the special case of zero imaginary part, it needs of course all the usual calculational tricks of the usual theory, such as Feynman diagrams, to evaluate the S-matrix $U$ giving the time development from $t_i$ to $t_f$. Since we argued above that the transition from the Feynman path integral to the Hamiltonian formalism, for the purposes of obtaining $U$ or the time development of $|A(t)\rangle$, can be performed just by working with complex coefficients in the Lagrangian, it is not difficult to see that we can also develop the Feynman diagrams just by inserting the complex couplings etc. One should, however, not forget that our formula for the transition probabilities (25) contains a nontrivial normalization denominator, put in to guarantee that the probabilities assigned to a complete set of states at time $t_f$ sum up to precisely unity. This normalization denominator would be trivial in the usual case of a unitary $U$, but in our model it is important to include it, since otherwise we would not have total probability 1 for everything that could happen at $t_f$ together.
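The role of this normalization denominator can be seen in a small sketch (a random non-unitary 3×3 $U$ and a normalized $|i\rangle$, all invented for the illustration): by construction of the denominator in (25)/(54), the probabilities over a complete orthonormal set of final states sum to exactly one, even though $U$ is not unitary.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3
U = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # non-unitary "development"
i_state = rng.normal(size=d) + 1j * rng.normal(size=d)
i_state /= np.linalg.norm(i_state)                          # normalized |i>

Ui = U @ i_state
# Eq. (54): P(|f_k>) = |<f_k|U|i>|^2 / ||U|i>||^2, with {|f_k>} the basis states.
probs = [abs(Ui[k]) ** 2 / np.linalg.norm(Ui) ** 2 for k in range(d)]

total = sum(probs)   # = 1 exactly, because of the denominator
```

Without the denominator, the raw numbers $|\langle f_k|U|i\rangle|^2$ would sum to $\|U|i\rangle\|^2 \neq 1$ for a non-unitary $U$.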
In the usual theory we have the optical theorem, ensuring that the imaginary part of the forward scattering amplitude, $\mathrm{Im}\,T$, is so adjusted as to remove, by interfering with the unscattered beam, just that number of scattered particles from the continuing unscattered beam as is given by the total cross section $\sigma_{tot}$. When we with our model get a non-unitary $U$, it will typically mean that this optical-theorem relation is not fulfilled. For instance we might fill into the Feynman diagram, e.g., a Higgs propagator with a complex mass square

$$ m_H^2 = m_{HR}^2 + i\,m_{HI}^2, \qquad (55) $$

so as to get

$$ \mathrm{prop} = \frac{i}{p^2 - m_{HR}^2 - i\,m_{HI}^2} \qquad (56) $$

for the propagator. That will typically lead to a violation of the optical theorem. Now, the ideal momentum eigenstates usually discussed with S-matrices are an idealization, and it would be a bit more realistic to consider a beam of particles coming in with a wave-packet state of a finite area $A$, measured perpendicularly to the beam direction. Suppose that summing up the different possible scatterings at first gives a formal cross section $\sigma_{tot\,U}$, while the imaginary part of the forward elastic scattering amplitude would correspond, via the optical theorem, to $\sigma_{opt\,U}$. Then the probability for no scattering, if we did not normalize with the denominator, would be

$$ P_{no\,sc\,U} = \frac{A - \sigma_{opt\,U}}{A}, \qquad (57) $$

while the formal total scattering probability would be

$$ P_{sc\,U} = \frac{\sigma_{tot\,U}}{A}. \qquad (58) $$

These two pre-normalization probabilities would not add to unity but to

$$ P_{sc\,U} + P_{no\,sc\,U} = \frac{\sigma_{tot\,U} - \sigma_{opt\,U}}{A} + 1. \qquad (59) $$

That is to say, our prediction for scattering would be

$$ P_{sc} = \frac{P_{sc\,U}}{1 + \frac{\sigma_{tot\,U} - \sigma_{opt\,U}}{A}} = \frac{\sigma_{tot\,U}}{A + \sigma_{tot\,U} - \sigma_{opt\,U}} \simeq \frac{\sigma_{tot\,U}}{A} \quad (\text{for } A \gg \sigma_{tot\,U},\,\sigma_{opt\,U}), \qquad (60) $$

while the probability for no scattering would be

$$ P_{no\,sc} = \frac{P_{no\,sc\,U}}{1 + \frac{\sigma_{tot\,U} - \sigma_{opt\,U}}{A}} $$
$$ = \frac{A - \sigma_{opt\,U}}{A + \sigma_{tot\,U} - \sigma_{opt\,U}} \simeq 1 - \frac{\sigma_{tot\,U}}{A} \quad (\text{for } A \gg \sigma_{tot\,U},\,\sigma_{opt\,U}). \qquad (61) $$

From this we see that we would, as we have put into the model, see consistency between the total number of scatterings and the number of particles removed from the beam; but if we begin to investigate Coulomb scattering interfering with the imaginary part of the elastic scattering amplitude (proportional to $\sigma_{opt\,U}$), then deviations from the usual theory may pop up! §
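The bookkeeping of (57)-(61) is easy to check numerically. In the sketch below (the beam area and cross-section values are invented for the illustration), the pre-normalization probabilities fail to sum to one whenever $\sigma_{tot\,U} \neq \sigma_{opt\,U}$, dividing by their sum restores total probability one, and for $A$ much larger than the cross sections $P_{sc}$ approaches $\sigma_{tot\,U}/A$.

```python
# Toy numbers (invented): beam area A and formal cross sections, same units.
A = 1000.0
sigma_tot = 3.0      # sigma_tot_U: formal total scattering cross section
sigma_opt = 2.0      # sigma_opt_U: value the optical theorem would require

P_no_sc_U = (A - sigma_opt) / A          # eq. (57)
P_sc_U = sigma_tot / A                   # eq. (58)
pre_sum = P_sc_U + P_no_sc_U             # eq. (59): 1 + (tot - opt)/A, not 1

P_sc = P_sc_U / pre_sum                  # eq. (60)
P_no_sc = P_no_sc_U / pre_sum            # eq. (61)
total = P_sc + P_no_sc                   # exactly 1 after normalization
```

With these numbers $P_{sc} = 3/1001$, within a fraction of a percent of the naive $\sigma_{tot\,U}/A = 0.003$, as the $A \gg \sigma$ limits in (60)-(61) indicate.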
9. Some discussion of the interpretation of our model
It is obviously of great importance for the viability of our model that the features of a solution deciding whether it gets realized lie dominantly in the era of the "Big Bang", or at least in the past compared to our time. Otherwise we would not have, even approximately, the rules of physics as we know them, especially the second law of thermodynamics and the fact that we easily find big/macroscopic amounts of a special material (whereas we cannot get mixtures to separate as time progresses without making use of other chemicals or free-energy sources). A good hypothesis for arranging such a phenomenologically wished-for result would be that the different (possible) solutions, due to the physics of the early, Big Bang, era, have a huge spread in the contribution $S_{I\,BB\,era} = \int_{BB\,era} L_I\,dt$ from this era. If the variation $\Delta S_{I\,BB\,era}$ of $S_{I\,BB\,era}$, due to varying the solution in the Big Bang era, is very big compared to, say, the fluctuations $\Delta S_{I\,our\,era}$ of the contribution $S_{I\,our\,era}$ from our own era, then the solution chosen to be realized will be dominantly influenced by what happened at the Big Bang times rather than today. However, it may still be important whether:

1) The realized solution is (essentially) the one with the smallest possible $S_I$ at all, or

2) There is such a huge number of solutions to $\delta S_R = 0$, and such a huge increase in their number when allowing for a somewhat bigger $S_I$ than the minimal one, that one gets so many times more solutions that it overcompensates the probability (density) factor $e^{-S_I}$.

In case 1), of just the minimal-$S_I$ solution being realized, the importance of the $S_I$-contributions from the non-dominant eras can be almost completely competed out.
In case 2), in which the realized solution is still at first randomly chosen among a very large number of solutions, it seems unavoidable that, among two possible trajectories deviating in their $S_I$ by some amount of order unity from today's era (say even a contribution of order unity understandable to us), the one with the smaller (i.e. more negative) $S_I$ will be appreciably more likely than the other one. This case 2) situation seems to lead to effects that would be very difficult to keep so hidden that nobody had spotted them until today. At least when working with relativistic particles we would expect big effects in possibility 2): scattering-angle distributions in relativistic scattering processes could be significantly influenced by how long the scattered particle would be allowed to keep its relativistic velocity after the scattering. If the scattered particle were allowed to escape to outer space, we would expect a strongly deformed scattering-angle spectrum, whereas a rapid stopping of the scattered particles would diminish the deformation of the angular distribution relative to that of the usual real-action theory.
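The competition between the weight factor and the counting of solutions can be sketched in a few lines (all numbers invented for the illustration; the exponential growth law for the number of solutions is an assumption made only for this toy example): if the number of solutions near a given $S_I$ grows like $e^{\rho S_I}$, the weighted count $e^{\rho S_I} e^{-S_I}$ still grows with $S_I$ whenever $\rho > 1$, which is the case 2) overcompensation, while for $\rho < 1$ the minimal-$S_I$ solution wins as in case 1).

```python
import math

def weighted_count(S_I, rho):
    # (number of solutions near S_I) x (probability weight e^{-S_I})
    return math.exp(rho * S_I) * math.exp(-S_I)

grows = weighted_count(10.0, 1.5) > weighted_count(1.0, 1.5)     # rho > 1: case 2)
shrinks = weighted_count(10.0, 0.5) < weighted_count(1.0, 0.5)   # rho < 1: case 1)

# Within case 2), two trajectories whose S_I differ by Delta S_I ~ 1 from
# today's era have relative probability e^{-Delta S_I}:
rel_prob = math.exp(-1.0)   # an order-unity, in principle observable, bias
```

This is just arithmetic, but it makes concrete why case 2) predicts order-unity biases that, as argued above, should have been spotted by now.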
So it is really much more attractive with possibility 1), that it is the minimal-$S_I$ solution which is realized. In that case it would also be quite natural that the contribution to $S_I$ from the future, say $S_{I\,future}$, could be quite dominant compared to the contributions from times so near to today that we have sufficient knowledge about them to be able to observe any effects. If there is only of order one, or simply just one, minimal-$S_I$ solution, it could be understandable that it were practically fixed by the very strong contributions in the Big Bang era and by random or complicated-to-evaluate contributions from a very far-reaching future era, so that the near-to-today contributions to $S_I$ would be quite unimportant. It should be obvious that in case 1) the effects of practically accessible $S_I$-contributions, depending on quantities measured in an actual experiment, will be dominated out and thus strongly suppressed: they will drown in contributions that are, for us, random, coming from the future, or in the more organized contributions from Big Bang times determining the initial state. In the classical approximation the Big Bang initial-time contributions to $S_I$ may be all-dominant, but quantum mechanically we typically have a prepared state $|i\rangle$ which will be given by the Big Bang era $S_I$-contributions, while the measurement of a final state in principle could be more sensitive to the future $S_I$-contributions. Now, however, under possibility 1) of the single solution with the totally minimal $S_I$ being picked, the state $|B(t)\rangle$ will most likely be dominantly determined from the far future and will have, compared to that, very little dependence on the $S_I$-contributions of the near future (a rather short time relative to the far future).
By the argument that the real part $S_R$ causes there to be a complicated development through time, which we take to be ergodic, one thus obtains that $|B(t)\rangle$, up to an overall scale, becomes a random state selected with the same probability among all states in the Hilbert space. In this case 1) situation we thus derive rather convincingly, in the ergodicity approximation, that we can indeed approximate

$$ \frac{|B(t)\rangle\langle B(t)|}{\langle B(t)|B(t)\rangle} \sim \mathbf{1}. \qquad (62) $$

Really it is better to think forward to a moment of time, say $t_{erg}$, which is on the one hand early enough that we can use the "ergodicity approximation", and on the other hand late enough that the later $L_I$'s are in practice zero over the time scales of the experiment, so that $H$ from $t_{erg}$ on can be counted as practically Hermitean. This should mean that at that time $t_{erg}$ the system has fallen back to usual states in which $L_I$ is trivial. Then at $t = t_{erg}$ we simply have (43), and we have it for $t$ later than $t_{erg}$ too, i.e. in practice

$$ |B(t)\rangle\langle B(t)| \propto \mathbf{1} \qquad (63) $$

for $t \ge t_{erg}$. Thus the approximation (43) is even self-consistent for the times later than $t_{erg}$. This self-consistency of having (43) at one moment of time is not true over time intervals over which we do not effectively have a Hermitean Hamiltonian $H$.

9.1. Development to final S-matrix
We may now develop formula (53) by using, instead of $|B(t_f)\rangle$, a $B$-state for a time later than $t_{erg}$, or we simply use $|B(t_{erg})\rangle$ just at the time $t_{erg}$. Then the transition probability from $|i\rangle$ to $|f\rangle$, i.e. (53), becomes (using (43) at $t_{erg}$)

$$ P(|f\rangle) = \frac{\left|\langle B(t_{erg})|U_{t_f\to t_{erg}}|f\rangle\right|^2 \cdot |\langle f|U|i\rangle|^2}{\left|\langle B(t_{erg})|U_{t_f\to t_{erg}}U|i\rangle\right|^2} = \frac{\left\|U_{t_f\to t_{erg}}|f\rangle\right\|^2 \cdot |\langle f|U|i\rangle|^2}{\left\|U_{t_f\to t_{erg}}U|i\rangle\right\|^2}, \qquad (64) $$

where we have defined the time development operator $U_{t_f\to t_{erg}}$, performing with the non-Hermitean Hamiltonian $H$ the development from $t_f$ to the even later time $t_{erg}$. We also define the analogous development operator from the initial time $t_i$ of the experiment to the time $t_{erg}$, from which on we practically can ignore $L_I$, to be called $U_{t_i\to t_{erg}}$. Since really, analogously, $U = U_{t_i\to t_f}$, we have

$$ U_{t_i\to t_{erg}} = U_{t_f\to t_{erg}}\,U. \qquad (65) $$

We can now simplify (64) by introducing the states $|f\rangle$ and $|i\rangle$ referred, by time propagation, to the time $t_{erg}$, defining

$$ |f\rangle_{erg} = U_{t_f\to t_{erg}}|f\rangle; \qquad |i\rangle_{erg} = U_{t_i\to t_{erg}}|i\rangle = U_{t_f\to t_{erg}}U|i\rangle. \qquad (66) $$

In these terms the $|i\rangle$ to $|f\rangle$ transition probability (64) develops into

$$ P(|f\rangle) = \frac{\left\||f\rangle_{erg}\right\|^2 \left|{}_{erg}\langle f|\big(U^{-1}_{t_f\to t_{erg}}\big)^{\dagger} U^{-1}_{t_f\to t_{erg}}|i\rangle_{erg}\right|^2}{\left\||i\rangle_{erg}\right\|^2} = f\,\left|{}_f\langle f|i\rangle_f\right|^2\,i, \qquad (67) $$
where the inner product ${}_f\langle f|i\rangle_f$ is taken between the correspondingly normalized states at time $t_f$, and where we have defined

$$ f = \frac{\left\||f\rangle_{erg}\right\|^2}{\left\||f\rangle_f\right\|^2} = \frac{\left\||f\rangle_{erg}\right\|^2}{\left\|U^{-1}_{t_f\to t_{erg}}|f\rangle_{erg}\right\|^2} \qquad (68) $$

and

$$ i = \frac{\left\||i\rangle_f\right\|^2}{\left\||i\rangle_{erg}\right\|^2} = \frac{\left\|U^{-1}_{t_f\to t_{erg}}|i\rangle_{erg}\right\|^2}{\left\||i\rangle_{erg}\right\|^2}. \qquad (69) $$

Obviously we would, for normalized basis vectors $|i_j\rangle_f$ and $|f_k\rangle_f$ of sets involving respectively $|i\rangle_f$ and $|f\rangle_f$, have that the matrix ${}_f\langle f_j|i_k\rangle_f$ is unitary. This means that our expression (67) has so far got the form of a unitary, almost normal, S-matrix, modified by means of the extra factors $i$ and $f$, depending only on the initial and final states respectively, (68), (69). §
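Several of the operator statements used above lend themselves to a quick numerical check. The sketch below (an invented random 3×3 non-Hermitean $H$ standing in for the complex-Lagrangian Hamiltonian; not a computation from the paper) verifies that $U = e^{-iH\Delta t}$ is non-unitary, that the development operators nevertheless compose as in (65), and that the overlap $\langle B(t)|A(t)\rangle$ of (48) stays constant when $|A\rangle$ evolves with $H$ and $|B\rangle$ with $H^{\dagger}$.

```python
import numpy as np

def expm(M):
    # Matrix exponential via eigendecomposition (adequate for the
    # generic, diagonalizable random matrices used here).
    w, V = np.linalg.eig(M)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

rng = np.random.default_rng(4)
H = 0.3 * (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))  # non-Hermitean
t_i, t_f, t_erg = 0.0, 1.0, 3.0

U = expm(-1j * H * (t_f - t_i))            # U = U_{t_i -> t_f}
U_f_erg = expm(-1j * H * (t_erg - t_f))    # U_{t_f -> t_erg}
U_i_erg = expm(-1j * H * (t_erg - t_i))    # U_{t_i -> t_erg}

nonunitarity = np.linalg.norm(U.conj().T @ U - np.eye(3))      # > 0 for complex H
composition_error = np.linalg.norm(U_i_erg - U_f_erg @ U)      # eq. (65): ~ 0

A0 = rng.normal(size=3) + 1j * rng.normal(size=3)
B0 = rng.normal(size=3) + 1j * rng.normal(size=3)
overlaps = []
for t in (0.0, 0.5, 1.0):
    A_t = expm(-1j * H * t) @ A0               # i d/dt |A> = H |A>,  eq. (45)
    B_t = expm(-1j * H.conj().T * t) @ B0      # i d/dt |B> = H† |B>, eq. (46)
    overlaps.append(np.vdot(B_t, A_t))         # <B(t)|A(t)>
overlap_drift = max(abs(o - overlaps[0]) for o in overlaps)    # eq. (48): ~ 0
```

The constancy of the overlap follows algebraically from $\langle B(t)| = \langle B(0)|e^{iHt}$ cancelling $e^{-iHt}$ in $|A(t)\rangle$, exactly the content of (48), while the individual norms do drift because $U$ is non-unitary.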
10. Expected effects of performing the measurement
Normally in quantum mechanics the very measurement process seems to play a significant role. Here we shall argue that, depending upon whether our model is working in case 1), of the solution path with the absolutely minimal $S_I$, or in case 2), of there being so many more solutions with a somewhat higher $S_I$ that statistically one of these not-truly-minimal-$S_I$ solutions becomes the realized one, the performance of the measurement process comes to play a role in our model. Indeed we shall argue that the first derived formulas, such as (67) e.g., are only true, including the measurement process, in case 2) of a not-truly-minimal-$S_I$ solution being realized. In case 1) we shall however see that almost all effects of our imaginary-part model disappear.

10.1. The extra and random contribution to $S_I$ depending on the measurement result

We want to argue that we may take it that we obtain an extra contribution to $S_I$ effectively depending on the measurement result. This contribution we can take to be random but with expectation value zero, so that we think of it as a pure fluctuation. In order to argue for such a fluctuating contribution, let us remark that a very important feature of a quantum-mechanical measurement is that it is associated with an enhancement mechanism. That could e.g. be the crystallization of some material caused by a tiny bit of light (a photon) or some other single particle. A typical example could also be the bubble in the bubble chamber: again, a single particle passing through, or a single electron excited by it, causes a bubble containing a huge number of particles to form out of the overheated fluid. We can call such processes enhancements because in them a little effect causes a much bigger effect, and even a big effect of a regular type. We can give a good description of the bubble in simple words; it is not only like the butterfly in the "butterfly effect", which also in the long run can cause big effects.
The latter ones are practically almost uncalculable, while the bubble formation caused by the particle in the bubble chamber is so well understood that we use it to effectively "see" the particles. When a measurement of some quantity $O$, say, is performed, it means that these regular or systematic enhancement effects reflect the value of $O$ found; otherwise it would not be a measurement of $O$. The information about the then-not-measured conjugate variable of $O$ is not in the same way regularly or systematically enhanced. So it is the $O$-value, rather than the value of its canonically conjugate variable $\Pi_O$, which gets enhanced. Thus the consequences for the future of the measurement time $t_{measurement}$, and thus for the $S_I$-contribution

$$ S_{I\,fut.\,mes.} = \int_{t_{measurement}}^{\infty} L_I\,dt, \qquad (70) $$

also depend on $O$ (rather than on $\Pi_O$). It is this contribution, which is of course in the end in general very hard to compute, that we want to consider as practically random numbers depending on the measured $O$-value (in the measurement considered). We must imagine that the various consequences of the result of the measurement of $O$ become at least a macroscopic signal in the electronics or in the brain of the experimenter, and very likely even somehow come to influence a publication about that experiment. So in turn it influences the history of science, the history of humans, and in the end all of nature. That cannot avoid meaning that we have a rather big $O$-dependent contribution $S_{If}(O')$ to the integral (70), where $O'$ is the measured value of $O$.

10.2. Significance of case 1) versus case 2)
We may a priori expect that such O-dependent contributions S_If(O′), being integrals over the whole future of essentially macroscopic contributions, will be much bigger than the contribution to S_I coming from the very quantum experiment during the relatively much smaller time span t_f − t_i during which, say, the scatterings or the like take place. The only exception would seem to be if there were some parameter -the imaginary part m_HI of the Higgs mass square, say- which were much larger in order of magnitude than the usual contributions to S_I and especially to the S_If(O′)'s. Barring such an enormous contribution during the short period t_i to t_f, it is then S_If(O′) that dominates over the S_I-contributions from the "scattering time".

10.3. Case 2): A solution among many
If our model is working in the case 2), of it being not the very minimal S_I solution that gets realized but rather a solution belonging to a much more copiously populated range in S_I, we should really not call our O-value dependent contribution just S_If(O′), but rather S_If(O′, sol). By doing that we emphasize that we obtain (in general) completely different contributions for a given O-value depending on which one among the many solutions gets realized.

In this case 2) it is in fact rather the average over the many possible solutions of e^{−S_I}, evaluated under the restriction of the various O-values being realized, that gives the probabilities for these various O-values O′. Since we would, however, at least assume as our ansatz that the S_If(O′) distributions are the same for the various O-values, the relative probabilities of the various measurement results O′ or O′′ for the quantity O will not be much influenced by the S_If(O′, sol) contributions. Thus the relatively small contribution from the short t_i to t_f time of just a few particles may have a chance to make itself felt, and formulas derived with such effects are expected to be o.k. in this case 2).

10.4. Case 1): Absolute minimal S_I solution realized

In the case 1), on the other hand, of only one absolutely minimal S_I solution being realized, we do not have to worry so much about the dependence on the many different solutions with a given O-value O′, because only the one with the minimal S_I has a chance at all. In this case 1) we could imagine that S_If(O′) should be defined as corresponding to the (classical) solution going through O = O′ and having, among that class of solutions, the minimal S_I.
Since the contribution S_If(O′) is still much bigger than the S_I-contribution from the scattering experiment, say (i.e. from the t_i to t_f experiment considered), we see that in this case 1) the contributions from the experiment time proper have no chance to come through. The effects will almost completely drown in the effectively random S_If(O′) contribution. That should make it exceedingly difficult to see any effects at all of our imaginary action S_I model in this case 1). That makes our case 1) very attractive phenomenologically, because after all no effects of influence from the future have been observed at all so far.
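The statistical point of this section can be sketched numerically. The toy model below is entirely our own illustration (the Gaussian spread, the sample size, and the small during-experiment values 0.10 and 0.30 are invented numbers, not quantities from the text): every measurement result O′ is given the same pool of huge random future contributions S_If(sol), as in the ansatz that their distributions are the same for the various O-values, plus a small during-experiment part; in the ratio of the e^{−S_I}-averaged weights the future parts cancel and only the small during-experiment difference survives, as claimed for case 2).

```python
import math
import random

random.seed(1)

# One common pool of "future" contributions S_If(sol): the ansatz of the
# text is that their distribution is the same whatever O-value is measured.
future = [random.gauss(0.0, 50.0) for _ in range(10_000)]

def relative_weight(s_during: float) -> float:
    """Sum over solutions of exp(-S_I), with S_I = S_If(sol) + s_during.
    s_during is the small contribution from the experiment time t_i..t_f."""
    m = min(future)  # common offset keeps exp() in range; cancels in ratios
    return sum(math.exp(-(s + s_during - m)) for s in future)

# Two hypothetical measurement results with slightly different
# during-experiment imaginary action (numbers invented for illustration):
w1 = relative_weight(0.10)
w2 = relative_weight(0.30)
ratio = w1 / w2
print(ratio, math.exp(0.20))  # the huge random future parts cancel
```

With independent rather than shared future fluctuations the cancellation would hold only on average; that is the sense in which the assumption of identical S_If distributions enters the argument.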
11. How to correct our formulas for case 1)?
The simplest way to argue for the formalism for the case 1) of a single minimal -the lowest S_I one- solution being realized is to use the classical approximation, in spite of the fact that we are truly wanting to consider quantum experiments: Imagine almost the whole way a classical solution which has the measured value of O being O′ and in addition has the minimal S_I among all the solutions with this property, and call the S_I for that solution S_I min(O′); then the realized solution, under the assumption of our case 1), should be that for the O′ which gives the minimal S_I min(O′).

Now the basic argumentation for the unimportance of the S_I-contribution S_I^{during exp}, coming from the particles in the period in which they are considered scattering particles and described by an S-matrix U, goes like this: Imagine that we consider two different calculations, one a) in which we just formally switch these "during S-matrix" contributions S_I^{during exp} off, and one b) in which these contributions S_I^{during exp} are included. If the differences between the various S_I min(O′) (for the different eigenvalues of the measured quantity O, called O′) deviate by amounts of imaginary action much bigger than the contribution S_I^{during exp}, i.e. if

S_I^{during exp} ≪ S_I min(O′) − S_I min(O′′)   (71)

typically, then the switching on of the S_I^{during exp}-contribution, i.e. going from a) to b), will essentially never change which of the S_I min(O′) imaginary action values will be the minimal one.
Thus under the assumption (71) the switching on or off of S_I^{during exp} makes only little difference for the measurement result O′, and we can as well use the calculation a) with S_I^{during exp} switched off. But that means that, provided (71), we can in the S-matrix formulas totally leave out the imaginary action.

This is of course a very important result which, as we stressed already, only applies in the case 1) that the single classical solution with the absolutely minimal S_I is the one chosen. Really it means that there must not be so many solutions with higher (but perhaps numerically smaller negative) S_I-values that the better chance of one of these higher S_I ones can compensate for the weight factor e^{−S_I}.

11.1. A further caveat
There is a slightly different way in which the hypothesis -case 1)- of only the single classical solution with minimal S_I being realized may be violated: There might occur a quantum experiment with a significant interference between two different possible classical solutions. One might for instance think of the famous double slit experiment, discussed so much between Bohr and Einstein, in which a particle seems to have passed through two slits, without anybody being able to know which one without converting the experiment into a different one. In order to reproduce the correct interferences it is crucial that there is more than one classical path involved. This means that for such interference experiments the hypothesis of case 1), of only one classical path being realized, is formally logically violated. It is, however, violated in a slightly different way than what we described as case 2). Even if we have to first approximation case 1), in the sense that, except for a few short intervals in time, there is only one single classical solution selected, namely the one with minimal S_I, even a not so terribly big S_I^{during exp} contribution being different for, say, the passage through the two different slits in the double slit experiment could cause suppression of the quantum amplitude for one of the slits relative to that for the other one. Such a suppression would of course disturb the interference pattern and thus cause an in general observable effect. The reason that our argument for no observable effects of S_I^{during exp} in case 1) does not work in the case here of two interfering classical solutions is that both of them end up with the same measured value O′ of O. Then of course it does not make any difference whether the typical difference S_I min(O′) − S_I min(O′′) is big or not.
Well, it may be more correct to say that the difference in S_I for two interfering paths in, say, a double slit is only of the order of S_I^{during exp}, and we must include both paths in our calculation if there shall be interference at all. Ever so big S_I-contributions later in the future cannot distinguish and choose the one path relative to another one interfering with it. If there were effects in the future of which of the interfering paths had been chosen -so that they could give future S_I-contributions- it would be like the measurements of which path, which are precisely impossible without spoiling the interference of the experiment.
Conclusion about our general suppression of S_I-effects

The above discussion means that, under the hypothesis that apart from interfering paths the realized classical solution -or more precisely the bunch of effectively realized interfering classical solutions- is uniquely the bunch with minimal S_I, essentially case 1), we cannot observe S_I^{during exp}, except through the modification it causes in the interference. Apart from this disturbance of interference patterns, the effect of S_I concerns the very selection of the classical path realized; but because our accessible time for practically observing corrections is so short compared to future and past, such effects are practically negligible, except for how they might have governed cosmology. On top of this suppression of our S_I-effects to only occur for interference or for cosmologically important decisions, we have that massless or conserved nonrelativistic particles cause (further) suppression. So you would actually have the best hopes for seeing S_I-effects in interference between massive particles on paths deviating with relativistic speeds, or using non-conserved particles such as the Higgs particle.

Actually we shall argue that using the Higgs particle is likely to be especially promising for observing effects of the imaginary part of the action. It is not only that we here have a particle with mass different from zero which is not conserved, but also that it well could be that the imaginary part m_HI of the Higgs mass square were exceptionally big compared to other contributions at accessible energy scales. The point is that it is related to the well known hierarchy problem that the Higgs real mass square term is surprisingly small.
Provided no similar "theoretical surprise" also makes the imaginary part of the Higgs mass square small, the size of m_HI would from the experimental scales point of view be tremendously big.

One of the most promising places to look for imaginary action effects is indeed suggested to be where one could have interference between paths with Higgs particles existing over different time intervals. This sort of interference is exactly what is observed if one measures a Higgs particle Breit-Wigner mass distribution by measuring the mass of actual Higgs particle decay products sufficiently accurately to evaluate the shape of the peak. In the rest of the present article we shall indeed study what our imaginary action model is likely to suggest as a modification of the Breit-Wigner Higgs mass distribution expected in usual models. We shall indeed suggest that in our model there will likely be a significantly broader Higgs mass distribution than expected in conventional models. In fact we at the end argue for a Higgs mass distribution essentially of the shape of the square root of the usual Breit-Wigner distribution. It means it will fall off like |M − M_Higgs|^{−1} for large |M − M_Higgs| rather than as |M − M_Higgs|^{−2} as in the usual theory.
12. Higgs broadening
It is intuitively suggestive that if Higgs particles so disfavour a solution to be realized that Higgs production is suppressed, if needed by almost miraculous events, then we would also not expect the Higgs to survive for the lifetime expected from usual physics. At first one might think that the usual survival amplitude, whose square gives the probability

∝ e^{−Γt} for t ≥ t_HIGGS CREATION   (72)

(let us put t_HIGGS CREATION = 0), would be modified simply by being multiplied by the square root of the probability suppression factor e^{−(m_HI/m_HR) t}, so that we end up with the total decay form of the amplitude -for the Higgs still being there-

e^{−(Γ/2 + m_HI/(2 m_HR)) t}.   (73)

This formula, seemingly representing an effective decay rate of the Higgs, can however hardly be true, because the Higgs can only decay -even effectively- once it is produced, provided it has something to decay to and decays sufficiently strongly. The correction for this must come in via a normalization taking care of the fact that, once we got the Higgs produced in spite of its "destination" of living long, we have to imagine that, since it happened, the S_I contribution from the past were presumably relatively small, so as to compensate for the effect of the Higgs long life. If we really are allowed to think about a specific decay moment for the Higgs, then we should presume that the extra contribution (m_HI/m_HR)τ to S_I from a Higgs living the eigentime τ would, if it were known to live so long, have been cancelled by contributions from before or after. Thus at the end it seems that the whole effect is cancelled if we somehow get to know how long the Higgs lives. Now, however, the typical situation is that even if by some coproduction we may know that a Higgs were born, it will usually decay so that during the decay process it is in a superposition of having decayed and having not yet decayed.
One would then think that, provided we keep the state normalized -as we actually have to, since in our model there is probability unity for having a future- it is only when there is a significant probability in the wave function both for the Higgs still being there and for it having already decayed that there is a possibility for the imaginary Lagrangian contribution L_I to make itself felt by increasing the probability for the decay having taken place.

A very crude estimate would say that if we denote the width Γ_SI = m_HI/m_HR and take it that Γ_SI ≫ Γ_{b b̄} (the main decay, say it were b b̄), then we need the probability for the decay into, say, the main mode b b̄ having already taken place to be of the order Γ/Γ_SI, if we shall have an of-order-unity final disappearance of the Higgs particle.

That might be the true formula to Fourier transform to obtain the Higgs width broadening, were it not for the influence from the future also built into our model. In spite of the fact that a short life for a certain Higgs of course would -provided, as we assume, m_HI > 0- contribute to making bigger the likelihood of a solution with such a short Higgs life, it also influences what goes on in the future relative to the Higgs decay. In this coming time the exact value of the Higgs lifetime in question will typically have very complicated and untransparent, but also very likely big, effects on what will go on. Thus if there are just some S_I-contributions in this future relative to the Higgs decay, the total S_I-contribution may no longer at all be a nice linear function (m_HI/m_HR)τ of the eigenlifetime τ, but likely a very complicated, strongly oscillating function. Now these contributions, coming -as we could say- from the "butterfly wing effect" of the lifetime of the Higgs in question, could easily be very much bigger than the contribution from the Higgs living only for an extremely short time, since the presumed yet-to-exist time of the Universe could e.g. be of the order of tens of millions of years. Even if, for the reason of the imaginary part of the Higgs mass square not being suppressed by the solution-to-the-hierarchy-problem mechanism like the real one, it were say 10 times bigger than the real part, it is not immediately safe whether it could compete with the contribution from the long times. At least, unless the imaginary part m_HI of the coefficient in the Higgs mass term in the Lagrangian density is abnormally large compared to the real part, the contributions to S_I from the future, much longer than the Higgs lifetime order of magnitude, will dominate quite strongly over the contribution to S_I from the short Higgs existence. Under the dominance of the future contribution, the S_I-favoured Higgs lifetime could easily be shifted around in a way we would consider random. In fact the future S_I-contribution will typically depend on some combination of the Higgs lifetime τ and its conjugate variable, its rest energy m. Thereby the dominant S_I-contribution could easily be obtained for there being a lifetime wave function ψ_life(t) which has a distribution in the lifetime t_life.

We may estimate the effective Higgs width after the broadening effect in a couple of different ways:

12.1. Thinking of a time development
In the first estimate we think of a Higgs being produced at some time, τ = 0 say in the rest frame. Now as time goes on -in the beginning- there grows, in the usual picture and in fact also in ours, an amplitude for this initial Higgs having indeed decayed into, say, |b b̄⟩, a state describing a b-quark and an anti-b-quark b̄ having been produced. The latter should in the beginning come with a probability ⟨b b̄|b b̄⟩ ∝ τ Γ_USUAL, where τ is the here assumed small eigentime passed since the originating Higgs were produced and Γ_USUAL is the width of the Higgs particle calculated in the usual way (imagined here, just for pedagogical simplicity, to go to the b b̄ channel alone). In the full amplitude/state vector for the Higgs-or-decay-products system after the time τ we have the two terms

|full⟩ = |H⟩ + |b b̄⟩ = α|H⟩_norm + β|b b̄⟩_norm,   (74)

where |H⟩_norm is a Higgs particle state normalized to unit norm, norm⟨H|H⟩_norm = 1, while |b b̄⟩_norm is the also normalized appropriate b b̄ state. The symbols

|H⟩ = α|H⟩_norm   (75)

and

|b b̄⟩ = β|b b̄⟩_norm   (76)

on the other hand stand for the two parts of the full amplitude or state (74). Now if we take it that -by far- the most important part of the imaginary part of the Lagrangian L_I is the Higgs mass square term (remember the argument that a solution to the problem of the large weak-to-Planck scale ratio could easily solve this problem for the real part of the mass square m_HR while leaving the imaginary part untuned, and thus large from, say, the L.H.C. physics scale point of view), then as τ goes on the probability for the |H⟩ part of the amplitude surviving should, for the S_I-effect reason, fall off exponentially.
That is to say that we at first would say that the amplitude square for this survival, |α(τ)|², should fall off exponentially like

|α(τ)|² ∝ exp(−2 S_I|_Higgs) ≃ exp(−2 L_I|_Higgs τ) ≃ exp(−(|m_HI|/m_HR) τ).   (77)

This can, however, not be quite so simple, since we must keep the normalization conserved to unity, meaning

|α(t)|² + |β(t)|² = 1.   (78)

This equation has to be upheld by an overall normalization. In the situation in the beginning, τ small, and assuming that the "imaginary action width"

Γ_SI ≜ 2|L_I|_Higgs| = |m_HI|/|m_HR| ≫ Γ_USUAL,   (79)

we at first would expect

|β(t)|² ∝ Γ_USUAL τ,   |α(t)|² ∝ exp(−Γ_SI τ),   (80)

but then we must rescale the normalization to ensure (78). Then rather

|α(t)|² ≃ exp(−Γ_SI τ) / (exp(−Γ_SI τ) + Γ_USUAL τ),
|β(t)|² ≃ Γ_USUAL τ / (exp(−Γ_SI τ) + Γ_USUAL τ).   (81)

Inspection of these equations immediately reveals that there is no essential decay away of the (genuine) Higgs particle,

⟨H|H⟩ = |α(τ)|²,   (82)

before the two terms in the normalization denominator

exp(−Γ_SI τ) + Γ_USUAL τ   (83)

reach to become of the same order. In the very beginning the term exp(−Γ_SI τ) ∼ 1 of course dominates over the, for small τ, small term Γ_USUAL τ. But once the situation of the two terms being comparable is reached, the Higgs particle essentially begins to decay. If we therefore want to estimate a very crude effective Higgs decay time in our model,

τ_eff = 1/Γ_eff,   (84)

this effective lifetime τ_eff must be crudely given by

Γ_USUAL τ_eff ≃ exp(−Γ_SI τ_eff).   (85)

From this equation (85) we then deduce, after first defining

X ≜ Γ_SI τ_eff  or  τ_eff ≜ X/Γ_SI,   (86)

that

e^{−X} = (Γ_USUAL/Γ_SI) X   (87)

and thus, ignoring the essentially double-logarithmic log X, that

X ≃ log(Γ_SI/Γ_USUAL).   (88)

Inserting this equation (88) into the definition (86) of X we finally obtain

τ_eff = X/Γ_SI = (1/Γ_SI) log(Γ_SI/Γ_USUAL)   (89)

or

Γ_eff = Γ_SI / log(Γ_SI/Γ_USUAL).   (90)

Thus the effective width Γ_eff of the Higgs which we expect from our model to be effectively seen in the experiments when the Higgs will be or were found (at L.E.P. we actually think it were already found, with the mass 115 GeV) is given by (90). Of course we do not really know m_HI, and thus Γ_SI = |m_HI/m_HR|, to insert into (90), but we may wonder whether, in the case that Γ_SI came out much larger than the Higgs mass, we really should replace it by this Higgs mass instead. The reason is that it sounds a bit crazy to expect an energy distribution in a resonance peak to extend essentially into negative energy for the produced particle, as a width broader than the mass would correspond to. It is therefore suggested -in the case Γ_SI were even bigger- to take Γ_SI ∼ m_HR. We might also suggest the speculation in the conjugate way of saying: we can hardly in quantum mechanics imagine a start for the existence of a Higgs particle so well defined in time that the energy of this Higgs particle, using Heisenberg, becomes so uncertain that it has a big chance of being negative. Also if m_HI, the imaginary part, by some hierarchy-problem related mechanism were tuned to the same order of magnitude as m_HR, we would get (in practice) Γ_SI ∼ m_HR. With this suggestion inserted, our formula (90) gets rewritten into

Γ_eff ≃ m_HR / log(m_HR/Γ_USUAL),   (91)

wherein we for orientation may insert the uncertain L.E.P. finding m_HR ≃ 115 GeV and a usual decay rate Γ_USUAL for such a very light Higgs. The resulting Γ_eff (92) is just a broadening of the Higgs width of the order of magnitude which was extracted from the L.E.P. data in support of the theory of the Higgs mixing with Kaluza-Klein type models.
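The crude estimate (85)-(90) can be checked numerically. The sketch below uses invented order-of-magnitude inputs (the values of g_usual and g_si are arbitrary illustrations, not quantities fixed by the text); it solves the transcendental equation Γ_USUAL τ_eff = exp(−Γ_SI τ_eff) by bisection and compares with the logarithmic approximation:

```python
import math

def tau_eff_exact(gamma_usual: float, gamma_si: float) -> float:
    """Solve Gamma_USUAL * tau = exp(-Gamma_SI * tau), eq. (85), by bisection.
    f(tau) = Gamma_USUAL*tau - exp(-Gamma_SI*tau) is increasing with f(0) < 0."""
    lo, hi = 0.0, 1.0 / gamma_usual          # f(hi) = 1 - exp(...) > 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gamma_usual * mid - math.exp(-gamma_si * mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers with Gamma_SI >> Gamma_USUAL (arbitrary units):
g_usual, g_si = 1.0e-3, 100.0
tau_exact = tau_eff_exact(g_usual, g_si)
tau_approx = math.log(g_si / g_usual) / g_si    # eq. (89)
gamma_eff = g_si / math.log(g_si / g_usual)     # eq. (90)
print(tau_exact, tau_approx, 1.0 / gamma_eff)
```

For Γ_SI/Γ_USUAL ≫ 1 the exact solution and the approximation (89) agree up to the neglected log X correction, i.e. to within a factor of order unity.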
What seems to be in the data is that there were statistically more Higgs candidates even below the now official lower bound for the mass, 114 GeV, and that there have at times even been some findings below that, with insufficient statistics. The suggestion of the present article of course is that these few events were, due to the "broadening" of the Higgs, indeed simply Higgs particles. There were even an event with several GeV higher mass than the "peak" at 115 GeV, but the kinematics at L.E.P. were such that there were hardly possibilities for higher mass candidates.

12.2. Method with fluctuating start for Higgs particle
The second method -which we can also use to obtain an estimate of the Higgs peak shape expected in our model- considers that the Higgs particle is not necessarily created effectively in a fixed-time state, but rather in some (linear) combination of the energy and the start moment time. In practice the experimentalist measures neither the start moment time nor the energy or mass at the start of the Higgs particle life very accurately compared to the scales needed.

Let us first consider the two extreme cases:
1) The Higgs were started (created) with a completely well defined energy (say in some coproduction, with the energy of everything else in addition to the beams having measured energies).
2) The moment of creation were measured accurately.
Then in both cases we consider it that we measure the energy, or rather the mass, of the decay products, say γ + γ or b + b̄, accurately.
Now the question is what amount of Higgs broadening we are expected to see in these two cases:
1) Since we simplify to think of only the one degree of freedom -the distance of the, say, b + b̄ from each other- the determination of the energy (or mass) in the initial state means fully fixing the system, and the energy at the end is completely guaranteed. So by the initial energy the final one is also determined and must occur with probability unity under the presumption of the initial one. It does not mean, however, as is well known from usual real action theory, that there is no Breit-Wigner peak. It is only that if one measures the mass or energy twice, then one shall get the same result both ways: decay product mass versus production mass.
However, concerning our potential modification by the S_I-effect, it gets totally normalized away in this case 1) of the energy or mass being doubly measured. That should mean that there should be no "Higgs broadening" when one measures the mass doubly, i.e. both before and after the existence time of the Higgs.
The reason for this cancellation is that, with the measurement of the correct energy once more, the matrix element squared ratio can be complemented by adding similar terms with the now energy/mass eigenstate |f⟩ replaced by the other, non-achievable -by energy conservation- energy eigenstates. Because the energy eigenstates other than the measured one |f⟩ cannot occur, the terms here proposed to be added are of course zero, and it is o.k. to add them. After this addition, and using that the sum

Σ_E |E⟩⟨E| = 1   (93)

over the complete set of energy eigenstates is of course the unit operator 1, we see that numerator and denominator become the same (and we are left with only the IFFF factor, which we ignore by putting it to unity).
Thus we get in this case of initial energy fixation no S_I-effect.
2) In the opposite case of a prepared starting time -corresponding to the in practice unachievable measurement of precisely when the Higgs got created- we would find, in the usual case, the energy, i.e. mass (in the Higgs c.m.s.), distribution by Fourier transforming the exponential decay amplitude ∝ exp(−(Γ_USUAL/2) τ). Now, however, one must take into account that this amplitude is further suppressed the larger the Higgs existence time τ, due to the theory-caused extra term in the imaginary part of the action,

S_I|_FROM HIGGS LIFE = (Γ_SI/2) τ.   (94)

This acts as an increased rate of decay of the Higgs particle, as if it had the total decay rate Γ_SI + Γ_USUAL. If Γ_SI ≫ Γ_USUAL, that of course means a much broader Higgs Breit-Wigner peak.
Presumably, though, to avoid the problem with negative energies of the Higgs particle, we should only take a total width up to the order of the Higgs mass m_HR seriously. So, as under 1), we suggest in practice just to put Γ_SI ∼ m_HR.

In general some combination of the starting time and the energy of the Higgs will be effectively fixed by the contributions to S_I. Such a fixation by S_I, in past or in future, becomes effectively a preparation of the Higgs state produced. Now it is, however, not under our control -in the case we did not ourselves measure- what was measured: the start time, the start energy, or some combination? Most likely it is of course some combination of energy (i.e. actual mass) and start time which gets "measured" effectively by our S_I-effects. Especially concerning the part of this "measurement" that is determined by the future S_I-contribution -the "measurement" finally being done by S_I at a very late time t- it will have been canonically transformed around, corresponding to time developments over huge time intervals.
Such enormous canonical transformations, in going from what S_I effectively depends on to what becomes the initial preparation setting of the Higgs initial state in question, mean that the latter will have been smeared out by huge canonical transformations. We shall take the effect of the huge or many canonical transformations -which we are forced to consider random- to imply that the probability distribution for the combinations of starting time and energy that were effectively "measured" or prepared by the S_I-effects should be invariant under canonical transformations. Such a canonical-transformation-invariant distribution of the combination quantity to be taken as measured seems anyway a very natural assumption to make. Our arguments about the very many successive canonical transformations needed to connect the times at which the important L_I-contributions come to the time of the Higgs state being delivered were just to support this in any case very natural hypothesis of a canonically invariant distribution of the combination, which say linearized would be

a H_Higgs + b t_start.   (95)

Here a and b are the coefficients specifying the combination that were effectively "measured", or prepared, by the S_I-effects for the Higgs at the start of its existence. We are allowed to consider the starting time as a dynamical variable instead of a time because it can be transformed to being essentially the geometrical distance between the decay products b + b̄, say. Then it is (essentially) the conjugate variable to the actual mass, or energy in the rest frame of the Higgs, called here H_Higgs. It is not difficult to see that under canonical transformations we can scale H_Higgs and t_start oppositely by the same factor:

H_Higgs → λ H_Higgs,  t_start → λ^{−1} t_start.   (96)

Thus the corresponding transformation of the coefficients a and b is also a scaling in opposite directions:

a → λ^{−1} a,  b → λ b.   (97)

We can if we wish normalize to, say, ab = 1.
The distribution invariant under the canonical transformations will now be a distribution flat in the logarithm of a, or of b, say

dP ≃ d log a.   (98)

In practice it will turn out that the experimentalist has made some extremely crude measurements of both, so there are some cut-offs making it irrelevant that the canonically invariant distribution (98) is not formally normalizable. With a distribution of this d log a form it is suggested that very crudely we shall get a geometrical average of the results of the two possible end points 1) and 2) above. This means that we in first approximation suggest, as the replacement for the Breit-Wigner peak formula of the usual theory, the geometrical mean of the two Breit-Wigners corresponding to the two extreme cases discussed above: 1) energy prepared: Breit-Wigner with Γ_USUAL only, and 2) t_start prepared: Breit-Wigner with Γ_USUAL + Γ_SI. In all circumstances we must normalize the peak in order that the principle of just one future, which is indeed realized in our model, is followed. That is to say that the replacement for the Breit-Wigner in our model becomes crudely

D_BW^{OUR MODEL}(E) = N sqrt( D_BW^{Γ_USUAL+Γ_SI}(E) · D_BW^{Γ_USUAL}(E) ).   (99)

If we assume Γ_SI large, we may take the broad Breit-Wigner D_BW^{Γ_USUAL+Γ_SI}(E) to be roughly a constant as a function of the actual Higgs rest energy E. In this case we get simply

D_BW^{OUR MODEL}(E) ≃ N̂ sqrt( D_BW^{Γ_USUAL}(E) ),   (100)

where N̂ is just a new normalization constant instead of the normalization constant N in the foregoing formula. The crux of the matter is that we argue that our model modifies the usual Breit-Wigner by taking the square root of it and then normalizing it again. The total number of Higgses produced should be (about) the same as usual. But our model predicts a broader distribution, behaving like the square root of the usual one.

12.3. Please look for this broadening
This square-rooted Breit-Wigner is something it should be highly possible to look for experimentally in any Higgs producing collider, and according to the above mentioned bad-statistics data from L.E.P. it may already be claimed to have been weakly seen; but of course, since the seeing of the Higgs itself were very doubtful at L.E.P. at 115 GeV, the broadening is even less statistically supported, though it certainly looks promising. The tail behaviour of a square-rooted Breit-Wigner falls off like

const./|E − M_H|   (101)

rather than as the faster fall-off of the usual theory,

const./|E − M_H|²,   (102)

where M_H is the Higgs (resonance) mass and E is the actual decay rest system energy. Let us notice that the integral of the tail in our model (broadened Higgs) leads to a logarithmic dependence:

∫ dE/|E − M_H| ≃ log|E − M_H|.   (103)

If for instance we cut this |E − M_H|^{−1} distribution off below at |E − M_H| ∼ Γ_USUAL and above at |E − M_H| ∼ M_H, every logarithmic interval in between carries the same probability, so an appreciable fraction of the events ends up far out in the tails. This |E − M_H|^{−1} probability distribution should be especially nice to look for because it would, so to speak, show up at all scales of accuracy of measuring the actual mass of the Higgses produced. So there should really be good chances for looking for our broadening as soon as one gets any Higgs data at all. Let us also remark that the distribution integrating to a logarithm of |E − M_H|, obtained by this our second method, is in reality very little different from the result obtained by method number one. So we can consider the two methods as checking and supporting each other.
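A small numerical sketch of the square-rooted Breit-Wigner confirms the two tail statements above (the 115 GeV mass is taken from the text; the value chosen for Γ_USUAL is an arbitrary illustration, not a prediction): the density falls off like 1/|E − M_H| as in (101), so doubling the distance from the peak halves it, and equal logarithmic intervals of |E − M_H| carry (about) equal weight, which is the log-divergence of (103).

```python
import math

M_H, gamma = 115.0, 1.0e-3   # GeV; gamma is an invented illustrative width

def bw(e: float) -> float:
    """Usual (non-relativistic) Breit-Wigner shape, with a (102)-type tail."""
    return 1.0 / ((e - M_H) ** 2 + gamma ** 2 / 4.0)

def sqrt_bw(e: float) -> float:
    """Square-rooted Breit-Wigner of eq. (100), up to normalization."""
    return math.sqrt(bw(e))

# Tail of sqrt(BW): doubling the distance from the peak halves the density,
# i.e. a 1/|E - M_H| fall-off, eq. (101).
r = sqrt_bw(M_H + 20 * gamma) / sqrt_bw(M_H + 40 * gamma)
print(r)  # close to 2

def tail_weight(a: float, b: float, n: int = 100_000) -> float:
    """Unnormalized integral of sqrt_bw over [a, b] by the midpoint rule."""
    h = (b - a) / n
    return sum(sqrt_bw(a + (i + 0.5) * h) for i in range(n)) * h

# Equal logarithmic intervals carry (about) equal weight, eq. (103):
p1 = tail_weight(M_H + 10 * gamma, M_H + 100 * gamma)
p2 = tail_weight(M_H + 100 * gamma, M_H + 1000 * gamma)
print(p1, p2)  # roughly equal
```

Repeating the same two checks with bw itself in place of sqrt_bw gives a ratio near 4 and a second decade carrying far less weight, which is the contrast with the usual theory that the text proposes to look for.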
13. Conclusion
We have in the present article sought to develop the consequences of the action S[path] not being real but having an imaginary part S_I[path], so that S[path] = S_R[path] + i S_I[path], using it in a Feynman path-way integral

∫ e^{(i/ℏ) S[path]} Dpath,   (105)

understood to be over paths extending over all times.

Our first-approximation result was that with a bit of optimism we can make the observable effects of the imaginary part of the action S_I very small, in spite of the fact that the supposedly huge factor 1/ℏ multiplying S_I in the exponent e^{(i/ℏ)(S_R + i S_I)} suggests that S_I gives tremendous correction factors in the Feynman path integral. The strong suppression of truly visible effects which we have achieved in the present article seems enough to say optimistically that it is not excluded that there could indeed exist an imaginary part of the action in nature! Since we claimed that it is a less elegant assumption to assume the action real, as usual, than to allow it to be complex, finding ways to explain why S_I should not yet have made itself clearly felt would imply that we then presumably have an imaginary component S_I of the action!

The main speculations or assumptions arguing for the practical suppression of all the signs of an imaginary action S_I were:

1) The classical equations of motion become -at least to a good approximation- given by the variation of the real part S_R[path] of the action alone.

2) In the classical approximation the imaginary part S_I rather selects which classical solution should be realized.

3) Under the likely assumption that the possibilities to adjust a classical solution to obtain minimal S_I are better by fine-tuning the solution according to its behavior in the big bang time than today, we obtain the prediction that the main simple features of the solution being realized will be features that could be called initial conditions in the sense of concerning a time in the far past. The
properties of this solution at time t will be less and less simple -with less and less recognizable simple features- as t increases. This is the second law of thermodynamics appearing naturally in our model.

4) To suppress the effects of S_I sufficiently it is important to have what we above called case 1), meaning that there is one well-defined classical path path_min with absolutely minimal S_I -except for a smaller number of paths that follow this path path_min except for shorter times- being realized. This case 1) is the opposite of case 2), in which there are so many paths with a less negative S_I that they become more likely because of their large number, in spite of the probability weight e^{−S_I/ℏ} being smaller.

5) The contributions to the imaginary action S_I from the relatively short times over which we have proper knowledge -the time of the experiment or historical times- are so small compared to the huge past and future time spans that the understandable contributions, like the contribution S_I during exp coming during an experiment, drown and end up having only small influence on which solution has the absolutely minimal S_I. But the huge contributions from, say, the far future we do not understand and in practice must consider random (this actually gives us the randomness in quantum mechanics measurements).

A) The realized solution should rather soon reach a state with a very low (very negative) S_I and preferably stay there. Thus the state over long times should be an approximately stable state. Such a prediction of approximate stability fits very well with the fact that present phenomenological models have a lower bound for the Hamiltonian and are realized after a huge Hubble expansion having brought the temperature so low that there is no severe danger of false vacua or other instabilities, perhaps accessible if higher energies were reachable in particle collisions. By the Hubble expansion and the approximately bounded-from-below Hamiltonian, approximately simply vacuum is achieved.
Imagining that the vacuum achieved has been chosen, via the choice of the solution, to have a very low (in the sense of very negative) imaginary Lagrangian density L_I, such a situation would just be favourable for reaching the minimal S_I.

B) In interference experiments, where often two (roughly) classical solutions or paths, separate for a usually short time, are needed for explaining the interference, it is impossible to hope that huge S_I-contributions from the long future and past time spans can overshadow (∼ dominate out) the imaginary-action contribution S_I during exp coming in the interference experiment. The point is namely that the short-time contribution S_I during exp can be different for the different interfering paths, but since these paths continue jointly as classical solutions in both past and future, they must get exactly the same S_I-contributions from the huge past and future time spans. Thus the difference between the S_I during exp for the interfering paths cannot be dominated out by the longer time spans, and its effect must appear as an observed disturbance of the interference experiment.

We discussed at length what is presumably the most promising case for seeing effects of the imaginary part of the action in an interference experiment: the broadening of the Higgs decay width. In fact, an experiment in which a sharp invariant-mass measurement of the decay products of a Higgs particle -say Higgs → γγ- is performed may be considered a measurement of an interference between paths in which the Higgs particle has "lived" longer or shorter. Since we suggest that the Higgs contributes rather much to S_I the longer it "lives", the quantum amplitudes from the paths with a Higgs that lives longer may be appreciably more suppressed than those with the Higgs being shorter-lived.
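Point B) applied to the Higgs can be put into a small toy model -entirely our own sketch, not the article's quantitative estimate, which leads instead to the square-root shape. Suppose the amplitude of a path in which the Higgs "lives" a time t carries, besides the usual decay factor e^{−Γt/2}, an extra suppression e^{−γ_I t} standing in for the imaginary-action cost of keeping the Higgs alive; Fourier transforming to energy then widens the observed peak from Γ to Γ + 2γ_I:

```python
import numpy as np

# All numbers are invented for illustration; the mass echoes the LEP hint.
M_H = 115.0     # resonance mass in GeV (assumption)
GAMMA = 0.1     # usual decay width in GeV (assumption)
GAMMA_I = 0.2   # hypothetical imaginary-action damping rate in GeV (invented)

def fwhm(gamma_i):
    """Full width at half maximum of the observed peak.

    The time amplitude exp(-i M_H t - (GAMMA/2 + gamma_i) t) for t > 0
    Fourier transforms to A(E) = 1 / (GAMMA/2 + gamma_i - i (E - M_H)),
    so |A(E)|^2 is a Breit-Wigner whose width we read off numerically.
    """
    E = np.linspace(M_H - 2.0, M_H + 2.0, 40001)
    p = np.abs(1.0 / (GAMMA / 2.0 + gamma_i - 1j * (E - M_H))) ** 2
    above = E[p >= p.max() / 2.0]
    return above[-1] - above[0]

print(fwhm(0.0))      # no imaginary action: close to the usual width GAMMA
print(fwhm(GAMMA_I))  # long-lived paths suppressed: close to GAMMA + 2 * GAMMA_I
```

The extra suppression of long-lived Higgs paths thus broadens the peak qualitatively; the article's own crude estimate refines this into the square-rooted Breit-Wigner shape of the previous section.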
This is what disturbs the interference and broadens the Higgs width.

Our estimates lead to the expectation of crudely a shape of the Higgs peak being more like the square root of the Breit-Wigner form than, as in the usual (i.e. S_I = 0) case, just a Breit-Wigner.

We hope that this Higgs broadening effect might be observable experimentally. In fact, if we assume that the Higgs found in Aleph etc. at LEP really was a Higgs, there was some excess of Higgs-like events below the lower bound of 114 GeV for the mass, which could be reminiscent of the broadening.

Presumably the here-preferred case 1) is the right way -something that in principle might be settled if we knew the whole action, both real and imaginary part- but there is also the possibility of case 2), namely that the realized solution is not exactly the one with the absolutely lowest S_I but rather has a somewhat higher S_I, being the most likely due to there being a much higher number of classical solutions with this less extremal S_I-value.

While in case 1) we may in practice only see S_I-effects via the interferences, we may in case 2) possibly obtain a bias -a correction of the probabilities- due to the S_I even if we, for example, measure the position of a particle prepared in a momentum eigenstate. Such effects might have been easier to see, and we thus prefer to hope for -or fit, we can say- our model to work in case 1).

It should be stressed that even with case 1) the arguments for the effects of S_I being suppressed are only approximate and practical. So if there were for some reason an exceptionally strong S_I-contribution, it could not be drowned in the big contributions from past and future but would show up by making the minimal-S_I solution be one in which this numerically very big contribution were minimal itself. This latter possibility could be what one saw when the Superconducting Super Collider (S.S.C.) had the bad luck of not getting funded.
It would have produced so many Higgses that it would have increased S_I so much that it was basically not possible to find the minimal-S_I solution as one with a working S.S.C.

We should at the end mention that we have some other publications with various predictions which were only sporadically alluded to in the present article. For instance, we expect the L.H.C. accelerator to be up against similar bad luck as the S.S.C., and we have even proposed a game of letting a random number decide on restrictions -in luminosity or beam energy- on the running of the L.H.C., so that one could in a clean way see whether there were indeed an effect of "bad luck" for such machines.

With mild extra assumptions -that coupling constants may also adjust under the attempt to minimize S_I- we argued for an assumption beloved by one of us, the "Multiple Point Principle". This principle says that there are many vacua with -at least approximately- the same vacuum energy density (= cosmological constant). Actually we have in an earlier article even argued that our model with such an extra assumption even solves the cosmological constant problem, by explaining why the cosmological constant being small helps to make S_I minimal. Since the "Multiple Point Principle" is promising phenomenologically, and of course a small cosmological constant is strongly called for, it means that the cosmological predictions, including the Hubble expansion and the Hamiltonian bottom, are quite in a good direction and support our hypothesis of complex action.

Acknowledgements
One of us (M.N.) acknowledges the Niels Bohr Institute (Copenhagen) for the hospitality extended to him. The work is supported by Grants-in-Aid for Scientific Research on Priority Areas, Number of Area 763, "Dynamics of Strings and Fields", from the Ministry of Education, Culture, Sports, Science and Technology, Japan. We also acknowledge discussions with colleagues, especially John Renner Hansen, about the S.S.C.

References
1) H. B. Nielsen and M. Ninomiya, "Future Dependent Initial Conditions from Imaginary Part in Lagrangian", Proceedings of the 9th Workshop "What Comes Beyond the Standard Models", Bled, 16-26 September 2006, DMFA Zaloznistvo, Ljubljana, hep-ph/0612032.
2) H. B. Nielsen and M. Ninomiya, "Law Behind Second Law of Thermodynamics - Unification with Cosmology", JHEP 0603, 057 (2006), hep-th/0602020.
3) H. B. Nielsen and M. Ninomiya, "Unification of Cosmology and Second Law of Thermodynamics: Proposal for Solving Cosmological Constant Problem, and Inflation", Prog. Theor. Phys. Vol. 116, No. 5 (2006), hep-th/0509205, YITP-05-43, OIQP-05-09.
4) J. B. Hartle and S. W. Hawking, Phys. Rev. D28, 2960 (1983).
11) S. B. Giddings and A. Strominger, Nucl. Phys. B307, 867 (1988).
12) S. Coleman, Nucl. Phys. B310, 643 (1988).
13) T. Banks, Nucl. Phys. B309, 493 (1988).
14) S. W. Hawking, Phys. Lett. 134B