Unsolved problems in particle physics
Sergey Troitsky
Institute for Nuclear Research of the Russian Academy of Sciences
Abstract
I consider selected (most important according to my own choice) unsolved problems in particle theory, both those related to extensions of the Standard Model (neutrino oscillations, which probably do not fit the usual three-generation scheme; indications in favour of new physics from astrophysical observations; electroweak symmetry breaking and the hierarchy of parameters) and those which appear within the Standard Model (the description of strong interactions at low and intermediate energies).
1 Introduction: status and parameters of the Standard Model
One may compare the current state of quantum field theory and its applications to particle physics with the situation 20-30 years ago and discover, amusingly, that all principal statements of this field of physics are practically unchanged, in contrast with the rapid progress in condensed-matter physics. Indeed, most of the experiments performed during the last two decades supported the correctness of predictions which had been made earlier, derived from models developed earlier. This success of particle theory resulted in considerable stagnation in its development. However, one may expect that in the next few years particle physics will again become an intensively developing area. Firstly, there is a certain amount of accumulated experimental results (first of all related to cosmology and astrophysics, but also obtained in laboratories) which suggest that the Standard Model (SM) is incomplete. Secondly, the theory was developed under the guidance of the principle of naturalness, that is, the requirement to explain quantitatively any hierarchy in model parameters (in the case of the SM, this is possible only within a larger fundamental theory yet to be constructed). Finally, one of the most important arguments for the coming excitement in particle physics is the expectation of new results from the Large Hadron Collider. As will become clear soon, this accelerator will be able to study the full range of energies where the physics responsible for the electroweak symmetry breaking should appear, so we are expecting interesting discoveries in the next few years in any case: either it will be the Higgs boson, or some other new particles, or (in the most interesting case) no new particle will be found, which would suggest a serious reconsideration of the Standard Model.

The Large Hadron Collider (LHC, see e.g. [1]) is an accelerator which collides protons with a center-of-mass energy of up to 14 TeV (currently working at 7 TeV), as well as heavy nuclei.
In a 27-km-long tunnel at the border of Switzerland and France there are four main experimental installations (the general-purpose detectors ATLAS and CMS; LHCb, which is oriented to the study of B mesons; and ALICE, specialized in heavy-ion physics), as well as a few smaller experiments. The first results of the work of the collider have brought a lot of new information on particle interactions, which we will mention when necessary.

The purpose of the present review is to discuss briefly the current state of particle physics and possible prospects for its development. For such a wide subject, the selection of topics is necessarily subjective, and the estimates of the importance of particular problems and of the potential of particular approaches reflect the author's personal opinion, while the bibliography cannot be made exhaustive.

The contemporary situation in particle physics may be described as follows. Most of the modern experimental data are well described by the Standard Model of particle physics, which was created in the 1970s. At the same time, there is a considerable number of indications that the SM is not complete and is no more than a good approximation to the correct description of particles and interactions. We are not speaking now about minor deviations of certain measured observables from theoretically calculated ones: these deviations may be related to insufficient precision of either the measurement or the calculations, to unaccounted systematic errors, or to insufficient sets of experimental data (statistical fluctuations); it happens that such deviations disappear after a few years of more detailed study. On the contrary, we will emphasise more serious qualitative problems of the SM, the latter being considered as an instrument for the quantitative description of elementary particles. These problems include the following:

Figure 1: Particle interactions.
(1) experimental indications of the incompleteness of the SM, namely the well-established experimental observations of neutrino oscillations (which are impossible in the SM, see Sec. 2.4) and the inability of the SM to describe the results of astrophysical observations, in particular those related to the structure and evolution of the Universe;

(2) values of the SM parameters which are not fully natural and not calculable within the theory, in particular the fermion mass hierarchy, the hierarchy of symmetry-breaking scales and the absence of a light (with mass ≲ 100 GeV) Higgs boson;

(3) purely theoretical difficulties in the description of hadrons by means of the available methods of quantum field theory.

We will discuss these unsolved problems of the SM and the related prospects for the development of particle theory.

For future reference, it is useful to recall briefly the structure of the SM (see e.g. [2, 3] and the appendix to [4]). The model includes a certain set of particles and their interactions.

Out of the four known interactions (see Fig. 1), three are described by the SM: the electromagnetic, weak and strong ones. The first two of them have a common electroweak gauge interaction behind them. The symmetry of this interaction, SU(2)_L × U(1)_Y, manifests itself at energies higher than ∼ 200 GeV. At lower energies, this symmetry is broken down to U(1)_EM ≠ U(1)_Y (the electroweak symmetry breaking); in the SM, this breaking is related to the vacuum expectation value of a scalar field, the Higgs boson. The parameters of the electroweak breaking are known to a high precision; experimental data are in perfect agreement with the theory. The Higgs boson has not been observed yet; its mass, being a free parameter of the theory, is bounded by direct experimental searches (see Table 1 and more details in Sec. 4.1).

The strong interaction in the SM is described by quantum chromodynamics (QCD), a theory with the gauge group SU(3)_C. The effective coupling constant of this theory grows when the energy is decreased. As a result, particles which feel this interaction cannot exist as free states and appear only in the form of bound states called hadrons. Most modern methods of quantum field theory work for small values of coupling constants, that is, for QCD, at high energies.

The fourth known interaction, the gravitational one, is not described by the SM, but its effect on microscopic physics is negligible.

The particle content of the SM is summarized in Fig. 2. Quarks and leptons, the so-called SM matter fields, are described by fermionic fields. Quarks take part in strong interactions and compose observable bound states, hadrons. Both quarks and leptons participate in the electroweak interaction. The matter fields constitute three generations; particles from different generations interact identically but have different masses. The full electroweak symmetry forbids fermion masses, so the nonzero masses of quarks and leptons are directly related to the electroweak breaking: in the SM, they appear due to the Yukawa interaction with the Higgs field and are proportional to the vacuum expectation value of the latter. For the case of the neutrino, these Yukawa interactions are forbidden as well, so neutrinos are strictly massless in the SM.

Figure 2: Particles described by the Standard Model.
The gauge bosons, which are the carriers of interactions, are massless for the unbroken gauge groups U(1)_EM (electromagnetism: photons) and SU(3)_C (QCD: gluons); the masses of the W± and Z bosons are determined by the mechanism of electroweak symmetry breaking. All SM particles, except for the Higgs boson, have been found experimentally.

From the quantum-field-theory point of view, quarks and leptons may be described as states with definite mass. At the same time, the gauge bosons interact with superpositions of these states; in another formulation, when the basis is chosen to consist of the states interacting with the gauge bosons, the SM symmetries allow not only for the mass terms m_ii ψ̄_i ψ_i for each i-th fermion ψ_i, but also for a nondiagonal mass matrix m_ij ψ̄_i ψ_j. Up to unphysical parameters, in the SM this matrix is trivial in the leptonic sector, while in the quark sector it is related to the Cabibbo–Kobayashi–Maskawa (CKM) matrix. The latter may be expressed through three independent real parameters (quark mixing angles) and one complex phase (for more details, see [3, 5]).

The Standard Model therefore has 19 independent parameters, the values of 18 of which are determined experimentally. They include the three gauge coupling constants for the gauge groups SU(3)_C, SU(2)_W and U(1)_Y, respectively, α_s and the two electroweak couplings (the latter two are often expressed through the electromagnetic coupling constant α and the mixing angle θ_W), the QCD Θ-parameter, nine charged-fermion masses m_{u,d,s,c,b,t,e,μ,τ}, three quark mixing angles θ_12, θ_23, θ_13, one complex phase δ of the CKM matrix, and two parameters of the Higgs sector, which are conveniently expressed through the known Higgs-boson vacuum expectation value v and its unknown mass M_H.
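As an illustration of the relation between a nondiagonal mass matrix and mixing described above, one may diagonalize a toy 2×2 mass matrix numerically. This is only a sketch: the matrix entries below are arbitrary illustrative numbers, not SM values.

```python
import numpy as np

# Toy illustration of the text above: a hypothetical 2x2 real symmetric
# mass matrix m_ij in the interaction basis (arbitrary numbers, not data).
M = np.array([[1.0, 0.4],
              [0.4, 2.0]])

# Diagonalization gives the mass eigenvalues and the mixing (rotation)
# matrix, the 2x2 analogue of the CKM construction.
masses, U = np.linalg.eigh(M)

# U^T M U is diagonal in the mass basis.
D = U.T @ M @ U

# The single mixing angle (taken positive; eigenvector signs are arbitrary).
theta = np.arctan(abs(U[1, 0] / U[0, 0]))

# Closed-form expression for a 2x2 symmetric matrix, up to sign conventions:
# theta = (1/2) arctan( 2 m_12 / (m_22 - m_11) ).
theta_closed = 0.5 * np.arctan(2 * M[0, 1] / (M[1, 1] - M[0, 0]))

print(masses)
print(theta, theta_closed)
```

The same diagonalization, applied to the 3×3 quark mass matrices, yields the three mixing angles and the phase of the CKM matrix mentioned in the text.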
Experimental values of these parameters, recalculated from the 2010 data [6] (bounds on the mass of the Higgs boson based on LEP, Tevatron and LHC data are given as of December 2011), may be found in Table 1.

Table 1: Parameters of the Standard Model: the couplings α_s(M_Z), 1/α(M_Z) ≈ 127 and the weak mixing angle θ_W(M_Z); the bound on the QCD Θ-parameter; the quark masses m_u(2 GeV), m_d(2 GeV), m_s(2 GeV) ≈ 105 MeV, m_c(m_c), m_b(m_b) and m_t(m_t) ≈ 173 GeV; the charged-lepton masses m_e, m_μ and m_τ; the quark mixing angles θ_12, θ_23, θ_13 and the CKM phase δ; the Higgs vacuum expectation value v ≈ 246 GeV; and the Higgs-boson mass, bounded by 115 GeV ≲ m_H ≲ 127 GeV. For parameters with significant energy dependence, the energy scales to which the numerical values correspond are given in parentheses.

It is worth recalling that the observable world is mostly made of atoms, so, out of the full manifold of elementary particles, only a few are met "in everyday life". These are the u and d quarks in the form of protons (uud) and neutrons (udd), electrons and, out of the interaction carriers, the photon. The reasons for this are different for different particles. In particular, the neutrino does not interact with the electromagnetic field and is therefore very hard to detect; heavy particles are unstable and decay to lighter ones; strongly interacting quarks and gluons are confined in hadrons. The full manifold of SM particles reveals itself either in complicated dedicated experiments or indirectly, through effects seen in astrophysical observations.

Thus, before proceeding with the description of unsolved problems, let us recall that all experimental results concerning the physics of charged leptons, photons, W and Z bosons at all available energies, and of quarks and gluons at high energies, are in excellent agreement with the SM for a given set of its parameters.

2 Neutrino oscillations

Let us discuss the unique, well-established laboratory evidence in favour of the incompleteness of the SM: the phenomenon of neutrino oscillations, that is, the mutual conversion of neutrinos of different generations into each other. A more detailed modern description of the problem may be found in the book [7], in the Appendix to the textbook [4], in the reviews [8, 9, 10], etc.

2.1 Theoretical description.
In analogy with the case of charged leptons, let us consider three generations of neutrino: the electron neutrino (ν_e), the muon neutrino (ν_μ) and the tau neutrino (ν_τ). The corresponding fermion fields interact with the gauge bosons W and Z through the weak charged and neutral currents. These interactions are responsible for both the creation and the experimental detection of neutrinos.

Similarly to the quark case, one may suppose that neutrinos have a nonzero mass matrix (though it cannot be incorporated in the SM, the low-energy effective theory, electrodynamics, does not forbid it) which may be nondiagonal. It is convenient to describe this system in terms of linear combinations ν_{1,2,3} of the original fields ν_{e,μ,τ} with a diagonal mass matrix,

ν_i = Σ_{α=e,μ,τ} U_{iα} ν_α,

where U_{iα}, i = 1, 2, 3, α = e, μ, τ, are the elements of the leptonic mixing matrix.

To demonstrate the phenomenon of neutrino oscillations, let us restrict ourselves to the case of two flavours, ν_e and ν_μ. Let their linear combinations,

ν_1 = cos θ ν_e + sin θ ν_μ,    (1)
ν_2 = −sin θ ν_e + cos θ ν_μ,

be the eigenvectors of the mass matrix with eigenvalues m_1, m_2, respectively. The inverse transformation expresses (ν_e, ν_μ) through (ν_1, ν_2):

ν_e = cos θ ν_1 − sin θ ν_2,
ν_μ = sin θ ν_1 + cos θ ν_2.

Suppose that at the moment t = 0, in a certain weak-interaction event, an electron neutrino ν_e was created, that is, the superposition of ν_1 and ν_2 with known coefficients:

ν_1(0) = cos θ ν_e(0),
ν_2(0) = −sin θ ν_e(0).

The evolution of the mass eigenstates for a plane monochromatic wave moving in the direction z may be described as

ν_i(z, t) = exp(−iωt + i√(ω² − m_i²) z) ν_i(0),  i = 1, 2,

where ω is the energy and √(ω² − m_i²) is the momentum. While propagating, the wave packets corresponding to ν_1 and ν_2 disperse in different ways, so that the relation (cos θ, −sin θ) between their coefficients no longer holds, which means that an admixture of the orthogonal state, ν_μ, appears.
In the (commonly considered) ultrarelativistic limit, ω ≫ m_i and √(ω² − m_i²) ≃ ω − m_i²/(2ω). The probability to detect ν_μ at a point (t, z) for each emitted ν_e is then

P(ν_μ; z, t) = |ν_μ(z, t)|² = sin² 2θ sin²((m_2² − m_1²) z / (4ω)).    (2)

One may see that this probability is an oscillating function of the distance z, hence the term "neutrino oscillations". As expected, no oscillations happen either in the case of equal (even nonzero) masses (similar dispersions of ν_1 and ν_2) or for a diagonal mass matrix (θ = 0, ν_1 = ν_e, etc.). A similar description of the oscillations of three neutrino flavours determines, in analogy with Eq. (1), three mixing angles θ_12, θ_23, θ_13.

When individual neutrinos propagate over large distances, the oscillation formalism described above ceases to work, because particles of different mass require different times to propagate from the source, hence the loss of coherence; nevertheless, the transformations of neutrinos are still possible and their probability is calculable.

2.2 Experimental results: standard oscillations.

Let us turn now to the history (see e.g. [7]) and the modern state (see e.g. [11]) of the question of neutrino oscillations. In 1957, Pontecorvo [12, 13] suggested the possibility of oscillations in the "neutrino–antineutrino" system, similar to the K-meson oscillations already known at that time. This first mention of the possibility of neutrino oscillations was aimed at explaining Davis' preliminary results on the observation of the reaction ν̄ + Cl → Ar + e⁻ with reactor neutrinos. On the one hand, this experimental result was not confirmed; on the other hand, it became clear that the Pontecorvo model would not have been able to describe it even if it were true. The first mention of mutual transformations of ν_e and ν_μ is due to Maki, Nakagawa and Sakata [14], while the first successful description of oscillations in the system of two neutrino flavours was given by Pontecorvo [15] and by Gribov and Pontecorvo [16].
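Before turning to the experiments, it is instructive to evaluate the two-flavour formula, Eq. (2), numerically. The sketch below converts the oscillation phase to laboratory units with ħc; the parameter values are purely illustrative and are not taken from any experiment.

```python
import numpy as np

HBARC_EV_M = 1.9733e-7  # hbar*c in eV*m, to convert the phase to metres

def p_appear(theta, dm2_ev2, energy_ev, distance_m):
    """Vacuum nu_e -> nu_mu appearance probability, Eq. (2):
    P = sin^2(2 theta) * sin^2( dm^2 * z / (4 * omega) )."""
    phase = dm2_ev2 * distance_m / (4.0 * energy_ev * HBARC_EV_M)
    return np.sin(2.0 * theta) ** 2 * np.sin(phase) ** 2

# Illustrative numbers: mixing angle 0.1 rad, dm^2 = 1 eV^2, E = 30 MeV.
for z in (0.0, 10.0, 30.0):
    print(z, p_appear(0.1, 1.0, 30e6, z))
```

As the formula requires, the probability vanishes at z = 0, oscillates with z, and never exceeds the amplitude sin² 2θ.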
The theory of neutrino oscillations in its present form was developed in 1975-76 by Bilenky and Pontecorvo [17, 18], Eliezer and Swift [19], and Fritzsch and Minkowski [20], and later extended to matter effects by Wolfenstein [23] and by Mikheyev and Smirnov [21, 22].

The first experimental evidence in favour of neutrino oscillations was obtained more than half a century ago, though for a considerable period of time its interpretation remained an open question. We are speaking about the so-called "solar neutrino problem": the observed flux of neutrinos from the Sun was considerably lower than that predicted by the model of solar nuclear reactions. This solar neutrino deficit was first found in the Homestake experiment (USA; as early as 1968 [24]) and subsequently confirmed by the Kamiokande (Japan) [25], SAGE (Russia, Baksan neutrino observatory of INR, RAS) [26], GALLEX/GNO (Italy, the Gran Sasso laboratory) [27] and Super-K (Japan) [28] experiments, which made use of various experimental techniques and were sensitive to neutrinos from different nuclear reactions. Since only electron neutrinos are produced in the Sun, and only these were detected in the experiments, the deficit might be explained by the transformation of a part of the electron neutrinos into muon ones.

A natural source of muon neutrinos is provided by cosmic rays, that is, charged particles (protons and nuclei) of extraterrestrial origin which interact with atoms in the Earth's atmosphere and produce secondary particles. A significant part of the latter are charged π mesons. Neutrinos from the decays of these π mesons, as well as from the decays of secondary muons, are called atmospheric neutrinos. The first indications of oscillations of the atmospheric neutrinos were obtained at the end of the 1980s in the Kamiokande [29] and IMB [30] experiments, with subsequent confirmation by Soudan-2 [31], MACRO [32] and Super-K [33].
Their result is an anisotropy in the flux of muon neutrinos: from above, that is, from the atmosphere, the flux is higher than from below (through the Earth). Without oscillations, the flux would be isotropic, since it is determined by the isotropic flux of primary cosmic rays, while the interaction of neutrinos with the terrestrial matter is negligible. This anisotropy is not seen for electron neutrinos; hence it is natural to suppose that ν_μ oscillate mainly into ν_τ (the latter were not detected in these experiments).

Figure 3: Limits (95% confidence level) on the ν_e − ν_μ oscillation parameters resulting from the analysis taking into account three neutrino flavours [36]. The dotted line corresponds to the combination of solar experiments, the full line represents the KamLAND constraints, and the gray ellipse gives the constraints from the combination of all data. The star, the triangle and the square correspond to the most probable oscillation parameters obtained in these three analyses, respectively.

In the first decade of our century, significant experimental progress in the questions we discuss has been achieved, so that now we have reliable experimental proof of neutrino transformations with measured parameters.

ν_e − ν_μ oscillations. In addition to more or less model-dependent results on the solar neutrino deficit (ν_e disappearance), the SNO experiment detected, in 2001 [34], the appearance of neutrinos of other flavours from the Sun, in full agreement with the flux expected in the oscillation picture. It has therefore closed the "solar neutrino problem" and supported, at the same time, the standard solar model. The KamLAND experiment [35] registered the disappearance of electron antineutrinos born in the reactors of atomic power plants (in contrast with the case of the Sun, the initial flux of the particles may be directly determined in this case). The parameters of oscillations measured in these very different experiments are in excellent agreement, see Fig. 3.
The SNO results, together with the even more precise results of the BOREXINO experiment (Italy) [37], confirm the expected energy dependence of the number of disappeared solar neutrinos, in agreement with the predictions of Mikheyev, Smirnov [21, 22] and Wolfenstein [23], who developed the theory of neutrino oscillations in plasma: since electrons, unlike muons and tau leptons, are present in the plasma, the interaction with the medium proceeds differently for different types of neutrino. As a result, the oscillation formalism is modified and a resonance enhancement of the oscillations becomes possible.

ν_μ − ν_τ oscillations. In addition to the Super-K experiment, which has measured [38, 39] the deviations from isotropy in atmospheric ν_μ and ν̄_μ to great accuracy, the disappearance of ν_μ has been measured directly in neutrino beams created by particle accelerators (the K2K [40] and MINOS [41] experiments), see Fig. 4. Finally, in 2010, the OPERA detector, located in the Gran Sasso laboratory (Italy), detected [43] the first (and currently unique) case of the appearance of ν_τ in the ν_μ beam from the SPS accelerator (CERN, Switzerland).

Figure 4: Limits (90% confidence level) on the ν_μ − ν_τ oscillation parameters. The dotted line represents the results of the Super-K analysis taking into account three neutrino flavours [39]; the full line represents the constraints by MINOS [42]. The star and the triangle denote the most probable oscillation parameters for these two analyses, respectively.

Table 2: Parameters of the oscillations of three neutrino flavours, obtained taking into account all relevant experimental data as of summer 2011 [47]: the two independent mass-square differences Δm²_21 (∼ 10⁻⁵ eV²) and Δm²_31 (∼ 10⁻³ eV²) and the three mixing parameters.

The mixing angle θ_13. For a long time, the solar (ν_e − ν_μ) and atmospheric (ν_μ − ν_τ) oscillation data were described independently (see the discussion in [4], Appendix C), while the relatively low precision of the experiments allowed for a zero value of the mixing angle θ_13. The situation has changed recently: analyzed jointly, the data of various experiments point to a nonzero θ_13 [44]. In summer 2011, two accelerator experiments, T2K (Japan) [45] and MINOS [46], both searching for the appearance of ν_e in ν_μ beams, published results which are incompatible with θ_13 = 0. A quantitative analysis of all data on solar and atmospheric neutrinos, jointly with the accelerator and reactor experiments which study the same part of the parameter space, points [47] towards a nonzero value of θ_13 at a confidence level better than 99%. The results of this analysis are quoted in Table 2.

2.3 Experimental results: non-standard oscillations.

The combination of all the experiments described above is in good quantitative agreement with the picture of oscillations of three types of neutrino with certain parameters. However, there exist results which do not fit this picture and may suggest that a fourth (or, maybe, even a fifth) neutrino exists. As we have seen above, one of the principal oscillation parameters is the mass-square difference, Δm²_ij = m²_j − m²_i. The results on atmospheric and solar neutrinos, jointly with the accelerator and reactor experiments, are explained by two unequal Δm²_ij, see Table 2:

Δm²_21 ≪ Δm²_31 ∼ 2 × 10⁻³ eV².

In the case of three neutrinos, these two values compose the full set of linearly independent Δm²_ij, and

|Δm²_32| = |Δm²_31 − Δm²_21| ≈ Δm²_31.
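With representative magnitudes for the two measured mass-square differences (the solar and atmospheric scales; the numbers below are typical values of that period, not the exact fit of [47]), this relation is easy to check numerically:

```python
# Representative magnitudes (illustrative, not the exact best-fit values):
dm2_21 = 7.6e-5  # eV^2, solar scale
dm2_31 = 2.4e-3  # eV^2, atmospheric scale

# |dm2_32| = |dm2_31 - dm2_21| differs from dm2_31 by only a few per cent,
# so there are effectively two distinct mass-square scales, not three.
dm2_32 = dm2_31 - dm2_21
rel_diff = abs(dm2_32 - dm2_31) / dm2_31
print(dm2_32, rel_diff)
```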
Therefore, the observation of any neutrino oscillations with Δm²_ij ≫ Δm²_31 implies either the existence of a new neutrino flavour (i, j > 3) or some other deviation from the standard picture. On the other hand, there is a very restrictive bound on the number of relatively light (m_i < M_Z/2) particles with the quantum numbers of the neutrino. This bound comes from precise measurements of the Z-boson width and implies that there are only three such neutrinos. This means that the fourth neutrino, if it exists, does not interact with the Z boson; in other words, it is "sterile". We now turn to certain experimental evidence in favour of Δm²_ij ∼ 1 eV². We note that the oscillations related to this Δm²_ij should reveal themselves at relatively short distances and may be detected in so-called short-baseline experiments.

ν̄_μ − ν̄_e oscillations. The LSND experiment [48] studied muon decay at rest, μ⁺ → e⁺ ν_e ν̄_μ, and measured the ν̄_e flux at a distance of about 30 m from the place where the muons were stopped. An excess of this flux over the background rate was detected and interpreted as the appearance of ν̄_e as a result of ν̄_μ oscillations, for a range of possible parameters. A similar experiment, KARMEN [49], excluded a significant part of this parameter space; however, in 2010, the MiniBooNE experiment [50] also detected an anomaly which is compatible with the LSND results and, within statistical uncertainties, does not contradict KARMEN for a certain range of parameters (Fig. 5).

Another group of short-baseline experiments which study possible ν̄_e − ν̄_μ oscillations search for the disappearance of ν̄_e in the antineutrino flux from nuclear reactors. These experiments have continued for decades; recently, their results were reanalysed jointly [52] with a more precise theoretical calculation of the expected fluxes. It has been shown that there is a statistically significant deficit of ν̄_e in the detectors which is compatible with Δm² ∼ 1 eV², the so-called reactor neutrino anomaly. The corresponding limits on the parameters are also shown in Fig. 5 for convenience.
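The reason why eV-scale mass-square differences call for short baselines can be read off the oscillation length implied by Eq. (2), L_osc = 4πω/Δm². A rough numerical sketch, with order-of-magnitude illustrative energies and Δm² values:

```python
import math

HBARC_EV_M = 1.9733e-7  # hbar*c in eV*m

def l_osc_m(energy_ev, dm2_ev2):
    """Vacuum oscillation length L = 4*pi*omega/dm^2, converted to metres."""
    return 4.0 * math.pi * energy_ev * HBARC_EV_M / dm2_ev2

# LSND-like scale: E ~ 30 MeV, dm^2 ~ 1 eV^2 -> tens of metres,
# hence "short-baseline" experiments.
print(l_osc_m(30e6, 1.0))

# Accelerator/atmospheric scale: E ~ 1 GeV, dm^2 ~ 2e-3 eV^2 -> ~10^3 km.
print(l_osc_m(1e9, 2e-3) / 1e3)
```

For 30-MeV neutrinos and Δm² ∼ 1 eV² the oscillation length is several tens of metres, which matches the ∼30-m baseline of LSND.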
However, one should keep in mind that while LSND, KARMEN and MiniBooNE detected ν̄_e in the ν̄_μ flux, therefore constraining ν̄_e − ν̄_μ oscillations, the reactor experiments only fix the disappearance of ν̄_e. While the lack of this disappearance would exclude ν̄_e − ν̄_μ oscillations, its presence may be explained by a transformation of ν̄_e into an antineutrino of any other type.

One can see that there are several independent indications in favour of Δm² ∼ 1 eV², which, as we discussed above, requires either the introduction of more than three neutrino flavours or (see below) some other new physics.

Figure 5: Limits (90% confidence level) on the parameters of ν̄_μ − ν̄_e oscillations. The shaded region is compatible with the LSND signal [48]; the region inside the dotted curve, with the MiniBooNE signal [51]. Thin full lines bound the region of parameters compatible with a joint re-analysis of the reactor data [52], see text. The KARMEN2 experiment excludes [49] the region above and to the right of the thick full line.

Other anomalies.
The recent intense exploration of the field of neutrino oscillations has also revealed a range of other anomalies which are currently being intensively discussed and rechecked.
Possible difference between neutrino and antineutrino oscillations.
The MiniBooNE experiment studied neutrino and antineutrino beams separately. The ν̄_e appearance has been detected [50, 51], while that of ν_e has not [53] (see Fig. 6). Under the assumption of equal oscillation parameters for ν and ν̄, the MiniBooNE result contradicts LSND; without this assumption, on the contrary, the LSND claim is supported. It is worth noting that the MINOS experiment also performed separate measurements with neutrino and antineutrino beams (studying a range of much smaller Δm²); at first, their results for the two cases were incompatible at the 98% confidence level, but a subsequent analysis of a larger amount of data did not confirm this difference [54]. The latter result agrees with Super-K: though this experiment cannot distinguish neutrinos from antineutrinos in each particular event, it may limit [55] the antineutrino oscillation parameters statistically, on the basis of the known contribution of ν̄_μ to the atmospheric neutrino flux. Calibration of gallium detectors.
The GALLEX [56] and SAGE [57] experiments, constructed to detect solar neutrinos with gallium detectors, calibrated their instruments with artificial radioactive sources. They detected a deficit of electron neutrinos compatible with oscillations with Δm² ≳ 1 eV² (see also [58]). This mass-square difference, which by itself does not agree with the standard three-neutrino oscillation picture, agrees with the antineutrino results of LSND, MiniBooNE and the reactor experiments; however, the corresponding mixing angle differs from the predictions of the latter [59]. Other puzzles.
When speaking about unexplained results of neutrino experiments, one may also mention the unexpected excess of events with energies ≲ 400 MeV detected by MiniBooNE for neutrinos [60] and antineutrinos [51]; the possible seasonal variations of the neutrino flux in the Troitsk-ν-mass [61] and MiniBooNE [62] experiments; and the result of the OPERA experiment [63], which measured the velocity of muon neutrinos and found it to be larger than the speed of light. All these very interesting anomalies currently await confirmation in independent experiments.

Figure 6: Limits (90% confidence level) on the ν_μ − ν_e and ν̄_μ − ν̄_e oscillation parameters. The shaded area corresponds to the part of the parameter space which is excluded for neutrino oscillations by MiniBooNE [53] and KARMEN [49], while the thick contours limit the region which corresponds to the signal in antineutrino oscillations (full lines, MiniBooNE [51]; dotted line, LSND [48]).

Possible theoretical explanations.
The experimental results listed above are rather hard to explain. On the one hand, a series of experiments suggests neutrino transformations compatible with Δm² ≳ 1 eV², which cannot be described in the framework of the standard three-generation scheme. On the other hand, the addition of a fourth neutrino does not help to explain the difference between the neutrino and antineutrino oscillations [64, 65]. Alternatively, one can consider (a) two generations of sterile neutrinos (see e.g. [66] and references therein); (b) breaking [67] of the CPT invariance*; or (c) a nonstandard interaction of neutrinos with matter which may distinguish particles from antiparticles [71, 72]. A critical analysis of some of these suggestions may be found e.g. in [66, 73, 74]. These scenarios experience considerable difficulties in simultaneously explaining the full set of experimental data, though they cannot be totally excluded; it might happen that a certain combination of these possibilities is realized in Nature.

A confirmation of the result about superluminal neutrino motion would require a serious reconsideration of the basic ideas of particle physics. A successful theory which explains the OPERA result quantitatively should also agree with the very restrictive bounds on Lorentz-invariance violation in the sector of charged particles, with the absence of dispersion of the neutrino signal from the supernova 1987A, and with the absence of the intense neutrino decays which are characteristic of many models with deviations from relativistic invariance.

* Invariance with respect to simultaneous charge conjugation (C) and reflection of both space (P) and time (T) coordinates is (see e.g. [68]) a fundamental symmetry which inevitably holds in any (3+1)-dimensional Lorentz-invariant local quantum field theory. However, there exist phenomenologically acceptable models with CPT violation (with either a higher number of space dimensions, or Lorentz-invariance violation, or a nonlocal interaction). In the context of neutrino oscillations, they are discussed e.g. in [67, 69, 70].
Conversions of neutrinos of one type into another are experimentally proven, and the set of numerous independent and very different experiments is in good agreement with the oscillation picture. The oscillatory behaviour of the neutrino conversions is proven by a comparison of the results obtained at different energies (cf. the argument of the sine squared in Eq. (2)). The last step is to measure the neutrino flux at different distances along a single path (the distance dependence in Eq. (2)), which is planned for the near future. Up to this last detail, the neutrino oscillations are experimentally confirmed. Since the oscillations are possible only for different masses of neutrinos of different types, they also prove that (at least some of) the neutrino masses are nonzero. At the same time, direct experimental searches for neutrino masses have not been successful yet; the most restrictive bounds, set by the Troitsk-ν-mass (INR RAS) and Mainz experiments, in which the tritium beta decay was studied, are m_{ν_e} ≲ 2 eV.

At the same time, in the SM the lepton numbers are conserved separately for each generation, that is, changes of the neutrino flavour are forbidden. Making use of the SM fields only, it is impossible to construct a gauge-invariant renormalizable interaction resulting in a neutrino mass, even after the electroweak symmetry breaking. Therefore, neutrino oscillations represent an experimental proof of the incompleteness of the SM.

How can one modify the SM to have massive neutrinos? First note that at energies below the electroweak breaking scale the neutrino field is gauge invariant: it is uncharged and colorless. For such fermion fields one may write two kinds of mass terms, namely the Dirac term m_D ν̄_R ν_L (all charged SM fermions have masses of this kind) and the Majorana one, m_M ν_L C ν_L, where C is the charge-conjugation matrix and ν_L, ν_R denote the left-handed and right-handed neutrino spinors, respectively.

In the SM, only left-handed neutrinos are present; therefore, to have Dirac masses, one must introduce new fields ν_{R,i}. At first sight, the Majorana mass does not require new fields; however, like the Dirac one, it cannot be obtained from a renormalizable interaction. Going beyond renormalizability means that the SM is a low-energy limit of a more complete theory (like the non-renormalizable Fermi theory is a low-energy limit of the SM), so it is again inevitable to introduce new fields. In any case, neutrinos are several orders of magnitude lighter than the charged fermions, and a successful theory of neutrino masses should be able to explain this fact (see also Sec. 4.3).

While laboratory experiments in particle physics give only limited indications of the incompleteness of the SM (neutrino oscillations being the main one), most scientists are confident that a more complete theory should be invented. The main reason for this confidence comes from astrophysics and cosmology.
In recent decades, the intense development of observational astronomy in various energy bands has forced cosmology (that is, the branch of science studying the Universe as a whole) to become an accurate quantitative discipline based on precise observational data (see e.g. textbooks [4, 78]). Today, cosmology has its own "standard model" which is in good agreement with most observational data. The basis of the model is the concept of the expanding Universe which, long ago, was very dense and so hot that the energy of thermal motion of elementary particles did not allow them to compose bound states. As a result, it was the particle interactions that determined all processes and, in the end, influenced the development of the Universe and the state of the world as we observe it today. The expanding Universe cooled down and particles were unified into bound states – first atomic nuclei from nucleons, then atoms from nuclei and electrons. Unstable particles decayed, and the Universe arrived at its present appearance. As we will see below, at present the Universe expands with acceleration and consists mainly of unknown particles. Even a dedicated book would be insufficient to describe all aspects of the interrelations between cosmology and particle physics (the readers of
Physics Uspekhi might be interested in reviews [79, 80]). Here, we will briefly consider three principal observational indications in favour of physics beyond SM, namely, the baryon asymmetry of the Universe, the dark matter and the accelerated expansion of the Universe (both the related notion of the dark energy and the physical reasons for inflation).
Quark-antiquark pairs had to be created intensively in the hot early Universe. The Universe then expanded and cooled down, quarks and antiquarks annihilated, and the surviving ones composed baryons (protons and neutrons). Notably, there are very few antibaryons in the present Universe, which means that at the early stages there were more quarks than antiquarks. One can determine by how much: the number of quark-antiquark pairs was of the same order as the number of photons, while the baryon-to-photon ratio may be determined from the analysis of the cosmic microwave background anisotropy and from studies of primordial nucleosynthesis. The relative excess of the quark number n_q over the antiquark number n_q̄ is of order

(n_q − n_q̄)/(n_q + n_q̄) ∼ 10⁻¹⁰,

that is, a single "unpaired" quark was present for each ten billion quark-antiquark pairs. It is hard to imagine that this tiny excess of matter over antimatter was present in the Universe from the very beginning; moreover, a number of quantitative cosmological models predict exact baryon symmetry of the very early Universe. It looks as if the asymmetry appeared in the course of the evolution of the Universe. For this to happen, the following Sakharov conditions [81] should be fulfilled:

1. baryon number nonconservation;
2. CP violation;
3. breaking of thermodynamical equilibrium.

Though the classical SM Lagrangian conserves the baryon number, nonperturbative quantum corrections may break it, that is, condition 1 may be fulfilled in SM. The source of CP violation (condition 2) is also present in SM: it is the phase in the quark mixing matrix. Finally, in the course of the evolution of the Universe, the state with the zero vacuum expectation value of the Higgs field (high temperature) has been replaced by the present state. It can be shown (see e.g. [82] and references therein) that the thermodynamic equilibrium was strongly broken at that moment if it were a first-order phase transition.
Therefore, in principle, all three conditions might be met in SM. However, it has been shown that the first-order electroweak phase transition in SM is possible only for a Higgs-boson mass M_H ≲ 50 GeV, which was excluded by direct searches long ago. Also, the amount of CP violation in the CKM matrix is insufficient. We conclude that the observed baryon asymmetry of the Universe is an indication of the incompleteness of SM. A particular mechanism of generation of the baryon asymmetry is yet unknown (it should also explain the smallness of the asymmetry, ∼ 10⁻¹⁰).

Study of the dynamics of astrophysical objects (galaxies, galaxy clusters) and of the Universe as a whole allows one to determine the distribution of mass, which may be subsequently compared to the distribution of the visible matter. Various independent observational data point to the estimate that the contribution of the visible matter (mostly baryons) to the energy density of the Universe is five times smaller than the contribution of the invisible matter. We will first briefly discuss modern observational evidence for the existence of the dark matter and then proceed to the discussion of the implications of these observations for particle physics.
1. Rotation curves of galaxies.
Serious attention was attracted to the question of invisible matter after the analysis of the rotation curves of galaxies (see e.g. [83]) (Fig. 7). For nearby galaxies it is possible to measure, by making use of the Doppler effect, the velocities of stars and gas clouds at different distances from the galaxy center, that is, from the rotation axis. The Newtonian law of gravitation allows one to estimate the distribution of mass as a function of the distance from the center; it was found that in the outer parts of galaxies, where luminous matter is practically absent, there is a significant mass density, so that the visible part of a galaxy is embedded into a much larger invisible massive halo. These measurements have been performed for many galaxies, in particular for our Milky Way.
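The Newtonian estimate just mentioned can be made explicit: for a circular orbit, v²/r = G M(<r)/r², so the enclosed mass is M(<r) = v² r/G. The numbers below (a flat 150 km/s rotation curve out to 30 kpc) are illustrative, not data for any particular galaxy.

```python
G_KPC = 4.30e-6  # Newton's constant in kpc * (km/s)^2 / M_sun

def enclosed_mass_msun(v_kms, r_kpc):
    """Mass within radius r implied by circular velocity v: M(<r) = v^2 r / G."""
    return v_kms ** 2 * r_kpc / G_KPC

# A flat rotation curve (v = const) implies M(<r) growing linearly with r,
# even where the luminous matter has run out -- the dark halo of the text.
M_30kpc = enclosed_mass_msun(150.0, 30.0)   # ~1.6e11 solar masses
```

Doubling the radius at constant velocity doubles the enclosed mass, which is exactly the behaviour that cannot be produced by the centrally concentrated luminous matter alone.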
2. Dynamics of galaxy clusters.
In a similar way (though based on completely different observations), it is possible to determine the mass distribution in galaxy clusters. This provided the historically first argument in favour of dark matter [86]. Modern observations have demonstrated that the main part of the baryonic matter resides not in star systems, galaxies, but in hot gas clouds in the intergalactic space. This gas emits X rays, so the observations allow one to reconstruct the distribution of the electron density and temperature. From the latter, by making use of the conditions of hydrostatic equilibrium, the mass distribution may be determined. Comparison with the distribution of the luminous matter (that is, mostly of the gas) points again to the existence of some hidden mass. A similar, though less precise, conclusion may be obtained from the analysis of the velocities of galaxies inside a cluster.
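The hydrostatic-equilibrium step can also be sketched. For an ideal isothermal gas, dP/dr = −G M(<r) ρ/r² gives M(<r) = −(k_B T r / G μ m_p) · d ln n/d ln r. The sketch below assumes an isothermal β-model density profile; the cluster parameters (T, core radius, β) are illustrative round numbers, not a fit to any real cluster.

```python
import math

G, M_P = 6.674e-11, 1.673e-27    # SI: m^3 kg^-1 s^-2, kg (proton mass)
M_SUN, MPC = 1.989e30, 3.086e22  # kg, m
KEV = 1.602e-16                  # 1 keV in joules

def hydrostatic_mass_msun(T_keV, r_Mpc, r_core_Mpc=0.2, beta=0.7, mu=0.6):
    """M(<r) = -(kT r / G mu m_p) * dln n/dln r for an isothermal beta-model,
    n(r) ~ (1 + (r/r_c)^2)^(-3 beta / 2)."""
    r = r_Mpc * MPC
    r_c = r_core_Mpc * MPC
    dlnn_dlnr = -3.0 * beta * r ** 2 / (r ** 2 + r_c ** 2)
    kT = T_keV * KEV  # thermal energy k_B * T in joules
    return -kT * r / (G * mu * M_P) * dlnn_dlnr / M_SUN
```

For T ≈ 7 keV and r ≈ 1 Mpc this gives several ×10¹⁴ solar masses, far above the baryonic mass inferred from the X-ray emission itself — the "hidden mass" of the text.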
3. Gravitational lensing.
It may happen that a massive object (e.g. a galaxy cluster) is located between a distant source (e.g. a galaxy) and the observer. According to general relativity, the light from the source is deflected by the massive object, so the latter may serve as a gravitational lens which produces several distorted images of the source. A joint analysis of the images of several sources allows one to model the mass distribution in the lens and to compare it with the distribution of visible matter (see e.g. [87]). The baryon distribution is reconstructed from X-ray observations of the luminous gas, which contains about 90% of the cluster mass (Fig. 8). The full mass of the cluster calculated in this way far exceeds the mass of the baryons obtained from observations.

Figure 7: Some of the first indications of the existence of dark matter were obtained from the analysis of the rotation curves of galaxies. Observational data on the rotation velocity as a function of the distance to the axis, given here for the galaxy NGC 3198 (dots), are not described by the curve which represents the expected velocity calculated from the distribution of luminous matter (lower line; data and calculation from [84]). At distances ≳ 10 kpc, the luminous matter is practically absent (as one may see in the lower photograph, taken from the digitized Palomar sky atlas [85]), but the rotation velocities of gas clouds seen in the radio band are almost constant. This indicates that at the periphery of the galaxy there is a significant concentration of mass (the so-called halo).

Figure 8: The galaxy cluster Abell 1689. The background image of the cluster in the optical band was obtained by the Hubble Space Telescope (image from the archive [88]). The contours describe the model of the mass distribution (full curves, Ref. [87]) based on the gravitational lensing, and the distribution of luminous gas observed in X rays (dotted curves, based on data from the Chandra X-ray telescope archive, Ref. [89]). Currently this mass model is one of the most precise ones.
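The magnitude of the lensing effect can be estimated from the Einstein radius of a point-mass lens, θ_E = √(4GM D_ls/(c² D_l D_s)). The sketch below treats the distances as simple Euclidean ones (a real analysis uses angular-diameter distances in an expanding Universe), and the mass and distances are illustrative values.

```python
import math

G, C = 6.674e-11, 2.998e8          # SI units
M_SUN, MPC = 1.989e30, 3.086e22    # kg, m

def einstein_radius_arcsec(M_msun, D_l_Mpc, D_s_Mpc):
    """Point-lens Einstein radius; D_ls is taken as D_s - D_l in this sketch."""
    D_l, D_s = D_l_Mpc * MPC, D_s_Mpc * MPC
    D_ls = D_s - D_l
    theta = math.sqrt(4.0 * G * M_msun * M_SUN / C ** 2 * D_ls / (D_l * D_s))
    return math.degrees(theta) * 3600.0

# A cluster-scale mass produces image separations of tens of arcseconds,
# which is what makes cluster lenses so easy to model:
theta_cluster = einstein_radius_arcsec(1.0e15, 1000.0, 2000.0)
```

Since θ_E ∝ √M, measuring the geometry of the images directly weighs the lens, independently of any assumption about its light.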
4. Colliding clusters of galaxies.
One of the most beautiful observational proofs of the existence of dark matter is [90] the observation of colliding clusters of galaxies (Fig. 9). Contrary to the case of a usual cluster, Fig. 8, one does not need to calculate the mass in this case: comparison of the mass distribution and the gas distribution demonstrates that the main part of the mass of the clusters and that of the luminous matter are located in different places. The reason for this dramatic difference, not seen in normal, noninteracting clusters, is that the dark matter, constituting the dominant part of the mass, behaves as a nearly collisionless gas. During the collision of the clusters, the dark matter of one cluster, together with the rare – and therefore also collisionless – galaxies kept by its gravitational potential, went through the other one, while the gas clouds collided, stopped and were left behind.

Figure 9: As in Fig. 8 but for the colliding clusters 1E 0657–558 (the mass-distribution model from [91]; optical and X-ray images from the archives [88, 89], respectively). Squares denote the positions of the maxima of the mass distributions; diamonds denote the positions of the maxima of the gas emission.

These results, both by themselves and in combination with other results of quantitative cosmology (first of all those obtained from the analysis of the cosmic microwave background and the large-scale structure of the Universe, see e.g. [4]), point unequivocally to the existence of nonluminous matter. One should point out that the terms "dark", or "nonluminous", mean that the matter does not interact with electromagnetic radiation, and not merely that it happens to be in a non-emitting state. Indeed, it should not absorb electromagnetic waves either, since otherwise the induced radiation would inevitably appear. The usual matter, that is, baryons and electrons, may be put in this state only if packed in compact, very dense objects (neutron stars, brown dwarfs etc.) which should be located in the halo of our Galaxy, as well as in other galaxies and in the intergalactic space within clusters. One may estimate the amount of these objects which is required to explain the observational results concerning the nonluminous matter. This amount appears to be so large that these compact objects should often pass between the observer and some distant sources, which should result in a temporal distortion of the source image because of the gravitational lensing (the so-called microlensing effect). Such events have indeed been observed, but at a very low rate, which allows for a firm exclusion of this explanation of dark matter [92]. We are forced to say that, probably, the dark matter consists of new, yet unknown, particles, so that its explanation requires an extension of SM. The dark-matter particles should be (almost) stable in order not to decay during the lifetime of the Universe (∼
14 billion years). These particles should also interact with ordinary matter only very weakly, to avoid direct experimental detection (direct searches for the dark matter, which should be present everywhere, in particular in laboratories, have been going on for decades already). A number of theoretical models of the dark-matter origin predict the mass of the new particle between ∼10 GeV and ∼1 TeV. An example is provided by supersymmetric extensions of SM with conserved R parity (see Sec. 4.2 below), in which the lightest supersymmetric particle (LSP) is stable. The LSP cannot decay because the conservation of R parity requires that among the decay products at least one supersymmetric particle should be present, while all other supersymmetric particles are heavier by definition (in the same way, the electric-charge conservation provides for the electron stability and the baryon-number conservation provides for the stability of the proton). In a wide class of models the LSP is an electrically neutral particle (neutralino) which is considered a good candidate for a dark-matter particle. Note that there is a plethora of other scenarios in which the dark-matter particles have very different masses, from ∼10⁻⁵ eV (the axion) to ∼10²² eV (superheavy dark matter). Also, in principle, the dark matter may consist of large composite particles (solitons).

3.3 Accelerated expansion of the Universe

In this section, we briefly discuss several technically interrelated problems which concern one of the least understood, from the particle-physics point of view, parts of modern cosmology. They include:

1. the observation of the accelerated expansion of the Universe ("dark energy");
2. the weakness of the effect of the accelerated expansion as compared to typical scales of particle physics (the cosmological-constant problem);
3. indications of an intense accelerated expansion of the Universe at one of the early stages of its evolution (inflation).

Let us start with the observational evidence in favour of the (recent and present) accelerated expansion of the Universe.
1. The Hubble diagram.
The first practical instrument of quantitative cosmology, the Hubble diagram plots the distances to remote objects as a function of the cosmological redshift of their spectral lines. This was the way the expansion of the Universe was discovered and its rate, the Hubble constant, was measured. When methods to measure distances to objects located really far away became available to astronomers, they found (see e.g. [93, 94]) deviations from the simple Hubble law, which indicate that the expansion rate of the Universe changes with time, namely, that the expansion accelerates. The method of distance determination we are speaking about is based on the study of type Ia supernovae and deserves a brief discussion (see also [95]).

A probable mechanism of the type Ia supernova explosion is the following. A white dwarf (a star at the latest stage of its evolution, in which nuclear reactions have stopped) rotates in a close binary system with a normal star. The matter from the normal star flows to the white dwarf and increases its mass. When the mass exceeds the so-called Chandrasekhar limit (the limit of stability of a white dwarf, whose value depends, in practice, on the chemical composition of the star only), intense thermonuclear reactions start and the white dwarf explodes. It is interesting and useful to note that, therefore, in all cases the exploding stars have roughly the same mass and constitution (up to details of the chemical composition). As a consequence, all type Ia supernova explosions resemble each other not only qualitatively, but quantitatively as well: the energy release is roughly the same; the time dependence of the luminosity is similar. Even more amusing is the fact that even for rare outliers (which differ from most supernovae either by the chemical composition or by some random circumstances), all curves representing the luminosity as a function of time are homothetic (Fig.
10), that is, they map onto one another under a simultaneous rescaling of both time and luminosity. This means that, upon measurement of the lightcurve of any type Ia supernova, one may determine its absolute luminosity with a good precision. Then, comparison with the observed magnitude allows one to determine the distance to the object. In this way it is possible to construct the Hubble diagram (Fig. 11), which demonstrates statistically significant deviations from the law corresponding to a uniform (or decelerated) expansion of the Universe.
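The size of the deviation in the Hubble diagram can be reproduced with a toy calculation: in a spatially flat universe, the luminosity distance is d_L = (1+z)(c/H₀)∫₀^z dz′/E(z′), with E(z) = √(Ω_m(1+z)³ + Ω_Λ). The density parameters and H₀ below are round illustrative numbers.

```python
import math

C_KMS, H0 = 2.998e5, 70.0   # speed of light (km/s) and an illustrative H0 (km/s/Mpc)

def luminosity_distance_mpc(z, omega_m, omega_lambda, steps=20000):
    """Flat-universe luminosity distance; midpoint-rule integration of 1/E(z)."""
    dz = z / steps
    integral = sum(
        dz / math.sqrt(omega_m * (1 + (i + 0.5) * dz) ** 3 + omega_lambda)
        for i in range(steps)
    )
    return (1 + z) * C_KMS / H0 * integral

# At z = 0.5, supernovae in an accelerating universe are farther (hence fainter)
# than in a decelerating, matter-only one:
d_acc = luminosity_distance_mpc(0.5, 0.25, 0.75)
d_dec = luminosity_distance_mpc(0.5, 1.0, 0.0)
delta_mag = 5.0 * math.log10(d_acc / d_dec)  # magnitude difference, a few tenths
```

A few tenths of a magnitude at z ∼ 0.5 is exactly the kind of systematic offset between the dotted and full lines of Fig. 11 that standardized type Ia supernovae made measurable.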
2. Gravitational lensing.
The method of gravitational lensing, discussed above, allows not only for the reconstruction of the mass distribution in the lensing cluster of galaxies, but also for the determination of the geometrical distances between the source, the lens and the observer. If the redshifts of the source and the lens are known, one may compare these distances with the predictions of cosmological models; this comparison again favours the accelerated expansion.

[Footnote] The Nobel prize, 2011.

Figure 10: Temporal dependence of the absolute magnitude of type Ia supernovae. Above: the lightcurves of 68% of supernovae are contained within the shaded band; however, there are very rare outliers (for example, the lightcurves of an unusually bright supernova, SN1991T (squares), and of an unusually weak supernova, SN1986G (triangles), are presented); the light curves and the band are taken from [96]. Below: the same curves, but scaled simultaneously along the horizontal (time) and vertical (luminosity) axes according to the rules described in [97]. The introduction of this correction shifts all "exceptional" curves into the band. Therefore, to know the absolute value of the luminosity, it is sufficient to measure the shape of the light curve.
Figure 11: The Hubble diagram, presenting the dependence of the distance on the redshift z of the spectral lines of distant galaxies, obtained from observations of type Ia supernovae. Gray lines correspond to data on individual supernovae with experimental error bars [98]. The uniform expansion of the Universe corresponds to the lower (dotted) line, the accelerated expansion to the upper (full) line.
3. Flatness of the Universe and the energy balance.
A number of measurements point to the spatial flatness of the Universe, that is, to the fact that its three-dimensional curvature is zero. The main argument here is based on the analysis of the cosmic microwave background anisotropy [100]. In the past, the Universe was denser and hotter than now. Various particles (photons in particular) were in thermodynamical equilibrium, so that the distribution of photons over energies was Planckian, corresponding to the temperature of the surrounding plasma. The Universe cooled down while expanding, and at some moment electrons and protons started to combine into hydrogen atoms. Compared to plasma, the gas of neutral atoms is practically transparent to radiation; since then, the photons born in the primordial plasma have propagated almost freely. We now see them as the cosmic microwave background (CMB). At the moment when the Universe became transparent, the size of the causally connected region (that is, the region which a light signal had had time to cross since the Big Bang), called the horizon, was only ∼
300 kpc. This quantity may be related to a typical scale of the CMB angular anisotropy; the present Universe is much older, and we see at the same moment many regions which had not been causally connected in the early Universe. This angular scale has been directly measured from the CMB anisotropy. The theoretical relation between this scale and the size of the horizon at the moment when the Universe became transparent is very sensitive to the value of the spatial curvature; the analysis of the data from the WMAP satellite points to a flat Universe with a very high accuracy.

Other methods exist to test the flatness of the Universe. One of the most beautiful among them is the geometric Alcock–Paczyński criterion. If it is known that an object has a purely spherical shape, one may try to measure its dimensions along the line of sight and in the transverse direction. Taking into account the distortions related to the expansion of the Universe, one may compare the two sizes and constrain the cosmological parameters, first of all deviations from flatness. Clearly, it is not an easy task to find an object whose longitudinal and transverse dimensions are certainly equal; however, one may measure characteristic dimensions of some astrophysical structures which, averaged over large samples, should be isotropic. The most precise measurement of this kind [101] uses double galaxies whose orbits are randomly oriented in space while the orbital motion is described by Newtonian dynamics.

From the general-relativity point of view, the flat Universe represents a rather specific solution which is characterized by a particular total energy density (the so-called critical density, ρ_c ∼ 5×10⁻⁶ GeV cm⁻³). At the same time, the estimated energy density related to matter contributes only ∼ 0.25 ρ_c, that is, the remaining three fourths of the energy density of the Universe are due to something else.
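The value of ρ_c follows from the Friedmann equation for a flat universe, ρ_c = 3H₀²/(8πG); a round value H₀ = 70 km/s/Mpc is assumed in the sketch below.

```python
import math

H0 = 70.0 * 1.0e5 / 3.086e24   # 70 km/s/Mpc converted to s^-1
G_CGS = 6.674e-8               # Newton's constant, cm^3 g^-1 s^-2
G_TO_GEV = 5.61e23             # 1 gram = 5.61e23 GeV

rho_c_cgs = 3.0 * H0 ** 2 / (8.0 * math.pi * G_CGS)  # g / cm^3, ~1e-29
rho_c_gev = rho_c_cgs * G_TO_GEV                     # GeV / cm^3, ~5e-6
```

The result, about 10⁻²⁹ g/cm³ or a few proton masses per cubic meter, makes explicit how empty a "critical-density" universe actually is.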
This contribution, whose primary difference from the matter contribution is the absence of clustering (that is, of concentration in stars, galaxies, clusters etc.), bears the not entirely apt name of "dark energy".

The question about the nature of the dark energy is presently open. The technically simplest explanation is that the accelerated expansion of the Universe results from a nonzero vacuum energy (in general relativity, the reference point on the energy axis is relevant!), that is, from the so-called cosmological constant. From the particle-physics point of view, the dark-energy problem is, in this case, twofold. In the absence of special cancellations, the vacuum energy density should be of order of the fourth power of the characteristic scale of the relevant interactions Λ, that is, ρ ∼ Λ⁴ in natural units. The observed ρ corresponds to Λ ∼ 10⁻³ eV, while the characteristic scales of the strong (Λ_QCD ∼ 10⁸ eV) and electroweak (v ∼ 10¹¹ eV) interactions are many orders of magnitude higher. One side of the problem (known for a long time as "the cosmological-constant problem") is to explain how the contributions of all these interactions to the vacuum energy cancel. In principle, some symmetry may be responsible for this cancellation: for instance, the energy of a supersymmetric vacuum in field theory is always zero. Unfortunately, the supersymmetry, even if it has some relation to the real world, should (as discussed in Sec. 4) be broken at a scale not smaller than ∼ v, and the corresponding contributions to the vacuum energy should then be of the same order. On the other hand, the observed accelerated expansion of the Universe tells us that the cancellation is not complete, and hence there is a new energy scale in the Universe, ∼ 10⁻³ eV. The explanation of this scale is a task which cannot be completed within the framework of SM, where all parameters with the dimension of energy are orders of magnitude higher.
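The 10⁻³ eV figure is simple dimensional arithmetic: convert ρ_Λ ≈ 0.75 ρ_c to natural units with ħc ≈ 1.973×10⁻¹⁴ GeV·cm and take the fourth root. The value of ρ_c used below is the estimate quoted in the text.

```python
HBAR_C = 1.973e-14             # GeV * cm (hbar * c)
rho_c = 5.2e-6                 # critical density, GeV / cm^3 (see text)
rho_lambda = 0.75 * rho_c      # dark-energy density, GeV / cm^3

rho_nat = rho_lambda * HBAR_C ** 3    # GeV^4 in natural units, ~3e-47
scale_eV = rho_nat ** 0.25 * 1.0e9    # Lambda = rho^(1/4), ~2e-3 eV
```

The mismatch of this milli-eV scale with Λ_QCD and v, by eleven and fourteen orders of magnitude respectively, is the quantitative content of the cosmological-constant problem.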
If this scale is given by the mass of some particle, the properties of the latter should be very exotic in order both to solve the problem of the accelerated expansion of the Universe and not to be found experimentally. For instance, one of the suggested explanations [102] introduces a scalar particle whose effective mass depends on the density of the medium (this particle is called the "chameleon"). By itself, the dependence of the effective mass on the properties of the medium is well known (for instance, the dispersion relation of a photon in plasma is modified in such a way that it acquires a nonzero effective mass). In our case, due to the interaction with the external gravitational field, the chameleon has a short-range potential in a relatively dense medium (e.g. on the Earth), which prevents its laboratory detection, but at the large scales of the (almost empty) Universe the effect of this particle becomes important. One should also note that a solution to the problem of the accelerated expansion of the Universe might have nothing to do with particle physics at all and be entirely based on peculiar properties of the gravitational interaction (for instance, on deviations from general relativity at large distances).

However, the problem of the accelerated expansion of the Universe is not exhausted by the analysis of its modern state. There are serious indications that, at an early stage of its evolution, the Universe experienced a period of intense exponential expansion, called inflation (see e.g. [78, 103]). Though the inflation theory is currently not a part of the standard cosmological model (it awaits more precise experimental tests), it solves a number of problems of the standard cosmology and, presently, does not have an elaborated alternative. Let us list briefly some of the problems which are solved by the inflationary model.

1.
As has been already pointed out, various parts of the presently observed Universe were causally disconnected from each other in the past, if one extrapolates the present expansion of the Universe backwards in time. Information could not be transmitted between the regions which are now observed in different directions, for instance, at the moment when the Universe became transparent for the CMB. At the same time, the CMB is isotropic up to a high level of accuracy (the relative variations of its temperature do not exceed ∼10⁻⁴), a fact that indicates a causal connection between all the currently observed regions.

2. The zero curvature of the Universe is not, from the theoretical point of view, singled out by any condition: the Universe should be flat from the very beginning, and nobody knows why.

3. The modern Universe is not precisely homogeneous – the matter is distributed inhomogeneously, being concentrated in galaxies, clusters and superclusters of galaxies; a weak anisotropy is observed also in the CMB. Most probably, these structures developed from tiny primordial inhomogeneities, whose existence has to be assumed as an initial condition.

These and some other arguments point to the fact that the initial conditions for the theory of a hot expanding Universe had to be very specific. A simultaneous solution to all these problems is provided by the inflationary model, which is based on the assumption of an exponential expansion of the Universe which happened before the hot stage. From the theory point of view, this situation is fully analogous to the present accelerated expansion, but the energy density, which determines the acceleration rate, was much higher. It may be related to the presence of a new scalar field absent in SM, the inflaton. If it has a relatively flat potential (that is, one depending weakly on the field value) and the value itself changes slowly with time, then the energy density of the inflaton provides the required exponential expansion.
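The way inflation cures the flatness problem can be quantified: during near-exponential expansion, H is approximately constant while a ∝ e^{Ht}, so the curvature term |Ω − 1| ∝ 1/(aH)² is suppressed by e^{−2N} after N e-folds. The figure of N ≈ 60 used below is a conventional benchmark, not a prediction of a specific model.

```python
import math

def flatness_suppression(n_efolds):
    """Factor by which |Omega - 1| shrinks during N e-folds of inflation."""
    return math.exp(-2.0 * n_efolds)

# Even an O(1) initial curvature becomes unobservably small after ~60 e-folds:
residual = flatness_suppression(60.0)   # ~8e-53
```

The same exponential stretching blows a single causally connected patch up to a size far exceeding the presently observed Universe, which addresses the horizon problem of item 1 as well.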
For a particle physicist, at least two questions arise: first, what is the nature of the inflaton, and second, what was the reason for the inflation to stop and not to continue until now?

To summarize, we note that a large number of observations related to the structure and evolution of the Universe cannot be explained if particle physics is described by SM only: one needs to introduce new particles and interactions. Jointly with the observation of neutrino oscillations, these facts constitute the experimental basis for the confidence in the incompleteness of SM. At the same time, presently none of these experimental results points to a specific model of new physics, so one is guided also by purely theoretical arguments when constructing hypothetical models.

Results of high-precision measurements of the electroweak-theory parameters, in particular at the LEP accelerator, confirm the predictions of SM based on the Higgs mechanism. At the same time, the only SM particle which has not been discovered experimentally is the Higgs boson. Its mass is a free parameter of the model and is not directly related to any of the measurable parameters, so the lack of signs of the Higgs boson in the data may be simply explained by its mass: the energies and luminosities of the available accelerators might be insufficient to create this particle with a significant probability.

At the same time, purely theoretical considerations suggest that the Higgs boson should not be too heavy. This is related to the fact that, without the Higgs scalar taken into account, the scattering amplitudes of massive W bosons grow as E² with energy E. As a result, at energies somewhat higher than the W mass, perturbation theory fails and all model predictions start to depend on unknown higher-order contributions; the theory finds itself in a strong-coupling regime and loses predictivity.
The contribution from the Higgs boson, however, cancels the part of the amplitude which grows with energy, so only a constant term remains, ∼ g² M_H²/(4 M_W²), where M_H and M_W are the masses of the Higgs and W bosons, respectively, and g is the SU(2)_L gauge coupling constant. Therefore, to preserve calculability, M_H should not be too large; a quite reliable limit is M_H ≲
800 GeV. Even more restrictive limits come from the radiative corrections to the potential of the Higgs boson itself. In the leading order of perturbation theory, the self-interaction constant of the Higgs boson has a pole at the energy scale

Q ∼ v exp[4π² v²/(3 M_H²)].

Figure 12: The Higgs-boson mass expected from indirect data and constrained by direct searches (see text). The left panel shows all experimental limits; the right one is a zoom of the most interesting region, M_H < 200 GeV.
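The scale Q can be evaluated numerically. The sketch below keeps only the one-loop running of the quartic coupling λ = M_H²/(2v²), neglecting top-quark and gauge contributions; this is the crudest version of the estimate, and more careful analyses give stronger limits than this toy formula.

```python
import math

V = 246.0   # GeV, Higgs vacuum expectation value

def landau_pole_gev(m_higgs):
    """One-loop Landau pole of the Higgs self-coupling:
    lambda(v) = M_H^2 / (2 v^2),  Q = v * exp(2 pi^2 / (3 lambda))."""
    lam = m_higgs ** 2 / (2.0 * V ** 2)
    return V * math.exp(2.0 * math.pi ** 2 / (3.0 * lam))

# A light Higgs pushes the pole far beyond the Planck scale, while a heavy
# one brings it down to the scale of M_H itself, signalling new physics there:
q_light = landau_pole_gev(125.0)   # far above the Planck mass
q_heavy = landau_pole_gev(800.0)   # ~850 GeV, comparable to M_H
```

The qualitative lesson survives all refinements: the heavier the Higgs boson, the lower the scale at which the Standard Model must be replaced by something else.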
This means that at energies Λ ≤ Q, the contributions of new particles or interactions should change the behaviour of the coupling to avoid the divergence. The requirement Λ ≥ M_H results in the bound M_H ≲ 550 GeV. Note that this means that the SM Higgs boson should be discovered at LHC.

Maybe an even more interesting situation is related to the experimental data on the search for the Higgs particle. The latter may reveal itself not only directly, being produced at colliders, but also indirectly, through the influence of virtual Higgs bosons on numerous observables. Though this influence is not large, a number of electroweak observables are known with very high precision, so that their joint analysis may constrain the mass of the yet undiscovered Higgs particle. Let us look at Fig. 12, which is based on the analysis of indirect experimental data as of September 2011. The horizontal axis gives the possible Higgs-boson mass; the shaded regions of M_H are excluded, as of December 2011, by direct experimental searches for the Higgs boson at colliders at the 95% confidence level (the light band M_H < 114 GeV – LEP [104]; the light bands 114 GeV < M_H < 115.5 GeV and 127 GeV < M_H < 600 GeV – LHC [105, 106]; the dark bands 100 GeV < M_H < 109 GeV and 156 GeV < M_H < 177 GeV – Tevatron [107]). The curve [108] demonstrates how well a given value of M_H agrees with the combination of all experiments other than the direct searches (the lower Δχ², the better the agreement; the curve width represents the uncertainty of the theoretical predictions). One can see that the most preferable value of M_H is already experimentally excluded! Clearly, this does not mean a catastrophe, because a narrow range of slightly less preferable values is allowed, but it motivates theoretical physicists to think about possible alternative explanations of the electroweak symmetry breaking [109]. One should note that it is rather difficult to discover a light, 115 GeV < M_H <
127 GeV, Higgs boson at the LHC: unlike for a heavy one, several years of work might be required.

The lack of a Higgs boson at the expected mass and the prospect of further restriction of the allowed mass region at the LHC are important, but far from decisive, arguments in favour of alternative theoretical models of the electroweak symmetry breaking, whose history goes back for decades. The point is that the Higgs boson is the only SM scalar particle (all others are either fermions or vectors). A scalar particle brings to a theory a number of unfavourable properties, some of which we have just mentioned, while others will be discussed below. That is why alternative mechanisms of the electroweak symmetry breaking use, as a rule, only fermionic and vector fields. (See also the regularly updated webpage at http://gfitter.desy.de/GSM .)

A strongly coupled gauge interaction, similar to SU(3)_C, may result in confinement of fermions and in the formation of bound states (in QCD these are hadrons, bound states of quarks). In fact, in QCD a nonzero vacuum expectation value of the quark condensate also appears, but its value, of order Λ_QCD ∼ 200 MeV, is much less than the required electroweak symmetry breaking scale (v ≈ 246 GeV). Therefore one postulates that there exists another gauge interaction, in a way resembling QCD, but with a characteristic scale of order v. The corresponding gauge group G_TC is called a technicolor group. The bound states, technihadrons, are composed of fundamental fermions, techniquarks T, which feel this interaction. The techniquarks carry the same quantum numbers as quarks, except that instead of SU(3)_C, they transform as a fundamental representation of G_TC. Then the vacuum expectation value ⟨T̄T⟩ breaks SU(2)_L × U(1)_Y → U(1)_EM in such a way that the correct relation between the masses of the W and Z bosons is fulfilled automatically. A practical implementation of this beautiful idea faces, however, a number of difficulties which result in a complication of the model. First, the role of the Higgs boson in SM is not only to break the electroweak symmetry: its vacuum expectation value also gives masses to all charged fermions. Attempts to explain the origin of fermion masses in technicolor models result in significant complication of the model and, in many cases, in contradiction with experimental constraints on flavour-changing processes. Second, many parameters of the electroweak theory are known with very high precision (and agree with the usual Higgs mechanism), while even a minor deviation from the standard mechanism destroys this well-tuned picture. To construct an elegant and viable technicolor model is a task for the future, which will become relevant if the Higgs scalar is not found at the LHC.

In another class of models (suggested in [111] and further developed in numerous works, reviewed e.g. in [109]), the Higgs scalar appears as a component of a vector field.
Since a vacuum expectation value of a vector component would break Lorentz invariance, this mechanism works exclusively in models with extra space dimensions. For instance, from the four-dimensional point of view, the fifth component of a five-dimensional gauge field behaves as a scalar, and giving a vacuum expectation value to it breaks only the five-dimensional Lorentz invariance while keeping intact the observed four-dimensional one. Symmetries of the five-dimensional model, projected onto the four-dimensional world, protect the effective theory from the unwanted features related to the existence of a fundamental scalar particle. These models also have a number of phenomenological problems, which can be solved at the price of significant complication of the theory.

The so-called higgsless models [112] (see also [109]) are rather close to these multidimensional models, though they differ in some principal points. The higgsless models are based on the analogy between the mass and the fifth component of momentum in extra dimensions: both appear similarly in the four-dimensional effective equations of motion. In the higgsless models, the nonzero momentum appears due to imposing particular boundary conditions in a compact fifth dimension. In the end, these boundary conditions are responsible for the breaking of the electroweak symmetry. Unlike in five-dimensional models, where the Higgs particle is a component of a vector field, the physical spectrum of the effective theory in higgsless models does not contain the corresponding degree of freedom. These models have some phenomenological difficulties (related e.g. to the precision electroweak measurements).
Another shortcoming of this class of models is the considerable arbitrariness in the choice of the boundary conditions, which are not derived from the model but are crucial for the electroweak breaking.

Finally, we note that a composite Higgs boson may be even more complex than just a fermion condensate: it may be a bound state which includes strongly coupled gauge fields. The description of these bound states requires a quantitative understanding of nonperturbative gauge dynamics. Taking into account the analogy between strongly coupled four-dimensional theories and weakly coupled five-dimensional ones (which will be discussed in Sec. 5.3), these models may even happen to be equivalent to the multidimensional models described above.

Each of the main interactions of particles has its own characteristic energy scale. For the strong interaction it is Λ_QCD ∼ 200 MeV, the scale at which the QCD running coupling becomes strong; this scale determines the masses of hadrons made of light quarks. The scale of the electroweak theory is determined by the vacuum expectation value of the Higgs boson, v ≈ 246 GeV, which determines, through the corresponding coupling constants, the masses of the W and Z bosons and of the SM matter fields. For gravity, the characteristic scale is the Planck scale M_Pl ∼ 10^19 GeV, determined by the Newton constant of the classical gravitational interaction.

These three scales are related to known forces. Extensions of SM motivate some other interactions and, consequently, other scales. First of all it is M_GUT ∼ 10^16 GeV, the scale of the suggested Grand Unification of interactions. In several models explaining neutrino masses there exists a scale M_ν; sometimes the scale M_PQ, related to the CP invariance of the strong interaction, is also introduced. The values of these two scales are model dependent, but in both cases they lie many orders of magnitude above the electroweak scale.

The gauge hierarchy problem (see also [2, 79, 113]) consists in the disproportion between these scales,

(Λ_QCD, v) ≪ (M_Pl, M_GUT, M_PQ, M_ν),

and in a range of related questions which may be divided into three groups.
1. The origin of the hierarchy: why are the scales of the strong and electroweak interactions smaller than the others by many orders of magnitude? That is, why, for instance, are all SM particles practically massless at the gravity scale? It is possible, in the framework of the Grand-Unification hypothesis, to get a reasonable explanation of the relation Λ_QCD ≪ M_GUT. It is based on the logarithmic renormalization-group dependence of the gauge coupling constant on the energy E. In the leading approximation, this dependence for the strong-interaction coupling α reads

α(E) = α_GUT / (1 + β α_GUT ln(E/M_GUT)),

where β is a positive coefficient which depends on the set of strongly interacting matter fields (in SM, β = 11/(12π)), while α_GUT ∼ 1/30 is the value of the coupling constant of the unified gauge theory at the energy scale ∼ M_GUT.

Figure 13: Hierarchy of scales of gauge interactions.

The scale Λ_QCD, where α becomes large, may be determined in this approximation as

Λ_QCD = M_GUT exp(−1/(β α_GUT)),

and the exponent provides for the required hierarchy. However, a similar analysis is not successful for the electroweak interaction, whose coupling constants are small at the scale v. The latter is unrelated to any dynamical scale and is introduced into the theory as a free parameter.
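The power of this dimensional-transmutation mechanism can be illustrated by taking the formula at face value and asking what exponent bridges the two scales. The values M_GUT = 10^16 GeV and Λ_QCD = 0.2 GeV below are illustrative round numbers; threshold effects change β along the running, so the single one-loop coefficient quoted in the text need not reproduce this exponent exactly:

```python
import math

M_GUT = 1e16      # illustrative unification scale, GeV
LAMBDA_QCD = 0.2  # GeV

# the exponent 1/(beta * alpha_GUT) needed to bridge ~17 orders of magnitude
needed_exponent = math.log(M_GUT / LAMBDA_QCD)   # ~ 38
beta_alpha = 1.0 / needed_exponent               # a modest number, ~ 0.026

# running the formula forward reproduces the low scale
lam = M_GUT * math.exp(-1.0 / beta_alpha)
```

A dimensionless combination β α_GUT of order a few per cent is thus enough to generate a seventeen-orders-of-magnitude hierarchy, which is why no fine tuning is needed for Λ_QCD ≪ M_GUT.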
2. The stability of the hierarchy. In the standard mechanism of the electroweak breaking, the characteristic scale is v = M_H/√(2λ), where λ is the self-interaction constant of the Higgs boson. Together with M_H, the scale v gets, in SM, quadratically divergent radiative corrections,

δv² ∼ δM_H² = f(g) Λ_UV²,

where f(g) is a symbolic notation for a known combination of the coupling constants (in SM, f(g) is of order 0.1) and Λ_UV is the ultraviolet cutoff, which may be interpreted as the energy scale above which SM cannot give a good approximation to reality. This scale may be related to one of the scales M_Pl, M_GUT etc. discussed above; under the assumption of the absence of "new physics", that is of particle interactions other than those already discovered (SM and gravity), one should take Λ_UV ∼ M_Pl. Therefore, since v² = v_0² − δv², where v_0 is the parameter of the tree-level lagrangian, the hierarchy v ≪ M_Pl appears as a result of a cancellation between two huge contributions, v_0² and δv². Each of them is of order f(g)M_Pl² ≫ v², that is, the cancellation has to be precise up to one part in ∼ 10^34 in every order of perturbation theory. This fine tuning of the parameters of the model, though technically possible, does not look natural. One may revert this logic and say that to avoid fine tuning in SM one should have

f(g) Λ_UV² ∼ v² ⇒ Λ_UV ∼ TeV. (3)

The relation (3) gives a base for the optimism of researchers who expect the discovery of not only the Higgs boson but also some new physics beyond SM at the LHC.
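The two numbers in this argument, the required tuning precision and the "natural" cutoff of Eq. (3), follow from a two-line estimate; the loop factor f(g) = 0.1 below is an assumed representative value:

```python
V = 246.0      # electroweak scale, GeV
M_PL = 1.2e19  # Planck mass, GeV
F_G = 0.1      # assumed loop factor f(g)

# precision of the cancellation between v0^2 and dv^2 if Lambda_UV ~ M_Pl
tuning = (V / M_PL) ** 2          # ~ 4e-34

# cutoff for which no tuning is needed: f(g) * Lambda_UV^2 ~ v^2
lambda_natural = V / F_G ** 0.5   # ~ 0.8 TeV
```

The cancellation must hold to roughly one part in 10^34, while the cutoff that removes the need for tuning sits just below a TeV, right in the LHC range.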
3. The gauge desert. The third aspect of the same problem is related to the presumed absence of particles with masses (and of interactions with scales) between the "small" (Λ_QCD, v) and "large" (M_ν, M_GUT, M_Pl) energy scales, see Fig. 13. All known particles are settled in a relatively narrow region of masses ≲ v, beyond which, for many orders of magnitude, lies the so-called gauge desert. Clearly, one may suppose that heavier particles simply cannot be discovered due to the insufficient energy of accelerators, but this suggestion is not that easy to accommodate within the standard approach. Indeed, new relatively light (∼ v) particles which carry the SM quantum numbers are constrained by the electroweak precision measurements. Also, the latest Tevatron and first LHC results on the direct search for new quarks strongly constrain the range of their allowed masses (see [114] and references therein). In particular, for a fourth generation of matter fields similar to the known three, the mass of its up-type quark should exceed 338 GeV, while that of the down-type quark should exceed 311 GeV. The mass of the corresponding charged lepton cannot be lower than 101 GeV [6]. The mass of the fourth-generation standard neutrino should exceed one half of the Z-boson mass, as has already been discussed above. At the same time, these values of the masses of the fourth-generation charged fermions cannot have the same origin as those of the first three generations, because to generate masses much larger than v, Yukawa constants much larger than one are required. Since methods to calculate nonperturbative corrections to masses are yet unknown, in this case one cannot be sure that such masses can be obtained in the usual way at all. Moreover, SM fermion masses exceeding the electroweak breaking scale are forbidden by the SU(2)_L × U(1)_Y gauge symmetry: a mechanism generating these masses would also break the electroweak symmetry at a scale > v. The addition of matter fields which do not constitute full generations may be considered an essential extension of SM. Finally, the addition of new matter affects the energy dependence of the gauge coupling constants and spoils their perturbative unification (unless one adds either full generations or other very special sets of particles of roughly the same mass which constitute full multiplets of a unified gauge group). We see that attempts "to inhabit the gauge desert" inevitably result in significant steps beyond SM, while the desert itself does not look natural.

Attempts to solve the gauge hierarchy problem may also be divided into several large groups.

The most radical approach, rather popular in recent years, is to assume that the high-energy scales are simply absent in Nature. For a theoretical physicist, the easiest scales to give up are M_ν and M_PQ, because they do not appear in all models explaining, respectively, neutrino masses and CP conservation in strong interactions. M_GUT is a bit more difficult: the Grand Unification of interactions gets support not only from aesthetic expectations (electricity and magnetism unified into electrodynamics, electrodynamics and weak interactions unified into the electroweak theory, etc.) and from the arguments related to the electric charge quantization (see e.g. [3]), but also from the analysis of the renormalization-group running of the three SM gauge coupling constants, which reach approximately the same value at the scale M_GUT (see e.g. [2, 3]). It is worth noting that in the plot of α_1,2,3 as functions of energy in SM (see Fig. 14), the three lines do not intersect at strictly one point; however, for an evolution over many orders of magnitude in the energy scale, already an approximate unification is a surprise. To make the three lines intersect precisely at one point, one needs a free parameter, which may be introduced into the theory with some new particles, e.g. with masses ∼ TeV (this happens in particular in models with low-energy supersymmetry, see below).
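The quality of the approximate SM unification can be checked with the standard one-loop running; the inverse couplings at M_Z below are approximate input values, and the pairwise crossing scales show how far the three lines are from meeting at a single point:

```python
import math

M_Z = 91.19  # GeV
# approximate inverse couplings at M_Z (GUT-normalized U(1)_Y)
# and one-loop SM beta coefficients b_i
ALPHA_INV = {1: 59.0, 2: 29.6, 3: 8.45}
B = {1: 41.0 / 10.0, 2: -19.0 / 6.0, 3: -7.0}

def crossing_scale(i, j):
    """Energy at which alpha_i = alpha_j for the one-loop running
    alpha_k^{-1}(E) = alpha_k^{-1}(M_Z) - b_k/(2 pi) * ln(E / M_Z)."""
    t = (ALPHA_INV[i] - ALPHA_INV[j]) * 2.0 * math.pi / (B[i] - B[j])
    return M_Z * math.exp(t)

e_12 = crossing_scale(1, 2)  # ~ 1e13 GeV
e_23 = crossing_scale(2, 3)  # ~ 1e17 GeV
```

The two crossings differ by about four orders of magnitude (roughly 10^13 versus 10^17 GeV), yet both lie in a strikingly narrow window compared to the ~15 decades of running; this is the "approximate unification" referred to in the text.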
Therefore, the truly surprising thing is not the precise unification of couplings in an extended theory with additional parameters, but the approximate unification already present in SM. It is not that easy to keep this miraculous property and at the same time to lower the M_GUT scale in order to avoid the hierarchy v ≪ M_GUT. Indeed, the addition of new particles which affect the renormalization-group evolution either spoils the unification or, to the leading order, does not change the M_GUT scale (note that in SM, the unification occurs in the perturbative regime, and higher corrections do not change the picture significantly).

Figure 14: The energy-scale dependence of the coupling constants of the SM gauge interactions U(1)_Y (full line), SU(2)_W (dashed line) and SU(3)_C (dotted line) in the leading order.

The only possibility is to give up perturbativity (the so-called "strong unification" [115, 116]). In this latter approach, the addition of a large number of new fields in full multiplets of a certain unified gauge group results in the growth of the coupling constants at high energies; QCD ceases to be asymptotically free at energies higher than the masses of the new particles. In the leading order, all three coupling constants then have poles at high energies; the unification of the SM couplings guarantees that the three poles coincide and are located at M_GUT. However, this leading-order approximation has nothing to do with the real behaviour of the constants in the strong-coupling regime, so the theory may generate a new scale M_s at which α_1,2,3 become strong, this scale being an ultraviolet analog of Λ_QCD. For a sufficiently large number of additional matter fields, M_s may be sufficiently close to the electroweak scale v: in certain cases, it might be that M_s ≪ M_GUT (a nonperturbative fixed point). In this scenario, the low-energy observable values of the coupling constants appear as infrared fixed points and do not depend on the unknown details of the strong dynamics. Note that the Grand-unified theory may have degrees of freedom very different from SM in this case.

In the recent decade, models in which the hierarchy problem is solved by giving up the large parameter M_Pl have become quite popular. This parameter is related to the gravitational law, and any attempt to change it requires a modification of Newtonian gravity. This may be achieved, for instance, if the number of space dimensions exceeds three but, for some reason, the extra dimensions remain unseen (see e.g. a review [117]).
Indeed, assume that the extra dimensions are compact and have a characteristic size ∼ R, where R is sufficiently small. Then it is easy to obtain the relation

M_Pl² ∼ R^δ M_{Pl,δ}^{δ+2}, (4)

where δ is the number of extra space dimensions, M_{Pl,δ} is the fundamental parameter of the (4+δ)-dimensional theory of gravity, and M_Pl now is the effective four-dimensional Planck mass. Already at the beginning of the past century, in the works of Kaluza [118], subsequently developed by Klein [119], the possible existence of these extra dimensions, unobservable because of the small R, was discussed. That approach assumed R ∼ 1/M_Pl (and therefore M_Pl ∼ M_{Pl,δ}) and became well known and popular in the second half of the 20th century in the context of various models of string theory, which, however, have not resulted in successful phenomenological applications by now. We will discuss, in a little more detail, another approach, which allows one to make R larger without problems with phenomenology. It is based on the idea of localization of the observed particles and interactions on a four-dimensional manifold in a (4+δ)-dimensional spacetime [120, 121, 122].

From the field-theoretical point of view, the localization of a (4+δ)-dimensional particle means that the field describing this particle satisfies an equation of motion in the variables related to the observed four dimensions (call them x_µ, µ = 0, 1, 2, 3) and to the δ extra dimensions (z_A, A = 1, ..., δ), such that the solution for the z-dependent part is nonzero only in a vicinity (of size ∼ Δ) of a given point in the δ-dimensional space (without loss of generality, one may consider the point z = 0), while the x-dependent part satisfies the usual four-dimensional equations of motion for this field. As a result, the particles described by the field move along the four-dimensional hypersurface corresponding to our world and do not move away from it into the extra dimensions for distances exceeding Δ.
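Relation (4) can be inverted to see how large the compact dimensions may be for a TeV-scale fundamental gravity; the numbers below are a standard order-of-magnitude exercise, not a fit:

```python
HBAR_C = 1.973e-16  # GeV * m, converts inverse GeV to metres
M_PL = 1.2e19       # four-dimensional Planck mass, GeV

def compact_radius(delta, m_star):
    """Size R of the extra dimensions from M_Pl^2 ~ R^delta * M_star^(delta+2),
    returned in metres."""
    r_inv_gev = (M_PL**2 / m_star ** (delta + 2)) ** (1.0 / delta)
    return r_inv_gev * HBAR_C

r_2 = compact_radius(2, 1000.0)  # delta = 2, M* = 1 TeV: millimetre scale
r_6 = compact_radius(6, 1000.0)  # delta = 6: far below any gravity experiment
```

For δ = 2 the required radius is close to a millimetre, which is exactly why the sub-millimetre tests of Newtonian gravity mentioned below are so constraining; for δ = 6 the radius is far too small for any direct gravity experiment.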
This may happen if the particles are kept on the four-dimensional hypersurface by a force from some extended object which coincides with the hypersurface. This solitonlike object is often called a brane, hence the expression "braneworld". The readers of Physics Uspekhi may find a more detailed description of this mechanism in [117]. Because the localisation of light (massless in the first approximation) scalars and fermions in four dimensions is based on the topological properties of the brane, many direct experimental bounds on the size of extra dimensions in a Kaluza-Klein-like model now restrict the region Δ accessible to the observed particles, rather than the size R of the extra dimensions. In [124], it has been suggested to use this possibility, for R ≫ Δ, to remove, according to Eq. (4), the large fundamental scale M_Pl and the corresponding hierarchy. It has been pointed out that in this class of models, R is bounded from above mostly by the nonobservation of deviations from Newtonian gravity at short distances; experiments now exclude such deviations only at scales of order 50 µm [125], while in these models M_{Pl,δ} ∼ TeV, that is, almost of the same order as v. Models of this class are well studied from the phenomenological point of view but have two essential theoretical drawbacks. The first one is related to the apparent absence of a reliable mechanism of localization of gauge fields in four dimensions. The only known field-theoretical mechanism for that [126] is based on certain assumptions about the behaviour of a multidimensional gauge theory in the strong-coupling regime. Though these assumptions look realistic, they currently cannot be considered well justified.
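A toy version of such field-theoretical localization is the classic fermion zero mode on a domain wall (kink); everything below, including the coupling value, is an illustrative assumption rather than a realistic braneworld model:

```python
import math

M = 1.0  # kink mass parameter; the wall thickness is ~ 1/M
C = 3.0  # assumed Yukawa coupling times the scalar vev, in units of M

def zero_mode(z):
    """Unnormalized fermion zero-mode profile in the kink background
    phi(z) = v * tanh(M * z): psi(z) ~ cosh(M * z)**(-C)."""
    return math.cosh(M * z) ** (-C)

# the mode is exponentially concentrated within a few 1/M of the wall at z = 0
ratio = zero_mode(5.0 / M) / zero_mode(0.0)
```

The profile drops by many orders of magnitude within a few wall thicknesses; this width plays the role of the Δ of the text, so the localized particle effectively never strays farther than ∼ 1/M into the extra dimension.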
The second difficulty is aesthetic and is related to the appearance of a new dimensionful parameter R: the hierarchy v ≪ M_Pl happens to be simply reformulated in terms of a new unexplained hierarchy 1/R ≪ M_{Pl,δ}. (Note that recently, a fully analogous mechanism of localisation on one- or two-dimensional manifolds has been tested experimentally in a number of solid-state systems, namely the quantum Hall effect, topological superconductors and topological insulators, and graphene; see e.g. [123].)

To a large extent, these difficulties are overcome in somewhat more complicated models, in which the spacetime cannot be presented as a direct product of our four-dimensional Minkowski space and compactified extra dimensions [127, 128, 129]. The principal difference of this approach from the one discussed above is that the gravitational field of the brane in the extra dimensions is not neglected. For δ = 1 and in the limit of a thin brane, one obtains the usual five-dimensional general-relativity equations. These equations have, in particular, solutions with four-dimensional Poincare invariance. The metric in these solutions is exponential in the extra-dimensional coordinate (the so-called anti-de Sitter space),

ds² = exp(−2k|z|) dx² − dz², (5)

where ds² and dx² are the squares of the five-dimensional and of the usual four-dimensional (Minkowski) intervals, respectively. For a finite size z_c of the fifth dimension, the relation between the fundamental scales is now M_Pl ∼ exp(kz_c) M_{Pl,5}. If the fundamental dimensionful parameters of the five-dimensional gravity satisfy M_{Pl,5} ∼ k ∼ v, one may [129] explain the hierarchy v/M_Pl for kz_c ≈ ln(M_Pl/v) ≈ 38; that is, instead of fine tuning with a precision of 10^−34, one now only needs to adjust the parameters at the level of ∼ 0.1. It is interesting that in models of this kind with two or more extra dimensions, it is possible [130] to localize gauge fields on the brane in the weak-coupling regime, contrary to the case of the factorizable geometry.

A completely different approach to the problem of stabilization of the gauge hierarchy is to add new fields which cancel the quadratic divergencies in the expressions for the running SM parameters. The best-known realization of this approach is based on supersymmetry (see e.g. reviews [131, 132, 133]), which provides for the cancellation of divergencies due to the opposite signs of fermionic and bosonic loops in Feynman diagrams.

The requirement of supersymmetry is very restrictive for the mass spectrum of the particles described by the theory. Namely, together with the observed particles, their superpartners, that is, particles with the same masses and different spins, should be present. The absence of scalar particles with the masses of leptons and quarks, and of fermions with the masses of gauge bosons, means that unbroken supersymmetry does not exist in Nature. It has been shown, however, that it is possible to break supersymmetry while keeping the cancellation of quadratic divergencies. This breaking is called "soft" and naturally results in massive superpartners.

In the minimal supersymmetric extension of SM (MSSM; see e.g. [133]), each of the SM fields has a superpartner with a different spin: the Higgs boson corresponds to a fermion, the higgsino; the matter-field fermions correspond to scalar squarks and sleptons; the gauge bosons correspond to fermions which transform in the adjoint representation of the gauge group and are called gauginos (in particular, the gluino for SU(3)_C, the wino and zino for the W and Z bosons, the bino for the hypercharge U(1)_Y and the photino for the electromagnetic gauge group U(1)_EM). For the theory to be self-consistent (absence of anomalies related to the higgsino loops), and also to generate fermion masses in a supersymmetric way, a second Higgs doublet is introduced, which is absent in SM. The cancellation of quadratic divergencies may be easily seen in Feynman diagrams: in the leading order, closed fermion loops have an overall minus sign and cancel the contributions of the loops of their superpartner bosons. This cancellation is precise as long as the masses of the particles and their superpartners are equal; otherwise the contributions differ by an amount proportional to the difference of the squared masses of the superpartners, Δm². The condition of stability of the gauge hierarchy then requires that (g²/(16π²)) Δm² ≲ v², where g is the coupling constant in the vertex of the corresponding loop (the maximal coupling constant, g ∼ 1, is that of the top quark). We arrive at an important conclusion which motivates in part the current interest in phenomenological supersymmetry: if the problem of stabilization of the gauge hierarchy is solved by supersymmetry, then the superpartner masses cannot exceed a few TeV, which means that they might be found experimentally in the nearest future.

The MSSM lagrangian, in the limit of unbroken supersymmetry, satisfies all the symmetry requirements of SM, including the conservation of the lepton and baryon numbers. At the same time, the SM gauge symmetries do not forbid, for this set of fields, certain interaction terms which violate the lepton and baryon numbers. The coefficients of these terms should be very small in order to satisfy experimental constraints, for instance, those related to the proton lifetime. It is usually assumed that these terms are forbidden by an additional global symmetry U(1)_R. When supersymmetry is broken, this U(1)_R breaks down to a discrete Z_2 symmetry called R parity. With respect to the R parity, all SM particles carry charges +1 while all their superpartners carry charges −
1. The R-parity conservation leads to the stability of the lightest superpartner (see Sec. 3.2).

The soft supersymmetry-breaking terms are introduced into the MSSM lagrangian explicitly. They include the usual mass terms for the gauginos and scalars, as well as trilinear interactions of the scalar fields. In addition to the SM parameters, about 100 independent real parameters are thereby introduced. In general, these new couplings with arbitrary parameters may result in nontrivial flavour physics. The absence of flavour-changing neutral currents and of processes which do not conserve leptonic quantum numbers, as well as the limits on CP violation, narrow the allowed region of the parameter space significantly.

One may note the following characteristic features of phenomenological supersymmetry.

(1). The coupling-constant unification at a high energy scale becomes more precise as compared to SM, if the superpartners have masses ∼ v, as required for the stability of the gauge hierarchy.

(2). In the same regime, the gauge desert between ∼ 10³ GeV and ∼ 10^16 GeV is still present.

(3). In MSSM, there is a rather restrictive bound on the mass of the lightest Higgs boson. In the leading approximation of perturbation theory, it is M_H < M_Z. Accounting for loop corrections allows one to relax this bound slightly, but in most realistic models M_H < 150 GeV is predicted. The absence, so far, of a light Higgs boson, discussed in Sec. 4.1, is a much more serious problem for supersymmetric theories than for SM.

(4). The phenomenological model described above explains the stability of the gauge hierarchy but not its origin. The small parameter v/M, where M = M_GUT or M = M_Pl, does not require tuning in every order of perturbation theory, but it has to be introduced into the model by hand; that is, it can be neither derived nor expressed through a combination of numbers of order one. At the same time, if the supersymmetry breaking is moderate, as required to solve the quadratic-divergency problem, it may be explained dynamically and related to nonperturbative effects which become important at a characteristic scale of

Λ ∼ exp(−O(1/g²)) M,

where g is some coupling constant. If g is small, then the supersymmetry breaking scale is also small, Λ ≪ M. In a number of realistic models it is possible to get, up to powers of the coupling constants, v ∼ Λ dynamically (by means of radiative corrections) and therefore to explain the origin of the gauge hierarchy. However, in the MSSM framework, there is no room for nonperturbative effects of the required scale: these effects are relevant only for QCD, with Λ ∼ Λ_QCD ≪ v. The dynamical supersymmetry breaking should take place in a new sector, introduced expressly for this purpose and containing a new strongly coupled gauge theory with its own set of matter fields. No sign of this sector is seen in experiments, and one consequently supposes that the interaction between the SM (or MSSM) fields and this sector is rather weak and becomes significant only at high energies, unreachable in present experiments.

Figure 15: Constraints on the MSSM parameters [134] in one popular scenario (see text). The allowed region is the narrow strip which can be seen in the zoomed panel.
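The "few TeV" superpartner bound quoted above follows from a one-line estimate, taking the stability condition (g²/16π²)Δm² ≲ v² at face value with g the top Yukawa coupling:

```python
import math

V = 246.0  # electroweak scale, GeV

def superpartner_splitting_bound(g):
    """Largest superpartner mass splitting compatible with
    (g**2 / (16 * pi**2)) * dm**2 < v**2, i.e. dm < 4*pi*v/g."""
    return 4.0 * math.pi * V / g

dm_top = superpartner_splitting_bound(1.0)  # top Yukawa ~ 1: about 3 TeV
```

For g ∼ 1 the splitting, and hence the superpartner masses, must stay at or below a few TeV, which is why the LHC directly probes this scenario.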
Thisinteraction is responsible for the soft terms, that is for mediation of the supersymmetrybreaking from the invisible sector to the MSSM sector. One distinguishes the gravitymediation (at Planck energies) and the gauge mediation of supersymmetry breaking.Gravity-mediated and gauge-mediated models have quite different phenomenology.We see that MSSM, with addition of a sector which breaks supersymmetry dy-namically and of a certain interaction between this hidden sector and the observablefields, may explain the origin and stability of the gauge hierarchy, if the masses of su-perpartners are not very high ( (cid:46)
TeV). Note that the searches for supersymmetry inaccelerator experiments put serious constraints on the low-energy supersymmetry. Al-ready the fact that superpartners have not been seen at LEP implied that a significantpart of the theoretically allowed MSSM parameter space was excluded experimentally.Subsequent results of Tevatron and especially the first LHC data squeeze the allowedregion of parameters significantly, so that for “canonical” supersymmetry, only a verynarrow and not fully natural region of possible superpartner mass remains allowed. InFig. 15, theoretical and experimental (as of summer 2011) constraints on the MSSMparameters are plotted for one rather natural and popular scenario of gravity-mediatedsupersymmetry breaking. The masses of all scalar superpartners at the M GUT energyscale are equal to m in this scenario, while masses of all fermionic superpartners are M / . Their ratios to the supersymmetric mixing matrix of the Higgs scalars, µ , aregiven in the plot. In a scenario which explains the gauge hierarchy, the MSSM param-eters and the Z -boson mass should be of the same order; for instance, in the modelwhich corresponds to the illustration, the following relation holds, M Z (cid:39) . m + 1 . M / − µ . The LHC bound, M / (cid:38)
420 GeV, results in the requirement of not fully natural33ancellations since 1 . M / (cid:38) M Z . Together with the absence of a light Higgs bosondiscussed in Sec. 4.1, this “little hierarchy” problem makes the approach based on su-persymmetry less motivated than it looked some time ago, though there exist variationsof supersymmetric models where this dificulty is overcome. The Higgs field may be a pseudo-Goldstone boson. The Goldstone theorem guar-antees a massless (even with the account of radiative corrections!) scalar particle foreach generator of a broken global symmetry. A weak explicit violation of this symmetryallows to give a small mass to this scalar to get the so-called pseudo-Goldstone boson.The same mechanism results in a low but nonzero mass of some composite particlesin a strongly-interacting theory (for instance, of the π meson). A direct application ofthis approach to the Higgs boson is not possible because the interaction of a pseudo-Goldstone particle with other fields contains derivatives and is very different from theSM interactions. Realistic models of this kind with large coupling constants and withinteractions without derivatives, at the same time free from quadratic divergencies,are called the “Little Higgs models” (see e.g. [135] and references therein). Diagramby diagram, the absence of quadratic divergencies occurs due to complicated cancella-tions of contributions of a number of particles with masses of order TeV, in particularof additional massive scalars. Note that to reconcile a large number of new particleswith experimental constraints, in particular with those from the precision electroweakmeasurements, the model requires significant complications. Composite models: besides the Little Higgs models, a composite Higgs scalar isconsidered in a number of other constructions, see e.g. [136]. 
In some rather popular models with composite quarks and leptons, the SM matter fields, together with the Higgs boson (or even without it), represent low-energy degrees of freedom of a strongly coupled theory, just as hadrons may be considered as low-energy degrees of freedom of QCD. The mass scales of the theory, v in particular, are determined by the scale Λ at which the running coupling constant of the strongly coupled theory becomes large, analogously to Λ_QCD. The hierarchy Λ ≪ M_Pl is then determined by the evolution of couplings in the fundamental theory. These models generalize, to some extent, the technicolour models, having more freedom in their construction at the price of even more complications in the quantitative analysis. Note that (at least) in some supersymmetric gauge theories, low-energy degrees of freedom may also include gauge fields, so in principle one may consider models in which all SM particles are composite (see e.g. [116, 136]). On the other hand, the correspondence between strongly coupled four-dimensional models and weakly coupled five-dimensional theories (see Sec. 5.3) may open prospects for a quantitative study of composite models. It might even happen that the approaches to the gauge-hierarchy problem based on extra space dimensions are equivalent to the approaches which invoke strongly coupled composite models. As in other approaches, to explain the hierarchy, the scale Λ should not exceed the electroweak scale v significantly, so the LHC constraints on compositeness of quarks and leptons (roughly Λ ≳ (4…5) TeV) may again be problematic.
Conclusion.
All known scenarios which explain the origin and stability of the gauge hierarchy without extreme fine tuning predict new particles and/or interactions at an energy scale not far above the electroweak scale. The absence of experimental signs of these particles, especially with the account of the first LHC data, questions the ability of these scenarios to solve the hierarchy problem. If the LHC finds the Higgs scalar but confirms neither the predictions of any of the models discussed above nor signs of some other, not yet invented, mechanism, then one would have to reconsider the question of the naturalness of the fine tuning. A fundamentally different position, based on the anthropic principle, is seriously discussed but lies beyond the scope of our consideration.

Figure 16: Masses of the charged SM fermions. The area of each circle is proportional to the mass of the corresponding particle.
As it has already been pointed out, the SM fermionic fields, quarks and leptons, comprise three generations, that is, three sets of particles with identical interactions but with very different masses (see Fig. 16 for a pictorial illustration). The hierarchy of these masses is one of the biggest puzzles of particle physics. Indeed, for instance, the electron (m_e = 0.511 MeV), the muon (m_µ = 105.7 MeV) and the τ lepton (m_τ = 1777 MeV) carry identical gauge quantum numbers. For quarks, it is convenient to determine the mass matrix whose diagonal elements determine the masses of the quarks of the three generations with identical interactions, while combinations of non-diagonal elements provide for the possibility of mixing between generations. The hierarchical structure appears both in the diagonal elements (which differ by orders of magnitude) and in the off-diagonal ones (the mixing is suppressed). In the SM framework, neutrinos are strictly massless and the mixing of charged leptons is absent, but the same hierarchical structure is seen in the set of masses of charged leptons.

As we have discussed in Sec. 2, the experiments of the past decade not only established confidently the fact of neutrino oscillations (pointing therefore to nonzero neutrino masses and giving the first laboratory indication of the incompleteness of the SM) but also opened the possibility of a quantitative study of neutrino masses and of the mixing in the leptonic sector. It is interesting that the neutrino masses and the leptonic mixings also have a hierarchical structure, but it is very different from the corresponding hierarchy in the quark sector: contrary to the suppressed quark mixings, the leptonic mixing is maximal, while the hierarchy of neutrino masses is at the same time moderate. A modern theory which successfully explains the fermion masses should motivate both hierarchical structures and explain why they are different.

Figure 17: A model with extra space dimensions which explains the mass hierarchy.

Meanwhile, even without the neutrino sector, the intergeneration mass hierarchy is very difficult to explain.
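The size of this hierarchy is easy to quantify from the charged-lepton masses just quoted; a trivial numerical check (masses in MeV, rounded PDG values):

```python
# Charged-lepton masses in MeV (rounded PDG values). The two intergeneration
# ratios are large and, within the SM, completely unexplained, even though the
# gauge interactions of e, mu and tau are identical.
m_e, m_mu, m_tau = 0.511, 105.7, 1777.0
print(round(m_mu / m_e), round(m_tau / m_mu))  # prints: 207 17
```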
A natural idea is to suppose that there is an extra global symmetry which relates the fermionic generations to each other and which is spontaneously broken; however, this approach is not successful because it implies the existence of a massless Goldstone boson, the so-called familon, whose parameters are strictly constrained by experiments [6].

A model of fermion masses should explain only the origin of the hierarchy: its stability is provided automatically by the fact that all radiative corrections to the fermion-Higgs Yukawa constants, to which the fermion masses are proportional, depend on the energy logarithmically, that is, weakly; this does not, however, make the issue significantly less complicated.

An explanation of the hierarchy may be obtained in a model with extra space dimensions (Fig. 17), in which a single generation of particles in six-dimensional spacetime effectively describes three generations in four dimensions [137, 138]. Each multidimensional fermionic field has three linearly independent solutions which are localized on the four-dimensional hypersurface and have different behaviour close to the brane. Denoting by r, θ the polar coordinates in the two extra dimensions and considering the brane at r = 0, one gets for the three solutions at r → 0

u_0 ∼ const, u_1 ∼ r e^{iθ}, u_2 ∼ r² e^{2iθ}.

The Higgs scalar has a vacuum expectation value v(r) which depends on r and is nonzero only in the immediate vicinity of the brane. The effective observable fermion masses are proportional to the overlap integrals

m_i ∝ ∫ dr dθ v(r) |u_i|²(r, θ)

of the coordinate-dependent vacuum expectation value v and the extra-dimensional parts of the fermionic wave functions which correspond to the three localized solutions (i = 0, 1, 2). Since these solutions behave as different powers of r in the region where v(r) is localized, the m_i are hierarchically different. Therefore, in this model the mass hierarchy follows from the linear independence of eigenfunctions of the Dirac operator in a particular external field.
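The mechanism is easy to check in a toy numerical sketch: take |u_i|² ∼ r^{2i} near the brane and a Higgs profile v(r) localized within a small radius ε, and evaluate the overlap integrals. The Gaussian profile and the value of ε below are illustrative assumptions, not the exact solutions of the six-dimensional model; only the power-law behaviour of the modes near r = 0 is taken from the text.

```python
import numpy as np

# Toy version of the overlap-integral mass formula m_i ∝ ∫ dr dθ v(r) |u_i|².
# Assumptions (for illustration only): |u_i|² ~ r^(2i) near the brane, and a
# Gaussian Higgs profile v(r) of width eps localized at r = 0.
eps = 0.05
r = np.linspace(1e-6, 1.0, 200_000)
dr = r[1] - r[0]
v = np.exp(-(r / eps) ** 2)           # localized vacuum expectation value v(r)

def overlap(i):
    u2 = r ** (2 * i)                 # |u_i|² ~ r^(2i) for the i-th mode
    u2 = u2 / (u2 * r * dr).sum()     # normalize: ∫ |u_i|² r dr = 1
    return (v * u2 * r * dr).sum()    # m_i ∝ ∫ v(r) |u_i|² r dr

m0, m1, m2 = overlap(0), overlap(1), overlap(2)
print(m0 / m1, m1 / m2)               # both ratios ≈ 10²: a double hierarchy
```

Each ratio m_i/m_{i+1} scales roughly as 1/ε², so a single moderately small width of the Higgs profile produces two large mass gaps at once, which is the point of the construction.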
The same model automatically describes the required structure of neutrino masses and mixings [139]. Presently, this is the only known model in which the hierarchies of the families of both charged fermions and neutrinos are obtained on common grounds. Note that, contrary to other multidimensional models (e.g. [140]), in this model the number of free parameters is smaller than the number of parameters it describes.

Compared to the hierarchy of masses of particles with identical interactions from different generations, the question of the difference of masses of particles within a generation is much easier. For instance, the difference between the masses of the τ lepton and the b and t quarks may be explained by the different (because of different quantum numbers) renormalization-group evolution of the Yukawa couplings, so that at the Grand Unification scale these constants are equal while at low energies they are different.

In this section, we discuss the question of the practical applicability of quantum field theory to the description of interactions with large coupling constants, and in particular to the low-energy limit of QCD. It would not be an exaggeration to say that most of the theoretical achievements in quantum field theory in the past two decades were related to this problem. Before proceeding to the discussion of these achievements, let us note that despite significant progress, the problem of the description of strong interactions at low energies in terms of QCD is not solved, so the development of the corresponding methods remains one of the basic tasks of quantum field theory.

Recall that QCD, which describes the strong interaction at high energies, is a gauge theory with the gauge group SU(3)_C and N_f = 6 fermions, quarks, which transform under its fundamental representation, and the same number of antiquarks transforming under the conjugate representation.
A peculiarity of the model is that the asymptotic states, in terms of which the quantum theory is constructed, do not coincide with the fundamental fields in terms of which the Lagrangian is written, that is, with the fermions (quarks) and gauge bosons (gluons). On the contrary, the observable particles do not carry SU(3)_C quantum numbers (this phenomenon is called confinement). The observable strongly interacting particles are hadrons, whose classification and interactions allow one to interpret them as bound states of quarks. At the same time, the theory which describes the interaction of quarks, QCD, is unable to calculate the properties of these bound states. Intuitively, it seems possible to relate confinement and the formation of hadrons to the energy dependence of the QCD gauge coupling constant, which grows with the decrease in energy, that is, with the increase in distance (the flip side of the so-called asymptotic freedom), and becomes large, α_s ∼ 1, at the scale Λ_QCD ∼ 150 MeV: when the distance between quarks is increased, the force between them increases as well, and maybe this force binds them into hadrons. This picture is, however, not fully consistent, because at α_s ≳ 1 the perturbative expansion stops working and the true energy dependence of the coupling constant is unknown. Indeed, there exist examples of theories with asymptotic freedom but without confinement [141].

Figure 18: Electromagnetic pion form factor [142]: experimental data versus theoretical calculations, perturbative (QCD, the dashed line) and nonperturbative (full lines representing working models which are not derived from QCD).

To understand the nature of confinement and to describe the properties of hadrons from first principles (and, in the end, to answer whether QCD is applicable to the description of hadrons), one requires methods of field theory which do not make use of the expansion in powers of the coupling constants (non-perturbative methods). It is natural to assume (and it was assumed for a long time) that perturbative QCD has to describe well the physics of strong interactions at characteristic energies above a few hundred MeV, because the coupling constant becomes large at ∼ 150 MeV. A number of recent experimental results related to the measurement of the form factors of π mesons question the applicability of perturbative methods at considerably higher momentum transfer (a few GeV). In general, form factors are the coefficients by which the true amplitude of a process involving composite or extended particles differs from the same amplitude calculated for point-like particles with the same interaction. These coefficients are determined by the internal structure of the particles (for instance, by the distribution of the electric charge); their particular form depends on the process considered and on the value of the square of the momentum transfer, Q². A full theory describing the interaction which keeps the particles in the bound state should allow for the derivation of the form factors from first principles. The results of the experimental determination of the form factors of π mesons in various processes are given in Figs. 18, 19. One may see that perturbative QCD experiences some difficulties in explaining the experiment at momentum transfers of a few GeV.

Figure 19: The transition form factor of the π meson which describes the process π → γγ: experimental data [143] versus calculations of perturbative QCD. QCD predicts the behaviour Q²F(Q²) ∼ const (the horizontal full line); at least up to √Q² of a few GeV, the data instead follow a growing power of Q² (the dotted line).

Non-perturbative methods are being developed in two main directions: numerical calculations within the lattice formulation of the theory, and the construction of an effective theory in terms of degrees of freedom which correspond to observable particles. In the latter case the main unsolved question is, as a rule, to justify the connection of the effective theory to QCD. To some extent, progress in this direction became possible within the concept of dual theories discussed below.

The Feynman functional integral is a formally strict approach to the quantization of fields, equivalent to other approaches in the domain of applicability of perturbation theory.
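To give a flavour of how such an integral is evaluated numerically, here is a minimal Metropolis Monte Carlo sketch for a one-dimensional toy, the Euclidean quantum-mechanical oscillator on a periodic time lattice (not QCD, and not the author's calculation); the lattice size, spacing and sweep counts are illustrative choices.

```python
import numpy as np

# Euclidean path integral of a harmonic oscillator (m = omega = 1) on a
# periodic lattice of N sites with spacing a, sampled with the Metropolis
# algorithm. All numbers are illustrative, not tuned for precision.
rng = np.random.default_rng(1)
N, a = 64, 0.5
x = np.zeros(N)

def action_diff(x, i, new):
    """Change of the lattice action S = sum[(x_{i+1}-x_i)^2/(2a) + a*x_i^2/2]
    when site i is moved from x[i] to `new`."""
    left, right = x[(i - 1) % N], x[(i + 1) % N]
    s_old = (right - x[i])**2 / (2*a) + (x[i] - left)**2 / (2*a) + a * x[i]**2 / 2
    s_new = (right - new)**2 / (2*a) + (new - left)**2 / (2*a) + a * new**2 / 2
    return s_new - s_old

def sweep(x):
    for i in range(N):
        new = x[i] + rng.uniform(-1.0, 1.0)
        if rng.random() < np.exp(-action_diff(x, i, new)):
            x[i] = new          # accept with probability min(1, exp(-dS))

for _ in range(200):            # thermalization sweeps
    sweep(x)
samples = []
for _ in range(2000):           # measurement sweeps
    sweep(x)
    samples.append(np.mean(x**2))
print(np.mean(samples))         # compare with the continuum value <x^2> = 1/2
```

Real lattice QCD follows the same logic, with gluon and quark fields on a four-dimensional lattice and enormously larger configurations, which is why such calculations require supercomputers.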
It is natural to suppose that in the nonperturbative domain this method also reproduces the results which would be obtained within the standard framework if the means to get them existed. Numerical calculation of the functional integral is possible in lattice calculations, in which the continuous and infinite spacetime is replaced by a finite discrete lattice (see e.g. [144]). In modern calculations, lattices of sizes of order 32 × … are used; the masses of the u and s quarks are free parameters, the d-quark mass is assumed to be equal to that of the u quark, and the contributions of the heavy c, b and t quarks are neglected. Besides these two parameters (m_u = m_d and m_s) there is one more, the physical length which corresponds to a unit step of the lattice. To determine the masses of hadrons, these three parameters should be specified, so in real calculations one assumes that the masses of, say, the π and K mesons and the Ω baryon are known, while all other masses and decay constants are expressed through them. One might try to fix the masses of heavier particles and to calculate those of the lightest ones, but a confident calculation of the masses of light hadrons requires a large lattice. Currently, the mass of the π meson may be calculated only up to an order of magnitude in this way.

Figure 20: Results of the lattice calculations of the hadron masses. The masses of the π and K mesons and the Ω baryon are taken as input parameters. The calculations have been performed in the three-quark approximation, m_u = m_d ≠ m_s. The histogram gives the experimentally measured values of the masses [6]; the points (with the error-bar rectangles) represent the results of the calculations [146].

At high temperature, one expects a transition to a state in which quarks cannot be confined in hadrons, that is, a phase transition. In reality, these conditions appear in collisions of nuclei at high-energy colliders; probably, they also took place in the very early Universe.
By means of the lattice methods, the existence of this phase transition has been demonstrated, its temperature has been determined, and the dependence of the order of the phase transition on the quark masses has been studied [147, 148].

It is an open theoretical question to prove that the continuum limit of a lattice field theory exists (that is, that the physical results do not depend on the way in which the lattice size tends to infinity and the lattice step tends to zero) and coincides with QCD. It may happen that this proof is impossible in principle unless one finds an alternative way to work with QCD at strong coupling. However, there exists a series of arguments suggesting that the lattice theory indeed describes QCD (first of all, the fact that the lattice calculations reproduce experimental results). At the same time, theoretically the difference between the lattice and continuum theories is large; for instance, instantons, configurations which are topologically stable in the continuum theory and which determine the structure of the vacuum in nonabelian gauge theories, are not always stable on the lattice; the lattice description of chiral fermions (automatic in a continuum theory) requires complicated constructions, etc.

In the past two decades, in attempts to relate low-energy models of strong interactions to QCD, theorists created a number of successful descriptions of the dynamics of theories with large coupling constants in terms of other theories in which perturbation theory works. These theories, called dual to each other, have coupling constants g₁ and g₂ for which g₁ ∼ 1/g₂; the knowledge of the Green functions of one theory allows one to calculate, following certain known rules, the Green functions of the other. Note that the theory dual to QCD has not been constructed up to now.

The simplest example of duality (see e.g. [149]) is the theory of the electromagnetic field with magnetic charges.
The Maxwell equations in vacuum are invariant with respect to the exchange of the electric field E and the magnetic field B:

E ↦ B, B ↦ −E. (6)

This duality breaks down in the presence of electric charges and currents. It may be restored, however, if one assumes that sources of the other kind exist in Nature, namely magnetic charges and the currents which correspond to their motion. The self-consistency of the theory requires the Dirac quantization condition: the unit electric charge e and the unit magnetic charge ẽ have to satisfy the relation e ẽ = 2π. The charge e is the coupling constant of the usual electrodynamics, while the magnetic charge ẽ is the coupling constant of the theory of the interaction of magnetic charges, which is obtained from electrodynamics by the duality transformation (6). Therefore, the weak coupling of electric charges, e ≪ 1, corresponds to the strong coupling of magnetic ones, ẽ = 2π/e ≫ 1.

A famous nontrivial example of duality is provided by the SU(2) supersymmetric theory with two supercharges (N = 2), which is related to the names of Seiberg and Witten [150]. From the particle-physics point of view, this model is an SU(2) gauge theory with scalar and fermionic fields transforming under the adjoint representation of the gauge group, whose interaction is invariant under a special symmetry. For this model, the effective theory which describes the interaction of light composite particles at low energies has been calculated, and the correspondence between the effective low-energy and the fundamental degrees of freedom has been established. Like QCD, the fundamental theory is asymptotically free and is in the strong-coupling regime at low energies; the effective theory describes weakly interacting composite particles.

The success of the Seiberg-Witten model gave rise to a hope that the low-energy effective theory for a nonsupersymmetric gauge model with strong coupling, for instance for QCD, may be obtained from the problem already solved, by means of the addition of supersymmetry-breaking terms to the Lagrangians of both the fundamental and the dual theories. The first step in this direction was to consider N = 1 supersymmetric gauge theories. Earlier, starting from the mid-1980s, a number of exact results had been obtained in these theories by making use of the analytical properties of the effective action governed by supersymmetry [151]. In contrast with the case of N = 2 supersymmetry, this is insufficient for the reconstruction of the full effective theory, but models dual to supersymmetric gauge theories with different gauge groups and matter content have been suggested [152]. Contrary to the N = 2 case, it is impossible to prove the duality here, but the conjecture has withstood all the checks carried out.
Moreover, it has been shown that the addition of small soft breaking terms to the Lagrangians of N = 1 theories corresponds to a controllable soft supersymmetry breaking in the dual models [153]. Unfortunately, one may prove that with the increase of the supersymmetry-breaking parameters (for instance, when the superpartner masses tend to infinity, so that the N = 1 theory becomes QCD), a phase transition happens and the dual description stops working, so the straightforward application of this approach to QCD is not possible [154]. Also, it is worth noting that the approach does not allow for a quantitative description of the dynamics at intermediate energies, when the coupling constants of the dual theories are both large. Nevertheless, these methods themselves, as well as the physical intuition based on their application, have played an important role in the development of other modern approaches to the study of the dynamics of strongly coupled theories.

One of the theoretically most beautiful and practically most promising approaches to the analysis of the dynamics of strong interactions at low and intermediate energies is the so-called holographic approach. Its idea is that dual theories may be formulated in spacetimes of different dimensions, in such a way that, for instance, the four-dimensional dynamics of a theory with a large coupling constant is equivalent to the five-dimensional dynamics of another theory which is weakly coupled (in a way similar to the two-dimensional description of a three-dimensional object with a hologram). The best-known realization of this approach is based on the AdS/CFT correspondence [155, 156], a practical realization of the duality between a strongly coupled gauge theory with four-dimensional conformal invariance (CFT = conformal field theory) and a multidimensional supergravity with a weak coupling constant. The four-dimensional conformal symmetry includes the Poincaré invariance supplemented by dilatations and inversions.
An example of a nontrivial four-dimensional conformal theory with a large coupling constant g is the N = 4 supersymmetric Yang-Mills theory with the gauge group SU(N_c) which, in the limit N_c → ∞, g²N_c ≫ 1, appears to be dynamically equivalent to a certain supergravity theory living on the ten-dimensional AdS₅ × S⁵ manifold, where AdS₅ is the (4+1)-dimensional space with the anti-de Sitter metric (5) and S⁵ is the five-dimensional sphere (the S⁵ factor is almost irrelevant in applications, hence the name, AdS/CFT correspondence). In the limit considered, these two models are equivalent. To proceed with phenomenological applications, one has to break the conformal invariance. As a result, the theory has fewer symmetries, so the results proven by making use (direct or indirect) of these symmetries are downgraded to conjectures. Nevertheless, this not fully rigorous approach (sometimes called AdS/QCD) brings interesting phenomenological results.

An example is provided by a five-dimensional gauge theory defined on a finite interval in the z coordinate of the AdS space (other geometries of the extra dimensions are also considered). For the SU(2) × SU(2) gauge group and a special matter content one gets an effective theory with the symmetries of QCD. The series of Kaluza-Klein states corresponds to the sequence of mesons, whose masses and decay constants may therefore be calculated directly in the five-dimensional theory. This approach has been successful; it allows one to calculate various physical observables (in particular, the π-meson form factor discussed above) which agree reasonably with data. A disadvantage of the method is that the duality between QCD and the five-dimensional effective theory is not proven. As a result, the choice of the latter is somewhat arbitrary. An undisputable advantage of this approach is its phenomenological success achieved without a large number of tuning parameters, as well as the possibility to calculate observables at intermediate energies and not only in the zero-energy limit. One may hope that in the future a low-energy effective theory for QCD might be derived in the framework of this approach.

Conclusions.
The Standard Model of particle physics has given an excellent description of almost all data obtained at accelerators for several decades already. At the same time, the results of both a number of non-accelerator experiments (neutrino oscillations) and astrophysical observations cannot be explained in the framework of the SM and undoubtedly point to its incompleteness. A more complete theory, yet to be constructed, should allow for a derivation of the SM parameters and for an explanation of their, theoretically not fully natural, values. The main unsolved problem of the SM itself is to describe the dynamics of gauge theories at strong coupling, which would allow one to apply QCD to the description of hadrons at low and intermediate energies.

One may hope that in the next few years the particle theory will get additional experimental information both from the Large Hadron Collider, a powerful accelerator which is bound to explore the entire range of energies related to the electroweak symmetry breaking, and from numerous experiments of smaller scales (in particular, those studying neutrino oscillations, rare processes, etc.) and astrophysical observations. Possibly, this information will allow a successful extension of the Standard Model to be constructed already in the coming decade.

This work was born (and grew up) from a review lecture given by the author at the Physics Department of the Moscow State University. I am indebted to V. Belokurov, who suggested converting this lecture into a printed text, read the manuscript carefully and discussed many points. I thank V. Rubakov and V. Troitsky for attentive reading of the manuscript and numerous discussions, M. Vysotsky and M. Chernodub for useful discussions related to particular topics, and A. Strumia for his kind permission to use Fig. 15.
The work was supported in part by the RFBR grants 10-02-01406 and11-02-01528, by the FASI state contract 02.740.11.0244, by the grant of the Presidentof the Russian Federation NS-5525.2010.2 and by the “Dynasty” foundation.
References

[1] Krasnikov N V, Matveev V A, New physics at the Large Hadron Collider, Moscow, Krasand, 2011 (in Russian); Krasnikov N V, Matveev V A, Phys. Usp.; The standard model: A primer, Cambridge University Press, 2006
[3] Cheng T P, Li L F, Gauge Theory Of Elementary Particle Physics, Oxford, Clarendon, 1984
[4] Gorbunov D S, Rubakov V A, Introduction to the theory of the early universe: hot big bang theory, World Scientific, 2011
[5] Kobayashi M, Phys. Usp. 12 (2009)
[6] Nakamura K et al. [Particle Data Group], J. Phys. G; Fundamentals of Neutrino Physics and Astrophysics, Oxford University Press, 2007
[8] Bilenky S M, Phys. Usp.; Phys. Usp. 117 (2004)
[10] Kudenko Yu G, Phys. Usp. 549 (2011)
[11] Evans J J, arXiv:1107.3846 [hep-ex]
[12] Pontecorvo B, Sov. Phys. JETP 429 (1957)
[13] Pontecorvo B, Sov. Phys. JETP 152 (1957)
[14] Maki Z, Nakagawa M, Sakata S, Prog. Theor. Phys. 870 (1962)
[15] Pontecorvo B, Sov. Phys. JETP 984 (1968)
[16] Gribov V N, Pontecorvo B, Phys. Lett. B 493 (1969)
[17] Bilenky S M, Pontecorvo B, Sov. J. Nucl. Phys. 316 (1976)
[18] Bilenky S M, Pontecorvo B, Lett. Nuovo Cim. 569 (1976)
[19] Eliezer S, Swift A R, Nucl. Phys. B 45 (1976)
[20] Fritzsch H, Minkowski P, Phys. Lett. B 72 (1976)
[21] Mikheev S P, Smirnov A Y, Sov. J. Nucl. Phys. 913 (1985)
[22] Mikheev S P, Smirnov A Y, Nuovo Cim. C 17 (1986)
[23] Wolfenstein L, Phys. Rev. D; Phys. Rev. Lett.; et al. [KAMIOKANDE-II Collaboration], Phys. Rev. Lett. 16 (1989)
[26] Abazov A I et al. [SAGE Collaboration], Phys. Rev. Lett.; et al. [GALLEX Collaboration], Phys. Lett. B 127 (1999)
[28] Ashie Y et al. [Super-Kamiokande Collaboration], Phys. Rev. Lett.; et al. [KAMIOKANDE-II Collaboration], Phys. Lett. B 416 (1988)
[30] Casper D et al., Phys. Rev. Lett.; et al., Phys. Lett. B 491 (1997)
[32] Ambrosio M et al. [MACRO Collaboration], Phys. Lett. B 451 (1998)
[33] Fukuda Y et al. [Super-Kamiokande Collaboration], Phys. Rev. Lett.; et al. [SNO Collaboration], Phys. Rev. Lett.; et al. [KamLAND Collaboration], Phys. Rev. Lett.; et al. [SNO Collaboration], arXiv:1109.0763
[37] Bellini G et al. [Borexino Collaboration], arXiv:1104.1816 [hep-ex]
[38] Ashie Y et al. [Super-Kamiokande Collaboration], Phys. Rev. D; et al. [Super-Kamiokande Collaboration], talk at Neutrino-2010, Athens, 14-19 June 2010
[40] Ahn M H et al. [K2K Collaboration], Phys. Rev. D; et al. [MINOS Collaboration], Phys. Rev. Lett.; et al. [The MINOS Collaboration], Phys. Rev. Lett.; et al. [OPERA Collaboration], Phys. Lett. B 138 (2010)
[44] Fogli G L et al., Phys. Rev. Lett.; et al. [T2K Collaboration], Phys. Rev. Lett.; et al. [MINOS Collaboration], arXiv:1108.0015 [hep-ex]
[47] Fogli G L et al., arXiv:1106.6028 [hep-ph]
[48] Aguilar A et al. [LSND Collaboration], Phys. Rev. D; et al., Phys. Rev. D; et al. [The MiniBooNE Collaboration], Phys. Rev. Lett.; Geneva, 1-6 August 2011
[52] Mention G et al., Phys. Rev. D; et al. [MiniBooNE Collaboration], Phys. Rev. Lett.; Lepton-Photon 2011, Mumbai, 22-27 August 2011
[55] Abe K et al. [Kamiokande Collaboration], arXiv:1109.1621
[56] Anselmann P et al. [GALLEX Collaboration], Phys. Lett. B 440 (1995); Kaether F et al., Phys. Lett. B 47 (2010)
[57] Abdurashitov D N et al. [SAGE Collaboration], Phys. Rev. Lett.; et al. [SAGE Collaboration], Phys. Rev. C; Phys. Rev. C; Phys. Rev. D; et al. [MiniBooNE Collaboration], Phys. Rev. Lett.; et al., Phys. Lett. B 227 (1999)
[62] Aguilar-Arevalo A A et al. [MiniBooNE Collaboration], arXiv:1109.3480 [hep-ex]
[63] Adam T et al. [OPERA Collaboration], arXiv:1109.4897 [hep-ex]
[64] Strumia A, Phys. Lett. B 91 (2002)
[65] Maltoni M et al., Nucl. Phys. B 321 (2002)
[66] Akhmedov E, Schwetz T, JHEP 115 (2010)
[67] Murayama H, Yanagida T, Phys. Lett. B 263 (2001)
[68] Bogolyubov N N, Shirkov D V, Introduction to the theory of quantized fields, Intersci. Monogr. Phys. Astron.; Phys. Usp. 825 (2005); Tsukerman I S, arXiv:1006.4989 [hep-ph]
[70] Diaz J S, Kostelecky A, arXiv:1108.1799 [hep-ph]
[71] Engelhardt N, Nelson A E, Walsh J R, Phys. Rev. D; Phys. Rev. D; et al., arXiv:1108.5034 [hep-ex]
[76] Kraus C et al., Eur. Phys. J. C 447 (2005)
[77] Hannestad S et al., JCAP 001 (2010)
[78] Gorbunov D S, Rubakov V A, Introduction to the theory of the early universe: Cosmological perturbations and inflationary theory, World Scientific, 2011
[79] Rubakov V A, Phys. Usp.; Phys. Usp. 633 (2011)
[81] Sakharov A D, JETP Lett. 24 (1967)
[82] Rubakov V A, Shaposhnikov M E, Phys. Usp. 461 (1996)
[83] Rubin V C, Thonnard N, Ford W K, Astrophys. J. 471 (1980)
[84] Begeman K G, Astron. Astrophys. 47 (1989)
[85] The digitized sky survey (DSS), in [88]
[86] Zwicky F, Astrophys. J. 217 (1937)
[87] Limousin M et al., Astrophys. J.; et al., Astrophys. J. L109 (2006)
[91] Bradac M et al., Astrophys. J. 937 (2006)
[92] Alcock C et al. [MACHO Collaboration], Astrophys. J. 281 (2000); Tisserand P et al. [EROS-2 Collaboration], Astron. Astrophys. 387 (2007)
[93] Riess A G et al. [Supernova Search Team Collaboration], Astron. J. 1009 (1998)
[94] Perlmutter S et al. [Supernova Cosmology Project Collaboration], Astrophys. J. 565 (1999)
[95] Perlmutter S, Physics Today (4) 53 (2003)
[96] Riess A G, Press W H, Kirshner R P, Astrophys. J. 88 (1996)
[97] Perlmutter S et al. [Supernova Cosmology Project Collaboration], Astrophys. J. 565 (1997)
[98] Amanullah R et al., Astrophys. J. 712 (2010); data for Fig. 11 are taken from http://supernova.lbl.gov/Union/
[99] Jullo E et al., Science 924 (2010)
[100] Komatsu E et al. [WMAP Collaboration], Astrophys. J. Suppl. 18 (2011)
[101] Marinoni C, Buzzi A, Nature 539 (2010)
[102] Khoury J, Weltman A, Phys. Rev. D; Particle physics and inflationary cosmology, Harwood Academic Publishers, 1990
[104] Barate R et al. [LEP Working Group for Higgs boson searches, ALEPH, DELPHI, L3 and OPAL Collaborations], Phys. Lett. B 61 (2003)
[105] Sharma V et al. [CMS Collaboration], talk at Lepton-Photon 2011, Mumbai, 22-27 August 2011
[106] Nisati A et al. [ATLAS Collaboration], talk at Lepton-Photon 2011, Mumbai, 22-27 August 2011
[107] Verzocchi M et al. [CDF and D0 Collaborations], talk at Lepton-Photon 2011, Mumbai, 22-27 August 2011
[108] Baak M et al., arXiv:1107.0975
[109] Grojean C, Phys. Usp.; Nucl. Phys. B 141 (1979)
[112] Csaki et al., Phys. Rev. D; Phys. Usp. 390 (2007)
[114] Atwood D, Gupta S K, Soni A, arXiv:1104.3874 [hep-ph]
[115] Ghilencea D, Lanzagorta M, Ross G G, Phys. Lett. B 253 (1997)
[116] Rubakov V A, Troitsky S V, arXiv:hep-ph/0001213
[117] Rubakov V A, Phys. Usp. 211 (2003)
[118] Kaluza T, Sitzungsber. Preuss. Akad. Wiss. Berlin, Math.-Phys. Kl. (1) 966 (1921)
[119] Klein O, Z. Phys. 895 (1926)
[120] Akama K, Lecture Notes Phys. 267 (1983)
[121] Rubakov V A, Shaposhnikov M E, Phys. Lett. B 136 (1983)
[122] Visser M, Phys. Lett. B 22 (1985)
[123] von Klitzing K, Nobel lecture (1985); Fu L, Kane C L, Phys. Rev. Lett.; Phys. Rev. B; The Universe in a helium droplet, Int. Ser. Monogr. Phys. (2006)
[124] Arkani-Hamed N, Dimopoulos S, Dvali G, Phys. Lett. B 263 (1998)
[125] Kapner D J et al., Phys. Rev. Lett.; Phys. Lett. B 64 (1997)
[127] Rubakov V A, Shaposhnikov M E, Phys. Lett. B 139 (1983)
[128] Gogberashvili M, Mod. Phys. Lett. A; Phys. Rev. Lett.; Phys. Lett. B 113 (2000)
[131] Vysotsky M I, Nevzorov R B, Phys. Usp. 919 (2001)
[132] Gorbunov D S, Dubovsky S L, Troitsky S V, Phys. Usp. 623 (1999)
[133] Kazakov D I, arXiv:hep-ph/0012288
[134] Strumia A, JHEP 073 (2011)
[135] Schmaltz M, Tucker-Smith D, Ann. Rev. Nucl. Part. Sci. 229 (2005)
[136] Gherghetta T, arXiv:1008.2570 [hep-ph]
[137] Libanov M, Troitsky S, Nucl. Phys. B 319 (2001)
[138] Frere J-M, Libanov M, Troitsky S, Phys. Lett. B 169 (2001)
[139] Frere J-M, Libanov M, Ling F S, JHEP 081 (2010)
[140] Dvali G R, Shifman M A, Phys. Lett. B475, 295 (2000)
[141] Iwasaki Y et al., Phys. Rev. Lett. 21 (1992)
[142] Krutov A F, Troitsky V E, Tsirova N A, Phys. Rev. C; et al. [The BABAR Collaboration], Phys. Rev. D; Lattice gauge theory, in: Encyclopedia of Mathematical Physics, Academic Press, Oxford (2006)
[145] Creutz M, Phys. Rev. D; et al. [PACS-CS Collaboration], Phys. Rev. D; Z. Phys. C 253 (1981)
[148] Aoki Y et al., Nature 675 (2006)
[149] Tsun T S, Electric-magnetic duality, in: Encyclopedia of Mathematical Physics, Academic Press, Oxford (2006)
[150] Seiberg N, Witten E, Nucl. Phys. B 19 (1994) [Erratum-ibid. 486 (1994)]
[151] Affleck I, Dine M, Seiberg N, Nucl. Phys. B 493 (1984)
[152] Seiberg N, Nucl. Phys. B 129 (1995)
[153] Evans N J, Hsu S D H, Schwetz M, Phys. Lett. B 475 (1995)
[154] Aharony O et al., Phys. Rev. D; Adv. Theor. Math. Phys. 231 (1998) [Int. J. Theor. Phys.]; Adv. Theor. Math. Phys. 2