Signs of Susy
Chris Wymant
A thesis presented for the degree of Doctor of Philosophy, 2013
Institute for Particle Physics Phenomenology, Department of Physics, University of Durham, UK

For my father, for giving me a map but not directions.

Abstract

After a brief introduction to 21st-century fundamental physics suitable for the layman with a reasonable level of mathematical competence, I introduce the concept of unnaturalness in Standard Model electroweak symmetry breaking and Supersymmetry (Susy) as a potential solution. The optimally natural situation in Susy in light of the 2012 discovery of a Higgs boson is derived, namely that of almost maximal mixing, with the scalar top partners almost as light as can be. The discovery is also interpreted numerically in terms of the Next-to-Minimal Supersymmetric Standard Model, with greater emphasis placed on the visibility of the Higgs boson at the observed mass, i.e. on signal strengths. I introduce simple models of gauge-mediated Susy breaking (GMSB), and how their generalisation leads to a richer parameter space. I then investigate the role played by the mediation scale of GMSB: this is found to act as a control of the extent to which Yukawa couplings de-tune flavour-blind relations set by gauge couplings. Finally, issues relating to the discovery or exclusion of Susy at colliders are discussed. Bounds are derived for the masses of new particles from Large Hadron Collider searches for excesses of jets and missing energy without leptons, and compared to constraints arising from Higgs boson searches, for models of GMSB and the Constrained Minimal Supersymmetric Standard Model. I present a novel search strategy for new physics signatures with two neutral, stable particles, when such particles are produced by boosted decays. (Susy examples include models with light gravitinos, pseudo-goldstinos, singlinos or new photinos.) The method is shown to produce sharp mass peaks that enhance the visibility of the signal.

Contents

I Prelude: From Classical Mechanics To Quantum Field Theory
II Weak-Scale Susy's Raison d'Etre: The Higgs
  The … Stop Mass?
  2.4 Implications
  … m_χ < 15 GeV
  3.4 Higgs Signals For Arbitrary m_χ

III Gauge-Mediated Susy Breaking
  R Symmetry And Susy Breaking
  4.2 The Gauge Mediation Parameter Space

IV Susy Searches At The LHC
  Introduction

Appendices
  Appendix A The Higgs As A Pseudo-Nambu-Goldstone Boson ('Little Higgs')
  Appendix B Optimal Naturalness Beyond Leading Log δm²_{H_u}
  Appendix C The 1.04 fb⁻¹ Search For Susy With Jets And Missing Energy

Declaration

The copyright of this thesis rests with the author. No quotation from it should be published without the author's prior written consent, and information derived from it should be acknowledged. No part of this thesis has been submitted for any other degree or qualification. Part I and Sections 1, 4 and 6 are introductory. Sections 2, 3, 5, 7 and 8 are based on original research done by myself, the first of these working alone and the rest in collaboration with others, published in articles referenced prominently at the start of each Section.

Acknowledgements
I gratefully thank the following people.

• First and foremost, my collaborators during my graduate studies: Daniel Albornoz Vasquez, Geneviève Bélanger, Céline Bœhm, Jonathan Da Silva, Christoph Englert, David Grellscheid, Jörg Jäckel, Peter Richardson, Michael Spannowsky, and especially my supervisor Valya Khoze. Thank you for your ideas and contributions to our shared projects, and for many enlightening discussions.

• Steven Abel and Ben Allanach for their detailed examination of this thesis and the resulting suggestions.

• Jeppe Andersen, Matt Dolan, Claude Duhr, Ilan Fridman Rojas, Hendrik Hoeth, Boaz Keren-Zur, Sabine Kraml, Daniel Maître, Alberto Mariotti, Michael Schmidt and Pietro Slavich, for helpful conversations.

• Howie Baer, Sven Heinemeyer, Kees Jan De Vries, Thomas Rizzo, Marc Sher, David Shih, Oscar Stål, Carlos Wagner, and particularly Olivier Mattelaer, Tim Stefaniak and Karina Williams, for useful correspondence.

• Adrian Signer for teaching me Susy with a healthy scepticism from the start.

• Frank Krauss for a casual comment taken seriously: theorists wanting to be listened to by experimentalists should suggest new signals.

• Mike Hobson, Julia Riley and David Tong for inspiring undergraduate teaching, and Lukas Witkowski for a shared enthusiasm.

• Other participants and particularly the organisers of both the "Implications of a 125 GeV Higgs boson" workshop at LPSC Grenoble, January 2012, and the Cargèse International School 2012, for stimulating environments that gave birth to ideas in this thesis.

• Trudy, Linda and Mike for help with logistics and computing.

• Jiri, for keeping me sane while I was running Durham men's volleyball, and Tom for keeping me sane while I wasn't.

• All of my family, particularly my sister, mother and father for much support over many years (and my Nana for a postgraduate course in cooking!).

• Surtout Catherine.

This work was supported by the STFC.

An Apology
Throughout this work I will generally refer to the massive scalar boson associated with electroweak symmetry breaking as the Higgs rather than the Higgs boson, simply because the latter sounds awkward to my ears in passages where references to this particle come thick and fast. My apologies to Peter Richardson and any others who take exception to this.

Lists Of Figures And Tables
Figures which benefit from being printed in colour are found on pages 39, 40, 46, 50, 51, 54, 65, 67, 68, 69, 77, 79, 81, 83, 91, 94 and 95. Other pages in this thesis can happily be printed in black and white with no loss of information.

Figure  Page
1   38
2   39
3   40
4   46
5   48
6   50
7   51
8   51
9   52
10  54
11  55
12  65
13  66
14  67
15  68
16  69
17  77
18  79
19  81
20  83
21  85
22  86
23  91
24  93
25  93
26  94
27  95
28  103

Table  Page
1  22
2  29
3  60
4  60
5  64
6  107

Abbreviations Used In The Main Text
BSM – beyond the Standard Model
c.c. – complex conjugate
CMSSM – Constrained Minimal Supersymmetric Standard Model
Eq. – Equation
EWSB – electroweak symmetry breaking
Fig. – Figure
GGM – general gauge mediation
GMSB – gauge-mediated supersymmetry breaking
GUT – grand unified theory
irrep – irreducible representation (of a group)
LEP – Large Electron-Positron (Collider)
LHC – Large Hadron Collider
LHS – left-hand side (of an equation)
LO – leading order (usually in the sense of a perturbative Feynman-diagrammatic calculation)
LSP – lightest supersymmetric particle
MET – missing transverse energy, E̸_T = |p̸_T|
MSSM – Minimal Supersymmetric Standard Model
NGB – Nambu-Goldstone boson
NLO – next-to-leading order (usually in the sense of a perturbative Feynman-diagrammatic calculation)
NLSP – next-to-lightest supersymmetric particle
NMSSM – Next-to-Minimal Supersymmetric Standard Model
pNGB – pseudo-Nambu-Goldstone boson
QCD – quantum chromodynamics
QFT – quantum field theory
RG – renormalisation group
RGI – renormalisation group invariant
RHS – right-hand side (of an equation)
SM – Standard Model
Susy – Supersymmetry
UV – ultraviolet (in the sense of high energy scales)
VEV – vacuum expectation value
WMAP – Wilkinson Microwave Anisotropy Probe
Non-Selective Glossary Of Important/Confusing Terms For BSM/Collider Phenomenology
Terms explained elsewhere in the glossary are italicised.
Acceptance – the fraction of events which pass the cuts in a given experimental analysis.
Background – anything which is not the physics one is trying to see (usually new physics), but unfortunately looks like it.
Background, reducible – a background consisting of misidentified or mismeasured particles, which could therefore be eliminated (in principle) by perfect measurement.
Background, irreducible – a background with truly the same final state, though produced via a different intermediate particle/particles.
Cuts – the set of requirements we choose to impose on the final state and phase space in order to significantly reduce the number of uninteresting events without decreasing the number of interesting events too much.
A Decade (of RG-running) – a factor of ten in between two energy scales. For example, after two decades of running starting at an energy scale Q one would be at a scale 10²Q (or 10⁻²Q).

Decay cascade – the set of one or more decay steps from the initially produced particles to the final state.

Event – a single collision of particles (specifically protons at the LHC) together with what happens as a result of the collision. For simulated collisions, what happens at an intermediate stage can be forced; for example one may simulate x events where particles collide and produce a Higgs boson which decays arbitrarily. Compare with experiment, where requirements can only be imposed on detected particles, not intermediates, by definition. For example in x fb⁻¹ of data (which corresponds to a certain number of collisions y, i.e. y total events), after focussing only on events where the detected particles meet certain interesting criteria one is left with z events (z < y). In this case, unlike the simulated case, one cannot say with certainty what happened in any given event, and only statistical statements can be made. Observed events which are more likely to have proceeded via intermediate new physics rather than established physics (i.e. the background) are described as candidate events.

Final state – the set of outgoing particles (i.e. those that don't decay further before being detected) resulting from a collision. Can be understood either in a precise sense more appropriate to the calculation of amplitudes, such as a final state of one electron, one electron neutrino and a u ū quark pair; or in a broad sense more appropriate at the detector level, such as one charged lepton, missing energy and (a perhaps unspecified number of) jets. In either of these senses the term final state usually refers just to the identity and multiplicity of the particles involved. However sometimes the term is taken to include some information about the phase space as well, e.g. 'a final state of hard jets'.
Hard – energetic. A hard scattering is one which involves a large exchange of energy, for example the production of heavy particles in a collision; a hard object is an object with large three-momentum (or simply large transverse momentum at hadron colliders, where z momentum is less relevant). In both cases the antonym is soft.

High-level objects – isolated leptons, isolated photons, jets (as opposed to the individual hadrons inside), missing momentum, all defined within the spatial coverage of the detector.

K-factor – the ratio of a cross-section (or possibly some other calculable observable) calculated at a given order in perturbation theory to the same quantity calculated at the order below. Usually the ratio of NLO to LO is implied.
Matrix element – an element of the S matrix, i.e. the amplitude for a given initial state to interact and produce some specific particles.

Multiplicity – number of. For example, jet multiplicity = number of jets.
Phase space – the space of all possible three-momenta for the (on-shell) final state particles.
Signal strength (in a given final state) – production cross-section multiplied by branching ratio (into the given final state), possibly also multiplied by the relevant acceptance, and possibly normalised to some expectation.
Soft – see hard.

Part I

Prelude: From Classical Mechanics To Quantum Field Theory
(A Sketch For The Lay-Reader Familiar With Calculus And Vectors)

    Classical Mechanics  --(v ≪ c fails)-->  Special Relativity
            |                                       |
      (S ≫ ℏ fails)                           (S ≫ ℏ fails)
            v                                       v
    Quantum Mechanics    --(v ≪ c fails)-->  Quantum Field Theory
A reformulation of Newtonian mechanics, where an object of mass m experiencing a force F undergoes an acceleration a = F/m, is as follows. The Lagrangian L is defined to be the difference between the kinetic energy T and the potential energy V, and is a function of the coordinate q of the system (e.g. the position of a particle) and how rapidly that coordinate is changing with time, q̇ (the dot denoting one time derivative):

    L(q, q̇) = T − V    (0.1)

The Action S is the integral of the Lagrangian over the time interval we are interested in – from t₁ to t₂. It is therefore a functional of q(t) – an infinite-dimensional function in a sense, in that it depends on the value of the function q(t) at each of the (infinitely many) instants between t₁ and t₂.

    S[q(t)] = ∫_{t₁}^{t₂} L(q, q̇) dt    (0.2)

Of all possible time-evolutions of the system q(t), the one chosen by our universe is the one which minimises the action: the functional derivative of S with respect to q(t) vanishes:

    δS/δq = 0    (0.3)

(We consider systems with more than one coordinate, e.g. multiple particles or one particle able to move through more than one dimension, in the same way as presented here but replacing q → q = (q₁, q₂, …). I will focus on a single q for clarity when introducing functionals.)

(To understand what a functional derivative is, a vector example helps. Take a single number which is a function of a vector x, e.g. the dot product of the vector with some other vector: Σᵢ xᵢbᵢ. We can differentiate this single number with respect to the vector xᵢ and the result is a vector: bᵢ. Writing it this way, with the index i unspecified, allows us to express simultaneously the different results that arise when we choose to differentiate with respect to different components of the vector x.)

The resulting equation(s) of motion can be obtained by solving the Euler-Lagrange equation:

    ∂L/∂q − d/dt (∂L/∂q̇) = 0,    (0.4)

where q and q̇ are considered independent for the partial differentiation. As an example, the familiar dynamics of a single particle of mass m in a potential V in one dimension x are obtained from the following Lagrangian:

    L = ½ m ẋ² − V(x)    (0.5)

which via (0.4) gives

    m ẍ = −dV/dx    (0.6)

This shows how spatial variation of the potential forces the particle to accelerate towards regions of lower energy.

Special relativity tells us that when velocities become relativistic, i.e. non-negligible fractions of the speed of light c, we leave the regime of Newtonian mechanics. A further startling prediction relevant even at non-relativistic velocities is that mass is merely another form of energy – the now familiar relation

    E = mc²    (0.7)

We can study the relativistic dynamics of individual objects, approximated as particles: we find that time in a reference frame in motion relative to our observations passes more slowly, and that space contracts along the direction of motion. By analogy with the mixing in the directions called 'left', 'right', 'in front' and 'behind' as one spins – these quantities are not absolute but depend on the direction one is facing – time and space are not absolute, indeed they are not even separate: they are mixed together by motion. We can thus think of them as separate parts of the same thing: spacetime, x^{μ=0,1,2,3} = (t, x) = (t, x, y, z).

The context of special relativity is a good one to introduce the concept of a field. Essentially, a field is just something which is a function of position in space: in our three dimensions of space, it's a function of x, y and z, and it can also change with time. A scalar field is a field characterised by a single number as a function of space and time. For example, one could describe the spatial and temporal variation of temperature with a scalar field – one number at each point, giving the local temperature at that moment. A vector field is a field characterised instead by a vector – a magnitude and a direction – at each point in space and time.
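The Euler-Lagrange recipe of Eqs. (0.4)-(0.6) can be checked symbolically. The following sketch (an illustration, not part of the thesis) uses the sympy library, with a harmonic potential V = ½kq² chosen purely for definiteness:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q, qdot = sp.symbols('q qdot')  # treated as independent, as in Eq. (0.4)
x = sp.Function('x')

# Lagrangian of Eq. (0.5) with the illustrative choice V(q) = k q^2 / 2
L = sp.Rational(1, 2) * m * qdot**2 - sp.Rational(1, 2) * k * q**2

# Euler-Lagrange equation (0.4): dL/dq - d/dt (dL/dqdot) = 0,
# restoring q -> x(t), qdot -> dx/dt before the total time derivative
dL_dq = sp.diff(L, q)
dL_dqdot = sp.diff(L, qdot)
subs = {q: x(t), qdot: sp.diff(x(t), t)}
eom = dL_dq.subs(subs) - sp.diff(dL_dqdot.subs(subs), t)

# Expect Eq. (0.6): m xddot = -dV/dx = -k x
expected = -k * x(t) - m * sp.diff(x(t), t, 2)
assert sp.simplify(eom - expected) == 0
print(eom)
```

For any other potential one only needs to change the V term in L; the same two partial derivatives reproduce the corresponding equation of motion.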
(A function q(t) is an infinite-dimensional vector of sorts – there is a different value associated with each 'index' t, with t continuous. By analogy with the vector example, when we differentiate a functional with respect to a function, the result – a functional derivative – is a function.)

Whereas previously the Lagrangian was a function of the coordinate q and its time derivative q̇, now it is an integral over the spatial region of interest V of a quantity defined at each point in space: the Lagrangian density ℒ, with density meaning per unit volume. ℒ is a function of the field and its spacetime derivatives; the latter are obtained with the operator ∂_μ = ∂/∂x^μ. For example, for a scalar (just one number) field φ in three spatial dimensions x:

    φ = φ(x^μ),    ℒ = ℒ(φ, ∂_μφ),    L = ∫_{x∈V} ℒ d³x    (0.8)

Otherwise the equations of motion are obtained almost exactly as in the previous section:

    S[φ] = ∫_{t₁}^{t₂} L dt = ∫_{t₁}^{t₂} ∫_{x∈V} ℒ d³x dt,    (0.9)

    δS/δφ = 0    (0.10)

    ⟹ ∂ℒ/∂φ − Σ_μ ∂_μ [∂ℒ/∂(∂_μφ)] = 0    (0.11)

As we take t₁ → −∞, t₂ → +∞ and V to be all of three-dimensional space, V → ℝ³ – in other words if we are interested in all spacetime – the action is a Lorentz invariant: the same number for all observers regardless of their relative motion.

An example: a free (non-interacting), massive scalar field φ:

    ℒ = ½ Σ_{μ,ν} g^{μν} ∂_μφ ∂_νφ − ½ m²φ²    (0.12)

which via (0.11) gives

    Σ_{μ,ν} g^{μν} ∂_μ∂_νφ + m²φ = 0,    (0.13)

where g^{μν} is the metric of spacetime, which defines how to contract two spacetime vectors together to obtain a Lorentz invariant: it is the diagonal 4 × 4 matrix g_{μν} = g^{μν} = diag{+1, −1, −1, −1}. (A more familiar example of this kind of object is the metric of ordinary three-dimensional space, the three-by-three unit matrix g_{ij} = diag{+1, +1, +1}. When we contract two vectors together with this metric, a·b = Σ_{i,j} g_{ij} aᵢbⱼ = Σᵢ aᵢbᵢ, the result is invariant under rotations of the space, i.e. SO(3) transformations.)

We can solve (0.13) by first taking the Fourier transform of φ(x^μ): expressing it as an arbitrary linear superposition (actually integrating rather than summing) of exp(i Σ_{μ,ν} g_{μν} p^μ x^ν) terms, with i the imaginary number √−1. Plugging this into (0.13) we find the constraint that p^μ = (√(m² + p²), p), with p = (p_x, p_y, p_z) arbitrary and the linear superposition of terms still arbitrary. Each term in this superposition taken singly represents a wave with wave-vector/momentum p and an associated energy E = √(m² + p²). Note that the energy does not vanish as the momentum vanishes: E → m as p → 0. This energy gap between the vacuum (zero energy) and the lowest energy mode (the limit p → 0) is the mass of the field. The situation is the same as for particles: there is a minimum energy E = mc² needed to create a stationary particle, and further energy becomes its kinetic energy.

If all speeds that we encounter in our problem are well below c, we do not need to worry about extending classical mechanics with special relativistic effects. However if we consider actions (recall Eq. (0.2)) so small that they're comparable to the fundamental constant of nature ℏ – the reduced Planck constant – we enter the realm of quantum mechanics. Here we discover that there is an inherent uncertainty at the microscopic level; for example a particle cannot possess a well-defined position and momentum simultaneously. The state of a system is described by a wave function, often denoted |ψ⟩. For a single particle this could be a probability distribution for its position and momentum. We associate with each physical observable an operator. It is a law of quantum mechanics that any time we measure a property of a system, we

(A function is something which takes a number and returns another number; e.g. "2x" takes any number and returns that number doubled. An operator, simply understood, takes a function and returns another function; e.g. "d/dx" takes any function of x and returns its first derivative. This definition of an operator makes sense in the quantum mechanical context when we consider wave functions |ψ⟩ that are simple functions, such as position/momentum probability distribution functions: the operator acting on |ψ⟩ returns some other function.)

(The eigenvectors of an operator are points in the relevant space which are mapped back onto themselves, multiplied by a constant called the eigenvalue. For example all points on the z axis are eigenvectors of the operator in three-dimensional space "rotate around the z axis", with eigenvalue 1 (note that points away from the z axis are not eigenvectors). Another example is the operator "d/dx", whose eigenvectors are e^{kx} (with k any constant) with eigenvalue k.)
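Both eigenvector examples can be verified directly. A minimal numerical check (an illustration only, using the numpy and sympy libraries):

```python
import numpy as np
import sympy as sp

# Rotation about the z axis by an angle theta: points on the z axis are
# eigenvectors with eigenvalue 1
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
v = np.array([0.0, 0.0, 3.0])        # a point on the z axis
assert np.allclose(Rz @ v, 1.0 * v)  # mapped onto itself: eigenvalue 1

w = np.array([1.0, 0.0, 0.0])        # off-axis: not an eigenvector
assert not np.allclose(Rz @ w, (Rz @ w)[0] * w)

# The operator d/dx has eigenfunctions e^(kx) with eigenvalue k
x, k = sp.symbols('x k')
f = sp.exp(k * x)
assert sp.simplify(sp.diff(f, x) - k * f) == 0
```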
(We may consider wave functions that are less readily understood as simple functions, however, such as those describing a particle's intrinsic spin. In this context the more general definition of an operator – a mapping from one vector space to another – is appropriate: a quantum mechanical operator is a mapping from the space of all possible wave functions onto that same space (but not necessarily the same point in that space!).)
If the system really is in this state, then when wemeasure the z -axis spin we force it to change state either into | ↑(cid:105) or into | ↓(cid:105) : this forced change is referred to as the collapse of the wavefunction .How (or indeed if) this really happens has been the subject of muchdebate, known as the measurement problem .In general when the system is in a state | ψ (cid:105) and we want to know theprobability of finding it in the particular state | φ (cid:105) (a probability which is notnecessarily 0 by definition, since | ψ (cid:105) may be a superposition of states, oneof which is | φ (cid:105) ), this probability is |(cid:104) φ | ψ (cid:105)| . The quantity (cid:104) φ | ψ (cid:105) is highlyanalogous to taking the dot product of two vectors to quantify how much theyoverlap; indeed | ψ (cid:105) and | φ (cid:105) are vectors in the (possibly infinite-dimensional)space of all possible states the system can be in.Time evolution of a state | ψ (cid:105) occurs according to the Schr¨odinger equation : ∂∂t | ψ (cid:105) = − i (cid:126) ˆ H | ψ (cid:105) , (0.14)where ˆ H is the Hamiltonian – the operator whose eigenvalues are the possibleenergies the system (which are sometimes discrete, e.g. the energy levels ofelectrons in atoms). From Eq. (0.14) we see that if the system is in a state ofdefinite energy – it is an eigenstate of ˆ H with eigenvalue E – its dependenceon time is simply given by a factor exp( − iEt ). If | ψ (cid:105) is not a state of definiteenergy, we can still write down a solution to Eq. (0.14) with the correct timedependence by inspection: | ψ ( t = T ) (cid:105) = exp (cid:18) − i (cid:126) ˆ HT (cid:19) | ψ ( t = 0) (cid:105) (0.15)However understanding what this solution really means may be highly non-trivial because in general ˆ H is the sum of non-commuting operators. If operators ˆ A and ˆ B do not commute, this means ˆ A ˆ B (cid:54) = ˆ B ˆ A . 
This is a strange concept at first, because we're mostly used to normal numbers, for which 2 × 3 = 3 × 2; but Â could be the
If at a time t = 0 a particle has definite position q₁, a state we call |q₁⟩, then at a later time T the state of the system is

    |ψ(t = T)⟩ = exp(−(i/ℏ) Ĥ T) |q₁⟩    (0.16)

The probability to observe the particle at a definite, different position q₂ at this later time is

    |⟨q₂|ψ(t = T)⟩|² = |⟨q₂| exp(−(i/ℏ) Ĥ T) |q₁⟩|²    (0.17)

A gorgeous piece of mathematics shows that the contents of the |…| can be evaluated as a path integral:

    ⟨q₂| exp(−(i/ℏ) Ĥ T) |q₁⟩ = ∫_{q(t=0)=q₁}^{q(t=T)=q₂} e^{iS[q]/ℏ} 𝒟q,    (0.18)

where we integrate over the infinite number of possible functions of time q(t) in the interval t ∈ [0, T], subject to the boundary conditions q(t = 0) = q₁ and q(t = T) = q₂. In a sense this is an infinite-dimensional integral, since the space of possible functions is infinite dimensional.
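The amplitude of Eq. (0.17) can be explored numerically by placing the particle on a small spatial lattice. In the sketch below (an illustration only) the free Hamiltonian p²/2m is approximated by a standard finite-difference matrix – an assumption of this sketch, not a construction from the text – and we check that exp(−iĤT/ℏ) conserves total probability:

```python
import numpy as np
from scipy.linalg import expm

hbar, m, dx, N = 1.0, 1.0, 0.5, 51

# p^2/2m as a finite-difference matrix on an N-site lattice
H = np.zeros((N, N))
for i in range(N):
    H[i, i] = 2.0
    if i > 0:
        H[i, i - 1] = -1.0
    if i < N - 1:
        H[i, i + 1] = -1.0
H *= hbar**2 / (2 * m * dx**2)

# Start localised at a site q1; evolve with exp(-i H T / hbar), Eq. (0.16)
T = 2.0
U = expm(-1j * H * T / hbar)
psi0 = np.zeros(N, dtype=complex)
psi0[N // 2] = 1.0
psiT = U @ psi0

# |<q2|psi(T)>|^2 summed over all lattice positions q2 must equal 1:
# H is Hermitian, so the evolution operator is unitary
total_prob = np.sum(np.abs(psiT)**2)
print(total_prob)  # ~1.0
```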
Central to the idea of QFT is that there exists a field, permeating the whole of space, for each kind of fundamental particle; a single particle is simply a localised excitation of its field, able to propagate through space.

The surface of a body of water can be described by a field (height as a function of position) which can have two opposite kinds of excitation – peaks and troughs – both with positive energy but which are able to cancel each other out. The same is true of the fundamental particle fields – they may support both particle and antiparticle excitations, and pairs of these may annihilate one another.

(…operation 'rotate an object 90° left' and B̂ the operation 'rotate an object 90° forwards'. You can see for yourself with whatever object you have to hand that the order in which these operations are performed matters. In the context of quantum mechanics, a bit of simple maths shows that two operators representing two physical properties that cannot both take definite values simultaneously, such as position and momentum, must not commute; and indeed the position and momentum operators generally feature in the Hamiltonian.)

This picture has superseded that of the Dirac sea. The latter consists of a vacuum which is an infinite number of negative energy particle states all filled, and no positive states filled; antiparticles are then interpreted as available negative energy states, which can annihilate particles in an intuitive way.

In QFT the kinds of objects we wish to compute resemble those of normal quantum mechanics – probability amplitudes for a given initial state to later be observed as a given final state (see the glossary). A result familiar from the description of mundane every-day waves is that energy is inversely related to distance – higher energy waves have smaller wavelengths – and therefore to understand nature at ever smaller length scales, the relevant processes for our calculation of probability amplitudes are those in which large amounts of energy are exchanged.
Of course we must actually carry out these processes as well, so that our calculations, and the theories that define them, can be tested. In practice this is done by colliding particles together as hard as we can, then recording the relative number of occurrences of the different final states that result (i.e. establishing probability distributions for the final states). Hence the initial states for our calculations generally consist of two particles travelling towards each other at a high relative velocity; the final states considered will be everything that those two particles can produce after interacting with each other in an arbitrary way. As with the quantum mechanical path integral, the probability amplitude for a transition between given states can be calculated as the integral of exp(iS/ℏ) over the space of all possible things that could have happened during the transition. However previously this object was simpler: the integral was over all possible functions of time the coordinate could take, q(t). Now our transition is from two localised excitations of fields to some other number of localised excitations, and the associated fields could do anything in between times: for each of these fields (and indeed any other field that interacts with them), we must integrate over all possible behaviours as a function of time for each point in space! In other words we have an infinite-dimensional integral of the form Eq. (0.18) associated with each of the infinitely many points in continuous space.

These tremendously daunting mathematical objects have been computed exactly in certain theories simpler than those of direct relevance to our universe and our current particle colliders. In the latter cases, we typically have to resort to an expansion of the probability amplitude (that is, a systematic grouping of the infinite number of contributing terms into a series consisting first of the largest, then the second largest, etc.
ad infinitum) followed by a truncated calculation of as much of the series as we are able to do in a given number of man-hours. (Note that not everything is possible – conservation rules such as the conservation of energy, momentum and electric charge imply that the final state must have the same values for these quantities as the initial state.) Each term in the expansion can be represented by a Feynman diagram, with the translation between diagram and term established by a set of universal Feynman rules. Crudely speaking, the simpler the Feynman diagram – the smaller the number of intermediate, localised excitations of the fields connecting the initial configuration of fields (two incoming particles) to the final one – the larger the term it represents. This is because connecting the initial and final states with an increasing number of interactions between fields usually means, via the Feynman rules, that the term has an increasingly small pre-factor. Sometimes this is not the case, when the coupling between fields is 'strong' (meaning strong enough to compensate for the small pre-factor); then the terms at each step of the expansion are as big as those preceding them, and without the ability to sum the entire infinite series we lose calculability. In the author's opinion, this is the greatest unsolved problem in mathematical physics.

An important difference between QFT and classical field theory is that the parameters appearing in the Lagrangian density ℒ no longer relate directly to physical observables in the same way. For example, the dimensionless number multiplying a term containing two or more different fields (thus allowing the respective particles to interact), known as a coupling or coupling constant, does not quantify the interaction strength, or at least not in a simple way. The relations between the parameters of ℒ and their corresponding observables now contain infinities, which have to be subtracted by hand in such a way as to leave a set of parameters that agree with the experimentally established values by construction. Note that choosing the parameters of a model of nature in order to reproduce what is known to be true is often regarded as a bad thing, as science should make predictions that allow for falsification.
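The expand-then-truncate procedure, and its breakdown at strong coupling, can be illustrated with a zero-dimensional caricature of a path integral, Z(g) = ∫dx exp(−x²/2 − gx⁴): expanding exp(−gx⁴) in powers of g and integrating term by term plays the role of the Feynman-diagram expansion. (This toy model is an illustration, not an example taken from the main text.)

```python
import math
from scipy.integrate import quad

def term(g, n):
    # (-g)^n / n! times the Gaussian moment
    # Int x^(4n) exp(-x^2/2) dx = sqrt(2 pi) * (4n)! / (4^n * (2n)!)
    return ((-g)**n / math.factorial(n) * math.sqrt(2 * math.pi)
            * math.factorial(4 * n) / (4**n * math.factorial(2 * n)))

g = 0.001  # weak coupling: the truncated series works beautifully
Z_exact, _ = quad(lambda x: math.exp(-x**2 / 2 - g * x**4),
                  -math.inf, math.inf)
Z_series = sum(term(g, n) for n in range(4))  # keep only a few "diagrams"
assert abs(Z_series - Z_exact) / Z_exact < 1e-6

# At strong coupling (g = 1 here) successive terms soon GROW instead of
# shrinking, and truncation at any fixed order fails
ratios = [abs(term(1.0, n + 1) / term(1.0, n)) for n in range(5)]
assert ratios[-1] > ratios[0] > 1
print(ratios)
```

The growing ratios are the toy version of the loss of calculability described above: when the coupling compensates the small pre-factors, no finite truncation of the series approximates the full answer.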
However predictivity is only lost when there is enough flexibility in the model parameters to accommodate any experimental result; in the case of QFT there are considerably more observables than those corresponding directly to the parameters of ℒ, and these are genuine predictions, which match known measurements to a simply stunning level of precision.

The removal of infinities to set the parameters of ℒ, known as renormalisation, must be done at a particular energy scale. An extremely deep feature of QFT is that, once this has been done, a different set of parameters is appropriate for describing particle interactions at a different energy scale; in other words, the renormalised (infinity-subtracted) parameters of ℒ run with energy scale. The coupled differential equations controlling this running – the renormalisation group (RG) equations – are an example of something predicted by a given QFT, once each of the parameters of ℒ has been defined at one scale. The physical picture often used to summarise the positive running of the electromagnetism coupling is the following. In probing a charged point-like particle with a low-energy photon, one is really probing only the rough area of space the charge is sitting in, due to the photon's long wavelength. This space also contains virtual charged particle-antiparticle pairs by virtue of quantum uncertainty, with the opposite charges tending to lie closer to the real physical particle (in the usual manner of polarisable media), shielding the charge that is effectively seen. This effect decreases the smaller the wavelength of the photon, hence the increasing strength of the electromagnetic coupling with energy.

Part II

Weak-Scale Susy's Raison d'Etre: The Higgs
The Standard Model is the mathematical description of our most fundamental understanding of particle physics. It is a quantum field theory in 3 + 1 dimensions of spacetime with Poincaré invariance, the gauge group $SU(3)_c \times SU(2)_L \times U(1)_Y$, and the field content shown in Table 1.

Table 1: The field content of the Standard Model, with representations under the $SU(3)_c \times SU(2)_L \times U(1)_Y$ gauge group; $i = 1, 2, 3$ labels the generations.

scalars: $H$ $(\mathbf{1}, \mathbf{2}, \tfrac{1}{2})$
fermions: $Q_i$ $(\mathbf{3}, \mathbf{2}, \tfrac{1}{6})$, $\bar u_i$ $(\bar{\mathbf{3}}, \mathbf{1}, -\tfrac{2}{3})$, $\bar d_i$ $(\bar{\mathbf{3}}, \mathbf{1}, \tfrac{1}{3})$, $L_i$ $(\mathbf{1}, \mathbf{2}, -\tfrac{1}{2})$, $\bar e_i$ $(\mathbf{1}, \mathbf{1}, 1)$
vector bosons: $g$ $(\mathbf{8}, \mathbf{1}, 0)$, $W$ $(\mathbf{1}, \mathbf{3}, 0)$, $B$ $(\mathbf{1}, \mathbf{1}, 0)$

The Lagrangian density $\mathcal{L}_{\rm SM}$ specifies the physical behaviour of the particle excitations of these fields, by quantifying their masses, mixings and interaction strengths. A point of great interest concerning $\mathcal{L}_{\rm SM}$ is that all terms except one are marginal or exactly renormalisable, being the product of fields with total mass dimension 4 and a dimensionless coefficient. The single relevant or super-renormalisable term, having a dimensionful coefficient, is the mass-squared for the Higgs field: $\mathcal{L}_{\rm SM} \supset -m_H^2 H^\dagger H$. (Masses for all the other particles arise from dimensionless couplings to the Higgs field, which acquires a non-zero vacuum expectation value (VEV) $v$.) This term is at the heart of a problem the Standard Model is widely believed to suffer from: unnaturalness.

’t Hooft argued that a small parameter of the Lagrangian is naturally small if the Lagrangian has an enhanced symmetry when the parameter vanishes [1]. In that limit, the symmetry forces the radiative corrections to vanish, and so with the parameter non-zero the corrections must be proportional to the parameter itself. This is sometimes referred to as technical naturalness.
For example, small electron masses in quantum electrodynamics are technically natural because chiral symmetry in the massless limit enforces
$$\delta m_e \propto m_e. \quad (1.1)$$
However if we have a fundamental scalar $\phi$ then, outside of conformally invariant theories, allowing its mass-squared to vanish does not enhance the symmetry; indeed explicitly calculating the radiative corrections one finds terms like
$$\delta m_\phi^2 \supset \frac{g^2}{16\pi^2}\,\Lambda^2, \quad (1.2)$$
where $g$ is the coupling of $\phi$ to a particle that can run in a loop of the two-point correlator $\langle \phi(-p)\,\phi(p) \rangle$, and $\Lambda$ is an ultraviolet (UV) cutoff for the divergent integral. The hierarchy problem can be loosely phrased in the following way: if $\Lambda$ is parametrically larger than the renormalised mass-squared $m_\phi^2 = m_{\phi,\rm bare}^2 + \delta m_\phi^2$, the cancellation on the right-hand side (RHS) of this equation between the bare value and radiative corrections requires parametrically large fine-tuning; otherwise we would expect $m_\phi^2 = O\!\left((g^2/16\pi^2)\,\Lambda^2\right)$. However, if $\Lambda$ is to be understood as nothing more than a non-physical regulator to be taken to infinity, our question is not well posed. This leads us to the more precise technical hierarchy problem, which we have when $\Lambda$ is interpreted as a physical cutoff, below which our theory is an effective theory. When we enter the regime of the latter by integrating out heavier degrees of freedom at the scale $\Lambda$, there will generically be corrections of this size to any scalar masses, suppressed by however many loop-factors are necessary to couple the scalar to these heavy particles. Unless there is no new physics coupling to the Higgs at any higher energy scale, we then return to the conclusion of the simple hierarchy problem that the Higgs mass-squared should be $O\!\left((g^2/16\pi^2)\,\Lambda^2\right)$. More concretely, in the Standard Model we have a mass-squared $O\!\left((100\ {\rm GeV})^2\right)$ and a correction $(y_t^2/8\pi^2)\,\Lambda^2$ from the large coupling $y_t$ of the Higgs to the top quark. We would therefore require a cutoff $\Lambda \sim \sqrt{8\pi^2/y_t^2}\,\times 100\ {\rm GeV} = O(1\ {\rm TeV})$.
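The arithmetic of the previous paragraph can be checked numerically; the sketch below is my own, writing the top-loop correction schematically as $(y_t^2/8\pi^2)\Lambda^2$ and taking $y_t \approx 1$.

```python
import math

# Requiring the schematic top-loop correction (y_t^2 / 8 pi^2) Lambda^2 to be
# of order the electroweak mass-squared (100 GeV)^2 fixes the cutoff Lambda.
y_t = 1.0                      # top Yukawa, roughly 1 near the weak scale
m_EW = 100.0                   # GeV, the electroweak mass scale

Lambda_natural = math.sqrt(8 * math.pi**2 / y_t**2) * m_EW
print(round(Lambda_natural))   # ~889 GeV, i.e. O(1 TeV)

# Taking instead Lambda = M_P ~ 1e19 GeV, the required cancellation between the
# bare mass-squared and the correction is of order the correction itself in
# units of (100 GeV)^2:
M_P = 1e19
delta_m2 = (y_t**2 / (8 * math.pi**2)) * M_P**2
fine_tuning = delta_m2 / m_EW**2
print(f"{fine_tuning:.0e}")    # roughly one part in 10^32
```

The precise prefactor depends on the regularisation and on which loops are included; only the orders of magnitude matter here.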
Taking the cutoff instead to be the Planck mass $M_P \sim 10^{19}$ GeV – the highest scale at which the Standard Model without quantum gravity could be valid – requires fine-tuning to one part in $\sim 10^{32}$.

(One might consider new physics at a scale $\Lambda$ which does not couple to the Higgs at any order in perturbation theory; it would therefore not couple to any part of the Standard Model – a fairly uninteresting scenario. Gravity at least must couple to the Higgs, as the latter has mass and energy.)

The scale of new physics fully explaining naturalness can be pushed from 1 TeV to higher energies if the Higgs is a pseudo-Nambu-Goldstone boson (pNGB); I briefly review the idea of these little Higgs models in Appendix A. (Note however that this still requires some new physics at the TeV scale – specifically top partners – to cancel the top quark correction to the Higgs mass.) Full explanations of naturalness, traditionally considered at the TeV scale (rather than at the higher scale permitted by little Higgs), generally fall into one of the three following camps. For the first two I will merely state the idea without explaining details; a nice introduction to all three can be found at [2], and there are several different sets of TASI lecture notes available on these topics.

Firstly, quantum-gravitational effects may become relevant, requiring extra dimensions. The Higgs could be the fifth component of a five-dimensional gauge field; masslessness in the limit of the fifth dimension becoming large and flat means the Higgs is naturally light. Alternatively it could be a fundamental scalar confined to a physical four-dimensional subspace – a brane – with an effective momentum cutoff resulting from the extra dimensions being warped [3].

Secondly, the Higgs may be composite, i.e. (extensions of) the idea of
Technicolour. It is a bound state of a fermion and an anti-fermion (techniquarks), which are confined by a new gauge group. Techniquark condensation (i.e. $\langle \bar q_{TC}\, q_{TC} \rangle$ becoming non-zero) spontaneously breaks the global $SU(2)_L \times SU(2)_R$ symmetry of the techniquarks to the diagonal $SU(2)_V$, giving three would-be Nambu-Goldstone bosons (NGBs) – technipions, by close analogy with regular pions – which the hungry massless $W$ and $Z$ bosons eat instead of the three would-be NGBs in the $H$ doublet in the Standard Model. Technicolour in particular is more attractive with a little Higgs setup than without [4, 5].

Thirdly, and finally moving to the topic of this thesis, a light fundamental scalar may be protected by Supersymmetry (see [6] for an extensive introduction, review and many references; also [7, 8] for preliminaries), henceforth Susy. Susy generators convert bosons into fermions and vice-versa; a supersymmetric theory must therefore be constructed from objects called superfields, in which half the degrees of freedom are bosonic and half are fermionic; each half is said to be the superpartner of the other, with exactly the same quantum numbers excepting spin. In such theories, radiative corrections to the mass-squareds of scalars vanish at every order in perturbation theory: bosonic and fermionic loops always come in pairs, with the same magnitude but opposite sign. With Susy broken, but only softly – that is, where superpartners have different dimensionful ‘couplings’ (including mass) but identical dimensionless couplings – quadratic divergences still vanish, since the associated coupling constant must be dimensionless for the diagram to have dimension two. There are still logarithmically divergent corrections associated with the effective theory where one particle has been integrated out but its superpartner has not; such corrections are proportional to the scale of soft Susy breaking.
In this manner we may have a light Higgs naturally.

Motivations for considering our universe to be supersymmetric are as follows.

1. The (technical) hierarchy problem as discussed.

2. The symmetry group for our physical laws increased in size until we arrived at the Poincaré group. Coleman and Mandula’s no-go theorem [9] ‘proved’ (for a while) that there are no non-trivial extensions of this group, i.e. that any extension would be the direct product of the Poincaré group and internal (non-spacetime) symmetries. Haag, Łopuszański and Sohnius [10] found the loop-hole: non-trivial extensions are allowed but only with fermionic symmetry generators, i.e. Susy. Susy is thus unique, in a fairly profound way, among possible extensions of the Standard Model. Intimately tied up with this feature is the widely believed uniqueness of Susy (or more precisely its generalisation from a global to a local symmetry –
Supergravity) as a setting in which a spin-$\tfrac{3}{2}$ particle may exist (see e.g. [11, 12]).

3. Susy is widely believed to be a requirement of consistent string theory, and the latter is widely believed to be the correct framework for a description of gravity at the quantum level. The necessary existence of the latter is thus a motivation for Susy.

4. The three independent gauge couplings of the Standard Model, when evolved using the renormalisation group (RG) to an energy scale $O(10^{16}\ {\rm GeV})$, almost meet at a point. This suggests the tempting possibility that with extra charged matter added at lower scales, they really do meet (which tends to happen in Susy) and there is only one fundamental gauge group:
Grand Unification. There is strong historical motivation for us to explain more phenomena with fewer principles. Proposing a much enlarged field content, as we’ll see shortly is necessary, to solve the one-parameter problem of making three lines meet at a point sounds like overkill. However in Grand Unified Theories (GUTs) there is an extremely large scale with new physics that couples directly to the Higgs, thus removing any doubt about the applicability of the technical hierarchy problem. Grand Unification thus motivates Susy for two reasons:

(a) the superpartners modify the RG running of the gauge couplings in a way that generally improves the extent to which they meet at a single scale; and

(b) we fall squarely into the scope of the technical hierarchy problem, and with Susy the unavoidable, physical, $O\!\left((10^{16}\ {\rm GeV})^2\right)$ contributions to the Higgs mass-squared sum to zero.

5. There is overwhelming evidence from astrophysical observations for the existence of a new neutral particle which is stable on cosmological time-scales – dark matter, see for example [13] for a review. Often we impose a discrete $Z_2$ symmetry called $R$ parity on Susy theories in order to suppress terms which would strongly violate experimental bounds on proton stability. With $R$ parity, the lightest new particle – with opposite $R$ parity charge to the Standard Model particles – the Lightest Supersymmetric Particle (LSP), is stable; if it is also neutral it is a dark matter candidate. However introducing Susy and the plethora of necessary particles solely for one of them to be dark matter is overkill – this problem may be addressed more minimally than the (technical) hierarchy problem, one example being with the Peccei-Quinn axion which also solves the strong CP problem [14, 15].

6. On July 4th 2012 came the announcement of the discovery, at the $5\sigma$ confidence level, of a new bosonic resonance, with properties as measured so far coinciding closely with the Higgs of the Standard Model, and mass ∼
126 GeV [16, 17]. Now, Susy is a broad collection of different theories with different properties; however what’s common to almost all of them is that the Higgs mass $m_h$ is connected to the electroweak gauge boson masses $\sim M_Z$, and enjoys a special protection from radiative corrections. The authors of [18] studied $m_h$ in both split Susy [19, 20, 21, 22] and high-scale / supersplit Susy [23], both of which are versions of the Minimal Supersymmetric Standard Model (MSSM, see Section 1.3). Split Susy decouples all scalar superpartners of the Standard Model fermions while leaving all fermionic superpartners light (with the possible exception of those of the Higgs, according to taste); high-scale Susy decouples all non-Standard Model particles. The finding of [18] was that even sending superpartner masses to the Planck scale, $m_h$ remains below 157 (143) GeV in split (high-scale) Susy. The importance of this result, beyond showing which models have $m_h \approx$
126 GeV and which do not, is as an illustration of the robustness of the MSSM prediction for $m_h = O(M_Z)$: low-scale Susy gives both electroweak naturalness and $m_h = O(M_Z)$; in decoupling Susy we lose the former and thus part of the motivation, but we are still left with the latter. This is to be contrasted with the Standard Model, for which the prediction was $m_h \lesssim$
700 GeV for perturbative unitarity of
$WW$ scattering at the TeV scale. From this point of view, Susy (or more precisely the MSSM and its not-too-distant cousins) gave a slightly better prediction for $m_h$ than the Standard Model.

Note that points 1 and 4b motivate weak-scale Susy; points 4a and 5 have a slight preference for at least some Susy particles at the weak scale; points 2, 3 and 6 motivate Susy but without a preference for its breaking scale, which could be anywhere up to $M_P$.

Having motivated Susy, I now give a brief explanation of what it is.

1.2 The Superpotential

(See [8, 6] for much more thorough explanations than the sketch given here.) As I have mentioned, a supersymmetric theory should be built from superfields. Superfields can be considered simply as book-keeping devices, as they pair up bosonic and fermionic fields with otherwise identical quantum numbers. More formally they are functions of superspace, $X \equiv (x^\mu, \theta_\alpha, \bar\theta_{\dot\alpha})$, where $\theta_{\alpha=1,2}$ and $\bar\theta_{\dot\alpha=1,2}$ are two-component spinors of anti-commuting Grassmann variables. The most general function of $X$, expanded in powers of $\theta$ and $\bar\theta$ with $x^\mu$-dependent coefficients, is of finite length: Grassmann variables are nilpotent – they vanish when squared. This polynomial is a reducible representation of the group of Susy transformations, and irreducible representations (irreps) can be made by taking subsets of the finite number of terms. Two such irreps are left-handed and right-handed chiral superfields, each of which contains a single complex scalar, a fermion of the denoted chirality, and an unphysical/auxiliary scalar $F$. Expanding a left-handed superfield $\Phi$ in powers of $\theta$ and $\bar\theta$, the coefficient of the $\theta^\alpha \theta_\alpha$ term is $F$; then following the rules of Grassmann integration,
$$\int d^2\theta\ \Phi = F. \quad (1.3)$$
Under a Susy transformation $F$ can be seen to change by a total derivative, and so an action defined from a Lagrangian consisting of $F$-terms will be invariant under Susy transformations.
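The nilpotency and anticommutation that truncate the superspace expansion can be exhibited in a small toy construction of my own (a Jordan–Wigner-style matrix realisation, purely illustrative):

```python
import numpy as np

# Two anticommuting 'Grassmann' variables built from 2x2 blocks:
n = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent block: n @ n = 0
sz = np.diag([1.0, -1.0])                # grading factor enforcing anticommutation
I2 = np.eye(2)

theta1 = np.kron(n, I2)
theta2 = np.kron(sz, n)

nilpotent = np.allclose(theta1 @ theta1, 0) and np.allclose(theta2 @ theta2, 0)
anticommute = np.allclose(theta1 @ theta2 + theta2 @ theta1, 0)
print(nilpotent, anticommute)
# Consequently any power series in theta1 and theta2 truncates: the only
# independent monomials are 1, theta1, theta2 and theta1 @ theta2.
```

This is only a finite-dimensional stand-in for the abstract algebra, but it makes the finite length of the superspace expansion mechanical: every higher monomial vanishes.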
The product of left- (right-) handed superfields itself transforms like a left- (right-) handed superfield under the Susy group, and so the following Wess-Zumino
Lagrangian is supersymmetric:
$$\mathcal{L} = \int d^2\theta\ W(\Phi) + {\rm c.c.}, \quad (1.4)$$
where
$$W(\Phi) \equiv \chi^i \Phi_i + \tfrac{1}{2} M^{ij} \Phi_i \Phi_j + \tfrac{1}{6} y^{ijk} \Phi_i \Phi_j \Phi_k. \quad (1.5)$$
The quantity $W(\Phi)$ is the superpotential – a holomorphic dimension-three function of all the left-handed chiral superfields $\Phi_i$ in the model. Writing the scalar in $\Phi_i$ as $\phi_i$, and with $W(\phi)$ the same function of these scalar fields as $W(\Phi)$ is of the corresponding superfields, the contribution of the superpotential to the regular potential $V(\phi)$ is
$$V(\phi) \supset \sum_i \left| \frac{\partial W(\phi)}{\partial \phi_i} \right|^2, \quad (1.6)$$
where this has come from Eq. (1.4) followed by a replacement of each non-physical $F$ field by the solution of its (algebraic) equation of motion; the terms on the RHS are thus referred to as $F$-terms.

The other contributions to the potential come from so-called $D$-terms, which arise from supersymmetric gauge interactions. Spin-one vector bosons associated with gauge symmetries live inside a different representation of the Susy group called a vector superfield, with a fermionic partner called a gaugino and an auxiliary scalar $D$ field. The coefficients in front of these different component fields in the Lagrangian are fixed by Susy; however in the specific case of a vector superfield for an abelian gauge symmetry, we can add to the Lagrangian an arbitrary extra amount of the associated $D$ field: $\mathcal{L} \supset kD$. This is the Fayet-Iliopoulos term [24]. Arranging all of the $\phi_i$ in the theory into a vector transforming in a (possibly reducible) representation of the full gauge group with generators $T^a$, the $D$-terms are
$$V(\phi) \supset \frac{1}{2} \sum_a \left( g_a\, \phi_i^\dagger T^a_{ij} \phi_j + k^a \right)^2, \quad (1.7)$$
where the gauge coupling $g_a$ is of course common to different generators of the same gauge symmetry, but if the full gauge group is a product of different groups there is one coupling for each of these groups.
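Eq. (1.6) can be made concrete for a one-field Wess-Zumino model; the sketch below is my own, with placeholder coupling values, treating the single scalar $\phi$ directly.

```python
# One-field Wess-Zumino model: W = chi*phi + (M/2) phi^2 + (y/6) phi^3.
# Coupling values below are illustrative placeholders.
chi, M, y = 0.3, 2.0, 0.8

def dW(phi):
    # dW/dphi = chi + M*phi + (y/2) phi^2
    return chi + M * phi + 0.5 * y * phi**2

def V(phi):
    # F-term scalar potential, Eq. (1.6): V = |dW/dphi|^2
    return abs(dW(phi))**2

print(V(0.0))          # = chi**2: a linear term in W lifts the origin
print(V(1.0 + 2.0j))   # well-defined for complex field values, and >= 0
```

The potential is a sum of moduli-squared, so it is non-negative everywhere; its zeros (where all $F$-terms vanish) are the supersymmetric vacua.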
1.3 The MSSM

To supersymmetrise the Standard Model we must place each of its fields (shown in Table 1) into a superfield, and include one more superfield [25, 26], expanding the set of degrees of freedom to those shown in Table 2. Each row (i.e. each superfield) in Table 2 except the second one contains a field as in the Standard Model, with a new bosonic or fermionic partner. The second row, $H_d$ and $\tilde H_d$, is the new superfield. This is required, forcing us into a two-Higgs-doublet model, for two reasons. Firstly the Standard Model mechanism of generating masses for the charged leptons and down-type quarks through a coupling with the complex conjugate of the Higgs doublet is not possible due to the requirement that the superpotential be holomorphic. Secondly, our introduction of the $\tilde H_u$ fermion violates the $SU(2)_L \times U(1)_Y$ gauge anomaly cancellation conditions; they are restored with an extra fermion of opposite hypercharge $Y$, such as $\tilde H_d$.

The superpotential of the $R$ parity conserving MSSM is
$$W_{\rm MSSM} = y_{u,ij}\, \bar u_i Q_j H_u - y_{d,ij}\, \bar d_i Q_j H_d - y_{e,ij}\, \bar e_i L_j H_d + \mu H_u H_d, \quad (1.8)$$
where, with the standard abuse of notation, the same symbol is used for a superfield and its scalar component (for the Higgs fields) or fermionic component (for the Standard Model fermions). In Eq. (1.8) $SU(2)_L$ and $SU(3)_c$ gauge indices within each term are implicitly contracted to form a singlet. The fermionic partners of the Higgs scalars and vector bosons are called Higgsinos and gauginos respectively, with the latter consisting of a bino, winos and a gluino. The scalar partners of Standard Model fermions take the name of the corresponding fermion with the letter ‘s’ prepended: for example the left- and right-handed stops $\tilde t_{L,R}$ are the scalar partners of the left- and right-handed chiralities of the top quark, and we define the sfermions and sleptons etc. similarly.
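The anomaly-cancellation argument for the second Higgs doublet can be verified arithmetically. The check below is my own, using the hypercharge convention of Table 1: the $U(1)_Y$ and $U(1)_Y^3$ traces over chiral fermions vanish in the Standard Model, fail once the $\tilde H_u$ fermion alone is added, and vanish again with $\tilde H_d$ included.

```python
from fractions import Fraction as Fr

# Per-generation chiral fermions as (hypercharge, number of components),
# where components = colour multiplicity x weak-isospin multiplicity.
sm_fermions = [
    (Fr(1, 6), 6),    # Q: 3 colours x 2 isospin components
    (Fr(-2, 3), 3),   # u-bar
    (Fr(1, 3), 3),    # d-bar
    (Fr(-1, 2), 2),   # L
    (Fr(1, 1), 1),    # e-bar
]

def traces(fields):
    trY = sum(y * mult for y, mult in fields)
    trY3 = sum(y**3 * mult for y, mult in fields)
    return trY, trY3

higgsino_u, higgsino_d = (Fr(1, 2), 2), (Fr(-1, 2), 2)
sm = traces(sm_fermions)
with_u = traces(sm_fermions + [higgsino_u])
with_both = traces(sm_fermions + [higgsino_u, higgsino_d])
print(sm, with_u, with_both)   # (0, 0), non-zero, (0, 0)
```

Only the hypercharge (and mixed gravitational) traces are checked here; the full anomaly conditions also involve the non-abelian representations, but the pattern – spoiled by $\tilde H_u$, restored by $\tilde H_d$ – is the same.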
Non-Standard Model particles are collectively called sparticles.

Table 2: The field content of the MSSM, with representations under the $SU(3)_c \times SU(2)_L \times U(1)_Y$ gauge group; $i = 1, 2, 3$.

scalars / fermions: $H_u$, $\tilde H_u$ $(\mathbf{1}, \mathbf{2}, \tfrac{1}{2})$; $H_d$, $\tilde H_d$ $(\mathbf{1}, \mathbf{2}, -\tfrac{1}{2})$; $\tilde Q_i$, $Q_i$ $(\mathbf{3}, \mathbf{2}, \tfrac{1}{6})$; $\tilde{\bar u}_i$, $\bar u_i$ $(\bar{\mathbf{3}}, \mathbf{1}, -\tfrac{2}{3})$; $\tilde{\bar d}_i$, $\bar d_i$ $(\bar{\mathbf{3}}, \mathbf{1}, \tfrac{1}{3})$; $\tilde L_i$, $L_i$ $(\mathbf{1}, \mathbf{2}, -\tfrac{1}{2})$; $\tilde{\bar e}_i$, $\bar e_i$ $(\mathbf{1}, \mathbf{1}, 1)$
vector bosons / fermions: $g$, $\tilde g$ $(\mathbf{8}, \mathbf{1}, 0)$; $W$, $\tilde W$ $(\mathbf{1}, \mathbf{3}, 0)$; $B$, $\tilde B$ $(\mathbf{1}, \mathbf{1}, 0)$

In the two, complex, scalar Higgs doublets there are eight degrees of freedom. Three of these are the would-be NGBs that become the longitudinal polarisations of the massive electroweak gauge bosons; the remaining five are physical scalars: $h$ and $H$ ($CP$-even and electrically neutral, $h$ defined to be the lighter), $A$ ($CP$-odd and neutral) and $H^\pm$ (electrically charged). The couplings of $h$ to Standard Model particles become Standard Model-like in the decoupling limit, where $H, A, H^\pm$ are all several times heavier than the $Z$ boson; my earlier point in the final bullet point of Section 1.1 about ‘the’ Higgs of the MSSM being robustly of mass $O(M_Z)$ referred specifically to $h$. In this work, motivated by the Standard Model-like couplings seen so far and non-observation of $A$ or $H^\pm$, in the context of the MSSM I will always consider the ∼
126 GeV resonance to be $h$. At the time of writing, the possibility that this resonance is $H$ is still being debated – see [27, 28, 29].

With unbroken Susy, the boson and fermion in the same superfield have the same mass. Non-observation of any superpartner degenerate with its Standard Model partner means we have to add in masses for all the superpartners by hand. At first glance this seems a lot to swallow, rather than simply conceding that the model is false. However the subset of all particles in the MSSM that we have discovered so far is exactly the subset of particles whose masses only arise from electroweak symmetry breaking (EWSB), and the undiscovered subset is exactly the group of particles whose masses are not tied to EWSB. This fact, which we have not put in by hand, is highly encouraging.

The general tree-level vanishing [30] of the supertrace,
$${\rm STr}\, M^2 \equiv \sum_{\rm all\ particles} (-1)^{2s}\, (2s+1)\, M^2 = 0, \quad (1.9)$$
where $M$ is a particle’s mass and $s$ its spin, means that spontaneous breaking of Susy from within the MSSM will be phenomenologically unacceptable – some superpartners will be lighter than their Standard Model partners, whereas we need them all to be heavier. Susy breaking must therefore be a higher-order/quantum effect, mediated to the MSSM from elsewhere – the hidden sector. The three main approaches to achieving the mediation are through gravitational interactions, extra dimensions, or gauge interactions (see Part III).

Susy breaking results in mass-squareds $m_\phi^2$ and Majorana masses $M_{\tilde\lambda_i}$ for all of the scalars $\phi = H_u, H_d, \tilde Q_i, \tilde u_i, \tilde d_i, \tilde L_i, \tilde e_i$ and gauginos $\tilde\lambda_i = \tilde B, \tilde W, \tilde g$ respectively in Table 2. It also gives the trilinear mixing terms $\mathcal{L} \supset -\tilde{\bar u}_i\, a_{u,ij}\, \tilde Q_j H_u + \tilde{\bar d}_i\, a_{d,ij}\, \tilde Q_j H_d + \tilde{\bar e}_i\, a_{e,ij}\, \tilde L_j H_d + {\rm c.c.}$; each $a_{ij}$ matrix is usually assumed to be proportional to the corresponding Yukawa matrix.
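The effect of the stop trilinear term can be seen by diagonalising the $2\times 2$ stop mass-squared matrix; the numbers below are placeholders of my own choosing, with off-diagonal mixing $m_t X_t$ (where $X_t = A_t - \mu/\tan\beta$, anticipating Section 2) and the small electroweak $D$-term contributions neglected.

```python
import numpy as np

m_t = 173.0                              # GeV
m_Q2, m_u2 = 800.0**2, 700.0**2          # soft masses-squared (GeV^2), illustrative
X_t = 2000.0                             # stop mixing parameter (GeV), illustrative

# Tree-level stop mass-squared matrix in the (t_L, t_R) basis:
M2 = np.array([[m_Q2 + m_t**2, m_t * X_t],
               [m_t * X_t,     m_u2 + m_t**2]])
eig = np.linalg.eigvalsh(M2)             # ascending eigenvalues
m_st1, m_st2 = np.sqrt(eig)              # physical stop masses
print(m_st1, m_st2)                      # mixing pushes the eigenvalues apart
```

The off-diagonal entry repels the eigenvalues: one stop is driven below both soft masses and the other above, which is why large $A_t$ both helps the Higgs mass and feeds into naturalness, as discussed in Section 2.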
In the basis where these are diagonal, we have for example $a_{u,33} = y_t A_t$; the stop trilinear mixing parameter $A_t$ gives potentially large corrections to the physical Higgs mass as we will see in the following section. The final term arising from Susy breaking is the dimension-two mixing term for the Higgs scalars $\mathcal{L} \supset -b H_u H_d + {\rm c.c.}$.

In addition to the Susy breaking masses there is a single supersymmetric mass parameter in the MSSM, for the two Higgs superfields: the $\mu$ term. This completes the list of dimensionful terms in the MSSM. Those relevant for the two $CP$-even electrically neutral scalars $H_u$ and $H_d$ are:
$$V \supset \begin{pmatrix} H_u & (H_d)^* \end{pmatrix} \begin{pmatrix} |\mu|^2 + m_{H_u}^2 & -b \\ -b & |\mu|^2 + m_{H_d}^2 \end{pmatrix} \begin{pmatrix} (H_u)^* \\ H_d \end{pmatrix} \quad (1.10)$$
For EWSB one linear combination of $H_u$ and $H_d$ must have a negative mass-squared to destabilise the origin. The resulting VEV then lies partly along the $H_u$ direction and partly along the $H_d$ direction, and is characterised by $\tan\beta = v_u/v_d$. It is instructive (and motivated – see Section 2) to consider large $\tan\beta \gg$
1; in this limit $H_u$ alone corresponds to the single $CP$-even and neutral degree of freedom in the Standard Model $H$ doublet. Its (negative) mass-squared – the upper-left entry of the matrix in Eq. (1.10) – then leads to all Standard Model masses. For example we have
$$-\tfrac{1}{2} M_Z^2 = m_{H_u}^2 + |\mu|^2 + O\!\left((\tan\beta)^{-2}\right). \quad (1.11)$$
An appealing feature of Susy, once it is broken, is the possibility of radiative EWSB. Even if $m_{H_u}^2$ is positive at the high scale $\Lambda$ where mediation of Susy breaking to the visible sector takes place, $H_u$ couples to the top sector via the large top Yukawa coupling, resulting in a strong tendency for $m_{H_u}^2$ to be pushed negative by RG evolution from $\Lambda$ to the electroweak scale. This strong radiative correction of $m_{H_u}^2$ motivates us to write it explicitly as
$$-\tfrac{1}{2} M_Z^2 = m_{H_u}^2(\Lambda) + \delta m_{H_u}^2 + |\mu|^2 + O\!\left((\tan\beta)^{-2}\right). \quad (1.12)$$
With this equation we are ready to discuss naturalness.

Probably the most common measure of naturalness or fine-tuning is the Barbieri-Giudice measure [31], which can be calculated for UV-complete models where a set of fundamental parameters $p_i$ set all of the masses at the scale $\Lambda$. For a given point in $p_i$ space that results in the observed value of $M_Z$ (through Eq. (1.12)), one calculates derivatives of $\log M_Z^2$ with respect to $\log p_i$; $M_Z$ is taken to be natural if all such derivatives are $\lesssim O(1)$ – doubling a fundamental parameter at most doubles the resulting $M_Z^2$. If one of the derivatives is considerably larger than this, the associated $p_i$ needs to be finely tuned to produce the observed $M_Z$. However this measure does not penalise a situation that we should still regard as unnatural.

If a single parameter $p$ sets both $m_{H_u}^2(\Lambda)$ and all of the (most important) masses involved in the radiative correction $\delta m_{H_u}^2$, it could set these in such a pattern that $m_{H_u}^2(\Lambda)$ and $\delta m_{H_u}^2$ happen to cancel each other out even if each term separately is very large, and we have insensitivity of $M_Z$ to the parameter $p$. This is known as focus-point Susy.
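The Barbieri-Giudice measure can be sketched numerically with the schematic tree-level relation of Eq. (1.11); all parameter values below are illustrative placeholders of my own (in GeV$^2$).

```python
def MZ2(m_Hu2, mu2):
    # Schematic Eq. (1.11): M_Z^2 = -2 (m_Hu^2 + |mu|^2), large tan(beta).
    return -2.0 * (m_Hu2 + mu2)

def bg_sensitivity(p, MZ2_of_p, eps=1e-6):
    """d log M_Z^2 / d log p by symmetric finite differences."""
    up, down = MZ2_of_p(p * (1 + eps)), MZ2_of_p(p * (1 - eps))
    return (up - down) / (2 * eps * MZ2_of_p(p))

# A 'natural' point: both terms are of order M_Z^2 itself...
nat = bg_sensitivity(-6000.0, lambda m: MZ2(m, 1850.0))
# ...versus a tuned point where two large terms cancel to leave M_Z^2:
tuned = bg_sensitivity(-1.0e6, lambda m: MZ2(m, 0.99585e6))
print(abs(nat), abs(tuned))   # O(1) versus O(100)
```

Both points reproduce the same $M_Z^2 \approx (91\ {\rm GeV})^2$; only the sensitivity distinguishes them, which is exactly what the measure is designed to capture (and, per the focus-point caveat in the text, what it can miss when the cancellation is built into the parametrisation itself).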
However the cancellation depends sensitively not only on this mass-setting pattern but also on the value of the top Yukawa, and weakly on the scale $\Lambda$; unless these three are linked by some symmetry the cancellation is accidental. A natural theory, by contrast, does not have large cancellations except those enforced by symmetries. (High sensitivity to the imprecisely known top mass also makes it uncertain whether such models have EWSB at all [32].)

(The squared top Yukawa is a prefactor to $\delta m_{H_u}^2$ – see Section 2.1 – so cancellation between $m_{H_u}^2(\Lambda)$ and $\delta m_{H_u}^2$ to 1 part in $N$ happens only for an ad hoc tuning of the top quark mass to 1 part in $2N$.)

A stricter criterion for naturalness is simply to ask that none of the terms contributing to the right-hand side of Eq. (1.12) are dramatically larger than $M_Z^2$, in the manner of Kitano and Nomura [33]. This has the further advantage of allowing bottom-up deductions to be made, i.e. without knowing the underlying high-scale theory, for example as was done in [34] to lay out requirements on a natural spectrum.

The stop is the chief contributor to $\delta m_{H_u}^2$, and thus we require light stops for naturalness. We can also see this intuitively – the Standard Model Higgs couples most strongly to the top, so to protect it from strong radiative corrections we want Susy broken as weakly as possible in the top sector. Searches at the Large Hadron Collider (LHC), however, exclude squarks up to masses exceeding 1 TeV in the strongest cases [35]; these are when all three generations of squarks are degenerate and decay to hard (see the glossary) jets and a light LSP carrying away large missing transverse energy $\slashed{E}_T$.

One way to ease the tension between such bounds on squarks and the desire for light stops is to change the way the squarks decay, for example to an LSP close in mass to the squarks themselves: the jets emitted in the decays are then soft (see the glossary), and may fail to be detected or simply be less visible over the large backgrounds (see the glossary) for soft jets.
The final state (see the glossary) then has no large visible or invisible transverse energy at leading order, but may still be constrained due to the possibility of recoil against hard initial state radiation. See [36] ([37]) where hard emission of a jet (photon) by the initial state in such circumstances is studied.

A second method of weakening these bounds is, rather than removing the visible energy, to remove the $\slashed{E}_T$ by having the LSP decay through an $R$ parity violating coupling: see for example [38]. Hadronic $R$ parity violating decays are a prime example of a signal with rich jet substructure – giving fat jets with large masses and containing many subjets [39]. In [40] such decays of boosted hadronising gluinos are found to show soft-radiation patterns as expected from colour singlets, with generalisations of the $N$-subjettiness [41] and pull [42] variables able to exploit this.

Thirdly, rather than hide the decay of the squarks one can suppress their production cross-section while still keeping them light, by having the gluino be a Dirac fermion rather than Majorana (requiring an extension of the MSSM field content). Squark pair-production by t-channel gluino exchange is then suppressed – see [43].

Finally, what is undoubtedly the most effective way to keep stops light is to drop the assumption of approximate mass degeneracy between the three generations of squarks, having the first two generations heavy and the third generation light. Constraints on the latter alone are considerably weakened due to direct production cross-sections suppressed by parton distribution functions, and the less distinctive final state signals that may result, typically being too similar to the large Standard Model top backgrounds.
Indeed stops decaying to tops and stable neutralinos can still be as light as or lighter than the top quark [44, 45, 46, 47], and a large number of alternative decays are possible, particularly if one extends the MSSM.

The more minimal MSSM in which we keep light only the particles important for naturalness (including the stops) and decouple the rest (including the first two generations of squarks) is referred to as Effective or Natural
Susy, introduced in [48, 49] and revisited more recently in [34, 50]. The authors of [34] argued that the inter-generational squark mass splitting should be a feature of the mediation of Susy breaking rather than an RG-running effect, since the same coupling that drives the latter effect also gives strong running of $m_{H_u}^2$ – precisely what we are trying to avoid.

(An exception would be a heavy right-handed sbottom at large $\tan\beta$, which would cause the left-handed stop to run lighter than the left-handed sup and scharm without driving the running of $m_{H_u}^2$. However it is the integral of the running stop masses that gives $\delta m_{H_u}^2$, so their starting heavy but running light only half solves the problem; furthermore we need both stops to be light, not just one. I therefore do not consider this possibility to contradict the argument of [34].)

Models which achieve such mediation exist. In one example a flavour symmetry is broken sequentially from $SU(3)$ to $SU(2)$ to nothing; when this happens above the Susy-breaking mediation scale $\Lambda$ and both gauge groups are involved in the mediation, a light third generation results. In [52] the Standard Model gauge group splits, at high scales, into one $SU(5)$ group which mediates Susy breaking and one which does not (with a bifundamental link field obtaining a VEV to break to the diagonal group at low scales). The first two generations are charged only under the mediating group; the third generation and Higgs fields are charged only under the non-mediating group, and thus couple less strongly to the source of Susy breaking.

The conclusion is that light-stop scenarios are desirable for naturalness, can be realised in concrete models, and are not excluded by direct searches. They are, however, in some tension with another experimental result – the putative Higgs signal of mass ∼
126 GeV. I discuss this in the following section.
2 Optimal Naturalness
This section is based on my single-authored work [53]; the text here follows it closely.
In the MSSM the tree-level $m_h$ is bounded from above by $M_Z |\cos 2\beta|$, and saturates this bound in the aforementioned decoupling limit. Moderate to large $\tan\beta$ (say 5 or greater) helpfully raises $|\cos 2\beta|$ (to 0.
92 or greater). Even then, as has long been known, some substantial combination of stop mass- and stop mixing-induced corrections to $m_h$ is needed to lift it above the lower bound of 114.4 GeV set by LEP. A ∼
126 GeV Higgs requires these corrections to be even more substantial, with correspondingly worse implications for naturalness. Investigations of supersymmetric Higgs bosons in light of the ∼
126 GeV discovery have become a field in its own right; at the time of writing [53], the interplay of parameters for such a Higgs mass when looking agnostically at the MSSM had been studied in [54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71]. Other works at that time had shown the implications of such a Higgs mass in particular models of Susy breaking, in extensions of the MSSM, or else had focused predominantly on issues relating to the decays of such a Susy Higgs into different final states; the literature concerning all of these topics has continued to grow since.
The dominant radiative correction to the physical Higgs mass is
$$\delta m_h^2 \approx \frac{3}{4\pi^2} \frac{m_t^4}{v^2} \left[ \log\!\left(\frac{M_S^2}{m_t^2}\right) + \frac{X_t^2}{M_S^2}\left(1 - \frac{X_t^2}{12\, M_S^2}\right) \right], \quad (2.1)$$
where $v = 174$ GeV, $X_t = A_t - \mu\cot\beta$ and $M_S$ is an average of the two stop masses. The second term in square brackets is the threshold correction to the Higgs self coupling from integrating out both stops, and the first is the Standard Model Higgs self coupling beta function integrated (at leading log order) from that threshold down to the top mass scale, where the running Higgs mass coincides closely with the pole Higgs mass. From the first term we see how large stop masses (which we don’t want for naturalness) help to boost the Higgs mass (which we do want for $m_h \sim$
126 GeV). The second term – the mixing term – comes to our aid: it too can be used to raise $m_h$. Maximal mixing refers to this term being maximal; it therefore allows minimal stop masses for a given $m_h$ and thus is naively the most natural arrangement, motivating much attention in the aforementioned literature and elsewhere. However the mixing term, like the stop masses, also contributes to unnaturalness as we will see, and so a priori it is not clear that maximising it is the best thing to do.

Unnaturalness arises from excessive running of $m_{H_u}^2$. At one loop, the latter is [72]:
$$16\pi^2 \frac{d}{dt} m_{H_u}^2 = 6 y_t^2 \left( m_{Q_3}^2 + m_{u_3}^2 + A_t^2 \right)$$
$$\qquad\qquad + 6 y_t^2\, m_{H_u}^2 - 6 g_2^2 M_2^2 - \tfrac{6}{5} g_1^2 M_1^2 + \tfrac{3}{5} g_1^2\, {\rm Tr}[Y_{\tilde f}\, m_{\tilde f}^2], \quad (2.2)$$
where $t = \log(Q/\Lambda)$, with $\Lambda$ the high/mediation scale at which the soft Susy-breaking mass terms are generated. One can roughly neglect the terms of the second line; keeping only the large stop-sector terms, taking these to be constant and integrating gives the leading log expression
$$\delta m_{H_u}^2 \approx -\frac{3 y_t^2}{8\pi^2} \left( m_{Q_3}^2 + m_{u_3}^2 + A_t^2 \right) \log\!\left(\frac{\Lambda}{M_S}\right), \quad (2.3)$$
at a scale $M_S$ – the scale at which Eq. (1.11) holds most accurately [74, 75, 76].

Before connecting Eq. (2.3) to the physical Higgs mass $m_h$, I note that it tells us something about stop naturalness on its own. It can be re-written in terms of the stop mass eigenvalues: taking the tree-level stop mass matrix without the subdominant electroweak $D$-term contributions, we have
$$\delta m_{H_u}^2 \approx -\frac{3 y_t^2}{8\pi^2} \left[ m_{\tilde t_1}^2 + m_{\tilde t_2}^2 - 2 m_t^2 + \frac{\left(m_{\tilde t_1}^2 - m_{\tilde t_2}^2\right)^2}{m_t^2} \cos^2\theta_{\tilde t} \sin^2\theta_{\tilde t} \right] \log\!\left(\frac{\Lambda}{M_S}\right), \quad (2.4)$$
where $\theta_{\tilde t}$ is the stop mass mixing angle. In [64] it was argued that the final term in square brackets motivates $m_{\tilde t_1} \sim m_{\tilde t_2}$ for naturalness; then since the left-handed stop shares a mass with the left-handed sbottom ($m_{\tilde Q_3}$), non-observation of sbottoms translates into constraints on both stops. However in Eq.
(2.3) we can define the average stop mass by 2M_S² = m_Q² + m_u², and there is explicit insensitivity to m_Q² − m_u², which will split the mass eigenvalues. The discrepancy arises from the neglected cos²θ_t̃ sin²θ_t̃ factor in Eq. (2.4), which goes to zero as we pull apart m_Q and m_u. We see that in fact the two mass eigenvalues can be arbitrarily split without naturalness penalty.

I now want to find what Eq. (2.3) tells us in conjunction with the physical Higgs mass-squared – Eq. (2.1) plus the tree level value ∼ (M_Z cos 2β)². Firstly, note from Eq. (1.12) that the value of |µ| required for the correct M_Z depends on the unknown high-scale value of m²_{H_u}, and |µ| enters the physical Higgs mass through X_t = A_t − µ cot β. However, (a) the aim for natural Susy is |µ|/(100 GeV) ≲ a few, (b) a large Higgs mass ∼ 126 GeV needs tan β ≳ O(5), and (c) later we will arrive at A_t ≳ O(1 TeV). Thus we expect X_t to be very close to A_t without knowing the precise value of µ.

Footnote: The effect of m²_{H_u} on its own running is small if the leading log approximation is valid (i.e. (one-loop factor) × log(Λ/M_S) < 1). Moreover, for the running to drive m²_{H_u} negative, the m²_{H_u} term in the beta function must be appreciably smaller than the other terms. The electroweak couplings are somewhat smaller than y_t. While the trace term is a sum over all scalars, it couples only through g₁² and is 'relatively small in most known realistic models' [6]. For example it vanishes at the high scale in all models of General Gauge Mediation [73], and in all models with universal scalar masses (such as minimal supergravity), since Tr[Y] = 0. Furthermore the running of the trace is proportional to the trace itself. The wino term on the other hand may be appreciable [68], but here I will be differentiating with respect to stop-sector terms, so this effect drops out.

Secondly, we see that while the physical Higgs mass depends only on the average stop mass M_S, δm²_{H_u} depends on both M_S² and the precise linear combination m_Q² + m_u². We then must choose a definition of M_S. Often this is taken to be a geometric mean; the minimum of (m_Q² + m_u²) for constant (m_Q² × m_u²)^(1/2) then provides weak motivation for m_Q = m_u. If instead the linear average M_S² ≡ (m_Q² + m_u²)/2 is chosen, the orthogonal linear combination is entirely free, as previously mentioned. A further alternative would be to take an average of the mass eigenvalues m_t̃₁,₂: the dependence of δm²_{H_u} on the underlying parameters m_Q, m_u, A_t then shifts very slightly but becomes much less transparent, as we have already seen. We can appeal to the limit log(Λ/M_S) ≫ log(m_t̃₂/m_t̃₁), in which the former log and thus δm²_{H_u} has no sensitivity to how M_S is defined. We can thus take the aforementioned linear average, so that the functions δm_h² and δm²_{H_u} depend on the stop sector simply through M_S² and A_t². Note that though other particles besides the stop make smaller contributions to both the physical Higgs mass and unnaturalness, below I will differentiate with respect to stop-sector parameters and so this effect drops out.

Having now made δm_h² and δm²_{H_u} functions of the stop sector through the parameters M_S² and A_t² only, we can find optimal naturalness – maximal δm_h² for minimal |δm²_{H_u}| – with Lagrange constrained optimisation. The solution of

∂/∂(M_S²) (δm_h² − λ δm²_{H_u}) = ∂/∂(A_t²) (δm_h² − λ δm²_{H_u}) = 0 ,   (2.5)

where λ is the unspecified Lagrange multiplier, gives the most natural ratio x ≡ A_t²/M_S², with the scale of one of these two dimensionful parameters freely chosen thereafter.
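The constrained optimisation above can be checked numerically. The sketch below is my own illustration, not part of the original analysis: using the one-loop expressions (with overall prefactors dropped, since they cancel in the optimal ratio), for each mixing ratio x = A_t²/M_S² it solves for the M_S that holds the stop contribution to m_h² fixed, then finds the x minimising |δm²_{H_u}|. The reference point (M_S = 1 TeV, x = 5), Λ = 10^10 GeV and M_t = 173.1 GeV are illustrative assumptions.

```python
import math

MT2 = 173.1**2  # top mass squared (GeV^2); illustrative value
LAM = 1e10      # mediation scale Lambda in GeV; illustrative value

def dmh2(s, a):
    # Stop correction to m_h^2 up to a positive prefactor, Eq. (2.1) with X_t -> A_t:
    # log(M_S^2/M_t^2) + x(1 - x/12), where x = A_t^2/M_S^2, s = M_S^2, a = A_t^2.
    x = a / s
    return math.log(s / MT2) + x * (1.0 - x / 12.0)

def dmHu2(s, a):
    # |delta m^2_{H_u}| up to a positive prefactor, Eq. (2.3):
    # (2 M_S^2 + A_t^2) log(Lambda/M_S).
    return (2.0 * s + a) * math.log(LAM / math.sqrt(s))

def s_for_fixed_higgs(x, target):
    # Bisect for s = M_S^2 such that dmh2(s, x*s) = target (monotonic in s).
    lo, hi = 200.0**2, 20000.0**2
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if dmh2(mid, x * mid) < target:
            lo = mid
        else:
            hi = mid
    return lo

target = dmh2(1000.0**2, 5.0 * 1000.0**2)  # fix the Higgs correction at a reference point
tunings = []
for i in range(1, 1100):
    x = i / 100.0
    s = s_for_fixed_higgs(x, target)
    tunings.append((dmHu2(s, x * s), x))
best_tuning, best_x = min(tunings)

# Compare with the stationary ratio 2 + sqrt((10L - 16)/(L - 1)) obtained
# analytically from the same two expressions, with L = log(Lambda^2/M_S^2)
# evaluated at the numerical optimum.
s_opt = s_for_fixed_higgs(best_x, target)
L = math.log(LAM**2 / s_opt)
x_closed = 2.0 + math.sqrt((10.0 * L - 16.0) / (L - 1.0))
print(round(best_x, 2), round(x_closed, 2))
```

The brute-force scan and the closed-form stationarity condition agree to within the grid spacing, landing a little below the maximal-mixing value x = 6.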
Using δm_h² at one-loop (2.1) and δm²_{H_u} at leading log (2.3), I find

x_natural ≡ (A_t²/M_S²)_natural = 2 + √( (10L − 16)/(L − 1) ) ∼ 5 ,   (2.6)

with L = log(Λ²/M_S²). The solution is real for L > 8/5, asymptotes to 2 + √10 ≈ 5.16 as L → ∞, and is already 5 for L = 7 (i.e. Λ/M_S = 33) – thus it is essentially constant over phenomenologically interesting mediation scales and stop masses. That the optimal x should be close to six is not surprising: using the logarithmic stop mass term to boost the Higgs mass requires exponentially heavy stops and thus exponentially bad fine-tuning; whereas the stop mixing term contribution to m_h² can be large even for small A_t and M_S, provided x is suitably chosen. x must in fact be less than the maximal mixing value x = 6: decreasing it from 6 to 6 − δ reduces the physical Higgs mass by O(δ²) but increases naturalness by O(δ). We see that almost maximal mixing is optimal.

Footnote: Unless one enters the realm of split or high-scale Susy [18].

Footnote: Λ/M_S is very large in all but the most extreme cases; and m_t̃₂/m_t̃₁ cannot be large if we integrate out both stops together to calculate the Higgs mass.

Higher order effects of the stop on the physical Higgs mass can be taken into account with the two-loop expression of [77]:

δm_h² = (3 m_t⁴ / 4π² v²) [ (1/2) X̃_t + (1 + D) T + ε ( X̃_t T + T² ) ] ,   (2.7)

with

m_t = M_t / (1 + (4/3π) α₃(M_t)) ,
α₃(M_t) = α₃(M_Z) / (1 + (23/12π) α₃(M_Z) log(M_t²/M_Z²)) ,
T = log(M_S²/M_t²) ,
D = −(M_Z²/2m_t²) cos²2β ,
X̃_t = (2A_t²/M_S²)(1 − A_t²/(12 M_S²)) , and
ε = (1/16π²) ( 3m_t²/2v² − 32π α₃(M_t) )

(which also includes the smaller, soft-mass-independent, one-loop D-term O(M_Z² m_t²) of [78]). The optimisation, Eq. (2.5), goes through exactly as before. The solution is the positive root of the following equation (which recovers Eq. (2.6) as D, ε → 0):

[1 + 2εT − L(1 + 2εT − ε)] x_natural² + 4[−1 − 2εT + L(1 + 2εT − 3ε)] x_natural + 6[−2 − 4εT + L(1 + 2εT − D)] = 0 .   (2.8)

I show the variation of this solution with M_S in Fig. 1; dependence on tan β ∈ [5,
45] and the top quark mass uncertainty is negligible.

Two other approaches are trivially equivalent to using Eq. (2.5) to find x_natural. Firstly, one could invert the δm²_{H_u} expression to find the function M_S(x)|_{δm²_{H_u}} for how the stop mass must vary as a function of x in order to keep δm²_{H_u} constant: from Eq. (2.3) this monotonically decreasing function is

M_S(x)|_{δm²_{H_u}} = Λ exp( (1/2) W_{−1}( −16π² |δm²_{H_u}| / (3 y_t² (2 + x) Λ²) ) ) ,   (2.9)

where W_{−1}(...) is the lower branch of the Lambert W function. The one-parameter function δm_h²(x, M_S(x)|_{δm²_{H_u}}) then gives the range of Higgs masses possible for a given δm²_{H_u}; the maximum occurs at x_natural.

Figure 1: The most natural ratio x ≡ A_t²/M_S², obtained from maximising the Higgs mass at one loop (red, dashed) and two-loop (blue, solid) for constant electroweak symmetry breaking term δm²_{H_u}, as a function of the average stop mass M_S.

Secondly, one could invert the δm_h² expression to find the function M_S(x)|_{δm_h} for how the stop mass varies as a function of x for a constant Higgs mass. This function is easily obtained from Eq. (2.7), which is a quadratic equation in log(M_S²/M_t²); I plot it in the left panel of Fig. 2. The one-parameter function δm²_{H_u}(x, M_S(x)|_{δm_h}) then gives the range of δm²_{H_u} possible for a given Higgs mass, depending on the amount of stop mixing (x) one uses to achieve that Higgs mass. The minimum occurs at x_natural. I plot this in the right panel of Fig. 2, normalised to M_Z² for a transparent indication of fine-tuning.

The different colours (line styles) in Fig. 2 correspond to different m_h (tan β); see the caption. We see that the greater the m_h we require (and the lower tan β is), the larger x must be to even find a solution: no-mixing scenarios are more limited in the Higgs mass they can reach before the m_h expression (2.7) breaks down.
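The inversion in Eq. (2.9) can be verified with a short numerical round trip. The sketch below is my own illustration under assumed parameter values (y_t = 0.94, Λ = 10^10 GeV, x = 5, M_S = 1 TeV): it implements the lower Lambert-W branch by Newton iteration, inverts |δm²_{H_u}| = (3y_t²/8π²)(2 + x) M_S² log(Λ/M_S) for M_S, and checks that the original M_S is recovered.

```python
import math

def lambert_w_lower(z, tol=1e-12):
    # Lower real branch W_{-1}(z), solving w * exp(w) = z with w <= -1,
    # for z in (-1/e, 0). Newton iteration from the asymptotic guess
    # w ~ log(-z) - log(-log(-z)), accurate when |z| is small.
    assert -1.0 / math.e < z < 0.0
    w = math.log(-z) - math.log(-math.log(-z))
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - z) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol * max(1.0, abs(w)):
            break
    return w

def ms_for_fixed_tuning(x, dmHu2_abs, lam, yt=0.94):
    # Invert |dm^2_{H_u}| = (3 yt^2 / 8 pi^2) (2 + x) M_S^2 log(lam/M_S)
    # for M_S on the M_S << lam branch, as in Eq. (2.9): with y = 2 log(M_S/lam),
    # y * exp(y) = -16 pi^2 |dm^2_{H_u}| / (3 yt^2 (2 + x) lam^2).
    z = -16.0 * math.pi**2 * dmHu2_abs / (3.0 * yt**2 * (2.0 + x) * lam**2)
    return lam * math.exp(0.5 * lambert_w_lower(z))

# Round trip: compute the tuning at a chosen stop mass, then invert it back.
lam, yt, x, ms_in = 1e10, 0.94, 5.0, 1000.0
d = (3.0 * yt**2 / (8.0 * math.pi**2)) * (2.0 + x) * ms_in**2 * math.log(lam / ms_in)
ms_out = ms_for_fixed_tuning(x, d, lam, yt)
print(ms_out)
```

In practice `scipy.special.lambertw(z, -1)` could be used instead; the hand-rolled Newton solver just keeps the sketch dependency-free.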
Indeed as was noted in [58], even using the program FeynHiggs [79, 80, 81, 82] for a higher-order calculation, in the no-mixing x = 0 scenario breakdown occurs before one can reach m_h ≈ 126 GeV and one must resort to a matching of the MSSM onto the Standard Model (with RG evolution from M_S to M_t resumming the large logs of this ratio of scales).

The left panel of Fig. 2 illustrates the obvious fact that the smallest stop mass for a given Higgs mass occurs at exactly maximal mixing x = 6. Close inspection of the right panel shows the more subtle point that the lowest fine-tuning occurs at almost maximal mixing. We see this from the flatness of the curve for x ∈ [5, 6].

Footnote: The Lambert W function is the multivalued function W(z) satisfying z = W e^W; the lower branch W_{−1} is defined for z ∈ [−1/e, 0), taking values in (−∞, −1].

Figure 2: x axis: x ≡ A_t²/M_S². The left panel shows the average stop mass M_S required for constant Higgs mass m_h; the right panel shows the fine-tuning Δ ≡ |δm²_{H_u}|/M_Z² that results. The mediation scale Λ is taken to be low; a higher mediation scale would increase the fine-tuning by the ratio of the logarithms in Eq. (2.3). Red curves (the lowest) have m_h = 115 GeV, green curves (the middle two) m_h = 119 GeV, and blue curves (highest) m_h = 123 GeV. Dashed (solid) lines have tan β = 8 (30). I take M_t ≈ 173 GeV.

The Stop Mass?
Varying M_S while keeping x = x_natural fixed traces out the Higgs mass that results in this most natural setting. Of course to go from the stop radiative corrections alone to the full Higgs mass one must either neglect the corrections from other sparticles (and so certainly steer clear of the large bottom-Yukawa regime at tan β ≳ m_t/m_b), or else pick some 'representative' value for all other sparticle masses and calculate their fixed contribution. I do the former in Fig. 3. I will first explain the range of validity of Fig. 3 before discussing the uncertainty arising from the top quark mass, shown with grey bands.

The Higgs mass expression (2.7) arises from the effective theory in which both stops have been integrated out at a single scale M_S, thus requiring that m_t̃₂ not be much heavier than m_t̃₁ [77], which is tantamount to a lower bound on M_S for validity of the expression. The lower bound is minimal when m_Q² − m_u² (which I have argued can be freely chosen) vanishes; I plot the resulting stop mass eigenvalues also in Fig. 3. This requirement can be seen to imply M_S ≳ 850 GeV. Eq. (2.7) also does not contain higher-order logs O(log³(M_S²/M_t²)), giving a corresponding upper bound for its validity. Its accuracy degrades for M_S beyond a few TeV, where x_natural becomes as high as 6 (and takes higher values still for yet larger M_S). The Higgs mass is symmetric under reflection of x about the value 6, but naturalness always favours lower values, to minimise A_t. From Fig. 3, we see that at sufficiently large M_S the derivative of m_h
Figure 3: Blue solid lines, right-hand y axis: the tree-level stop mass eigenvalues m_t̃₁,₂, assuming m_Q = m_u. Red dashed lines, left-hand y axis: the two-loop expression of [77] for the mass of the lightest CP-even Higgs boson m_h, valid for 850 GeV ≲ M_S ≲ a few TeV. Also indicated are the lowest m_h compatible with the ATLAS Collaboration's measurement m_h = (125.5 ± 0.2 (stat) +0.5 −0.6 (syst)) GeV [83] and the highest m_h compatible with the CMS Collaboration's measurement m_h = (125.7 ± 0.3 (stat) ± 0.3 (syst)) GeV [84]. All quantities are plotted as functions of M_S, with the ratio A_t²/M_S² taking its most natural value as defined in Eq. (2.8) and plotted in Fig. 1. Grey shading shows the Higgs mass uncertainty due to the top quark mass uncertainty. Upper panels take the top quark pole mass as measured by the ATLAS and CMS Collaborations and the Tevatron: M_t = (173.2 ± 0.7) GeV [85]; lower panels take M_t = (173.3 ± 2.8) GeV as extracted from the Tevatron's σ(pp̄ → tt̄ + X) measurement [86]. The left (right) panels are for tan β = 9 (30).

with respect to M_S vanishes, which is purely an artefact of the truncated expression. The Lagrange constrained optimisation, Eq. (2.5), is then solved by the Higgs mass alone maximised with respect to both of its arguments, with the Lagrange multiplier λ vanishing, i.e. the naturalness consideration decouples. Hence the solution is pushed onto exactly maximal mixing. Even higher order terms in the Higgs mass expression would be needed to push this breakdown point out to higher stop masses.

The authors of [85], following a similar analysis to [87], take the top quark mass measurement relevant for calculation of the (Standard Model) Higgs mass to be a combined measurement of the pole mass from the ATLAS and CMS Collaborations and the Tevatron: M_t = (173.2 ± 0.7) GeV. In [86] it was argued that direct experimental measurement of the top quark pole mass gives a theoretically ill-defined quantity, and that a more theoretically rigorous approach is to extract the running mass from measurement of the top pair production cross-section, and thence obtain M_t = (173.3 ± 2.8) GeV. I show both cases in Fig. 3; the choice of error in M_t has a striking effect on the Higgs mass uncertainty.

An initial hope for this work was to see whether a given Higgs mass could give an indication of the average stop mass, using the principle of optimal naturalness to reduce the function

m_h ≈ m_h(M_S, x) → m_h(M_S)|_{x = x_natural} .   (2.10)

The latter is shown in Fig. 3 with its uncertainty arising from the top quark mass uncertainty ΔM_t; where it intersects with the observed Higgs mass, which has its own error Δm_h, we see which stop masses are possible. Fig. 3 shows that even with the simplification of Eq. (2.10), the uncertainties ΔM_t and Δm_h alone make any inference of M_S from m_h very difficult. This is compounded by a theoretical uncertainty in the calculation of m_h, widely taken to be ∼ 3 GeV. The lowest M_S compatible with the observed m_h, found for large tan β and conservative ΔM_t, can be read off as ∼ 350 GeV; however the stop mass eigenvalues are then too split to trust a calculation based on integrating them both out at once (as discussed earlier). Fig. 3 also makes clear that even higher order terms than the two-loop corrections to m_h are necessary to constrain M_S from above, as the monotonic increase of m_h with M_S needs to be captured. (An upper limit on M_S without naturalness is given in [18] for split Susy, with M_S unconstrained for high-scale Susy; MSSM-to-SM matching is needed to calculate m_h with stops far beyond the weak scale.)

Footnote: Note that the derivative of interest is that of m_h with respect to M_S, with x held constant; in Fig. 3 the latter is not constant. However it is varying sufficiently slowly that when we instead hold it exactly constant, the relevant derivative still vanishes at essentially the same point.

Footnote: Necessarily for a choice of tan β; tan β large but less than m_t/m_b ∼ 40 gives us a maximal Higgs mass without a fine-tuning penalty, which is clearly optimal.
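The vanishing derivative discussed above can be exhibited directly. The sketch below is my own evaluation of the two-loop expression as reconstructed in Eq. (2.7), with the tree-level term (M_Z cos 2β)² added; the inputs tan β = 30, exactly maximal mixing x = 6, M_t = 173.2 GeV, α₃(M_Z) = 0.118 and v = 174 GeV are illustrative assumptions. The resulting m_h rises with M_S and then turns over at multi-TeV stop masses – the truncation artefact noted in the text.

```python
import math

MZ, V, ALPHAS_MZ, MT_POLE = 91.19, 174.0, 0.118, 173.2  # assumed inputs (GeV)

# Running quantities entering the two-loop expression, Eq. (2.7).
alphas_mt = ALPHAS_MZ / (1.0 + (23.0 / (12.0 * math.pi)) * ALPHAS_MZ
                         * math.log(MT_POLE**2 / MZ**2))
mt = MT_POLE / (1.0 + (4.0 / (3.0 * math.pi)) * alphas_mt)  # running top mass

def mh(ms, x=6.0, tan_beta=30.0):
    # Lightest CP-even Higgs mass: tree level (M_Z cos 2beta)^2 plus the
    # two-loop stop correction of Eq. (2.7), with x = A_t^2/M_S^2.
    c2b = math.cos(2.0 * math.atan(tan_beta))
    T = math.log(ms**2 / mt**2)
    Xt = 2.0 * x * (1.0 - x / 12.0)
    D = -(MZ**2 / (2.0 * mt**2)) * c2b**2
    eps = (1.0 / (16.0 * math.pi**2)) * (1.5 * mt**2 / V**2
                                         - 32.0 * math.pi * alphas_mt)
    dmh2 = (3.0 * mt**4 / (4.0 * math.pi**2 * V**2)) * (
        0.5 * Xt + (1.0 + D) * T + eps * (Xt * T + T**2))
    return math.sqrt((MZ * c2b)**2 + dmh2)

for ms in (500.0, 1000.0, 2000.0, 4000.0):
    print(ms, round(mh(ms), 1))
```

The negative two-loop coefficient ε multiplies T², so the truncated series inevitably develops a spurious maximum once the logarithm grows large; only higher-order (or resummed) terms restore the monotonic rise.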
I consider RG improvement to go beyond a leading log expression for δm²_{H_u}, but relegate this to Appendix B as the discussion is more involved, though ultimately gives the same x_natural.

Being analytic throughout, this study is complementary to the many numerical investigations of the Higgs in Susy performed recently, illustrating more clearly the Higgs-stop-naturalness connection. I have shown that almost maximal mixing, with x ≡ A_t²/M_S² slightly lower than 6, is optimal; though I have also shown that the distinction between this case and maximal mixing x = 6 is academic. In other words to achieve a given Higgs mass m_h, balancing A_t and M_S to optimise naturalness gives almost the same result as simply trying to minimise M_S. (Note that maximal mixing is not 'needed' to achieve m_h ≈
126 GeV, as has been reported for example in [89] – Fig. 6 of [58] for example shows that M_S = O(5 TeV) is sufficient with no mixing – it is merely a less tuned method of doing so.)

However, conversely, even remaining in the MSSM, comparing how easily different models accommodate a 126 GeV Higgs (a major focus of recent Susy phenomenology) based on how light the stops are is misleading. A maximal-mixing scenario will certainly have larger m_h than a no-mixing scenario at the same M_S. But note that the reasoning of the previous paragraph applies to a fixed mediation scale Λ. If the maximal-mixing scenario has much larger Λ than the no-mixing scenario – e.g. if we take the former to represent supergravity and the latter low-scale gauge-mediated Susy breaking (GMSB) – then it will be more unnatural not only due to the large A_t but also due to the large amount of RG running, i.e. the large logarithm in Eq. (2.3). Taking representative high- and low-scale values of Λ for these two cases, the former will have an unnatural δm²_{H_u} term ∼ 25 times larger; the two should thus compare their Higgs masses with GMSB having stops (roughly √25 times) heavier than supergravity for similar fine-tuning, changing perhaps qualitatively the result of a comparison at fixed M_S, c.f. [57, 69]. The optimal situation in the MSSM is clearly (nearly) maximal mixing with low-scale mediation: [90] and [91] realised this with the introduction of large A_t terms into GMSB via Higgs-messenger superpotential couplings.

Investigating whether minimal stop masses coincide with optimal naturalness, as done here for the MSSM, is particularly important for extensions of the MSSM. Introducing a new particle which couples strongly to the Higgs, in order to boost the latter's mass without heavy stops, is naively good for naturalness. However pushing this new coupling as far as it will go can easily be imagined to introduce a new source of tuning somewhere in the theory (analogous to the effect of large A_t on δm²_{H_u} considered here). Exactly this effect in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) was considered in [92]: the stop masses and mixing were kept small, and the naturalness implications of NMSSM-specific contributions to m_h calculated. These were found to be a tuning of the lighter scalar's couplings to hide it from current collider constraints, in the case where the 126 GeV Higgs is the second-lightest scalar; and worse tuning still when it is the lightest scalar, to undo the push-down effect of level-repulsion between mass eigenvalues. Nevertheless these tunings are at the level of 1 part in 5, making the NMSSM more natural when compared to the inescapable tuning of 1 part in ∼
100 in the MSSM to obtain m_h ≈ 126 GeV [54].
The Higgs In The NMSSM
This section is based on my work [93] done in collaboration with Daniel Albornoz Vasquez, Geneviève Bélanger, Céline Bœhm, Jonathan Da Silva and Peter Richardson; the text has been mostly re-written.
The NMSSM (see [94] for a review and original references) contains the same field content as the MSSM – Table 2 – with the addition of a gauge-singlet superfield S. The latter contains a neutral fermion – the singlino S̃, which mixes with the four MSSM-like neutralinos to give χ_{i=1,...,5}; together with a neutral scalar and pseudo-scalar, which mix with the doublet-like/MSSM-like h and H to give H_{i=1,2,3}, and with the MSSM-like A to give A_{1,2}, respectively.

A major motivation for this extension is to solve the µ-problem of the MSSM. From Eqs. (1.10) and (1.11) it is clear that |µ| cannot be considerably greater than the Higgs soft mass-squareds (particularly m²_{H_u}), or else electroweak symmetry will not be broken. However this is precisely what we would expect in the MSSM, as µ is supersymmetric (appearing in the superpotential) and hence its natural scale is the cutoff of the supersymmetric theory, which we are imagining might be the GUT scale or beyond. To tie the size of µ to the size of the soft terms one could appeal to the anthropic principle, but since the latter may address the small size of the Standard Model Higgs mass without Susy, a primary motivation is lost. Instead one can have an effective µ term arise as the VEV of a new superfield, which only acquires a VEV due to Susy breaking; in the NMSSM that new superfield is the gauge singlet S. Generally of course the explicit µ term is also present, and by introducing S there are two more dimensionful supersymmetric terms possible – a mass and a tadpole for S – which also need to be suppressed well below values comparable to the cutoff scale in order to allow EWSB.

W_NMSSM = W_{MSSM, Yukawas} + W_{Higgs-only} , with

W_{Higgs-only} = λ S H_u H_d + µ H_u H_d + χ_F S + (µ'/2) S² + (κ/3) S³ .   (3.1)

The three dimensionful supersymmetric terms – µ H_u H_d, χ_F S and (µ'/2) S² – can be forbidden with a Z₃ symmetry, giving a scale-less superpotential only containing terms with three superfields.
If this discrete symmetry (or any other) is exact down to the electroweak scale where it is spontaneously broken, cosmologically unacceptable domain walls would exist between regions of space with different charges under the symmetry – bubbles [95]. It is therefore desirable to suppress such terms with only an approximate symmetry (see the discussion in [94]).

The Z₃ symmetry forbidding dimensionful supersymmetric terms also forbids three Susy-breaking terms: −L_soft ⊃ b H_u H_d + m'²_S S² + χ_S S + c.c. The soft terms that are present and contribute to the Higgs potential at tree-level are

−L_{soft, Higgs} = m²_{H_u} |H_u|² + m²_{H_d} |H_d|² + m²_S |S|² + ( λ A_λ H_u H_d S + (κ/3) A_κ S³ + c.c. ) .   (3.2)

The coefficients of these five terms, together with those of the two remaining Z₃-symmetric terms in W_{Higgs-only} (that is, λ S H_u H_d and (κ/3) S³), define the seven parameters of the Z₃-symmetric NMSSM Higgs sector: m²_{H_u}, m²_{H_d}, m²_S, A_λ, A_κ, λ and κ. The three scalar mass-squareds can be traded for the observed M_Z, tan β, and the effective µ term µ_eff ≡ λ⟨S⟩.

The physical Higgs mass is boosted at tree-level, compared to the MSSM, by the coupling to the singlet: a purely doublet-like H₁ has m²_{H₁, tree} ≤ M_Z² cos²2β + λ² v² sin²2β, with equality in the decoupling limit. Vanishing λ is the MSSM case, for which m_h,tree plummets as tan β → 1; for λ = M_Z/v ≈ 0.52 we have m_h,tree = M_Z independent of tan β; and λ > 0.52 is when we start seeing a boost beyond what is possible in the MSSM: see Fig. 4. There is however an upper limit on the size of λ at the low scale in order to have it not encounter a Landau pole before the GUT scale – this upper limit is ∼ 0.7, with mild dependence on tan β. Pushing into the low-tan β, large-λ region, which is desirable for an enhanced tree-level m_h, therefore pushes the viability of perturbation theory. According to [96, 54] minimal tuning is achieved for maximally non-perturbative λ (namely λ ≈ 2, with a Landau pole appearing at ∼
10 TeV). This was challenged in [97] where, with a slightly different measure of fine-tuning, a different conclusion was reached for the optimal λ.

In [93] my collaborators and I analysed the Higgs and collider phenomenology of two scans of NMSSM parameter space which had been performed in [98, 99, 100] and explored in a predominantly astrophysical context. Details of how the scans were set up can be found in the original works, however I will summarise here.

• The key difference between the two scans was that the earlier one demanded a lightest-neutralino mass m_χ₁ < 15 GeV, motivated by hints of a signal in direct detection experiments [101, 102]. Since these did not materialise into more concrete observation, this requirement was dropped for the second study, which contains points with arbitrary m_χ₁. In our work [93] we analysed the two cases separately, as m_χ₁ < 15 GeV requires a delicate fine-tuning of parameters in order to be viable, and exhibits some special features. The arbitrary-m_χ₁ case was extended with an additional exploration of the λ > 0.52, small-tan β region, motivated by a possibly enhanced signal strength (see the glossary) in the Higgs to diphoton channel – see Section 3.4.
Figure 4: The maximum value (achieved in the decoupling limit) of the tree-level mass of a purely doublet-like H₁ in the NMSSM: m_{h,tree} = √(M_Z² cos²2β + λ² v² sin²2β), for λ = 0 (MSSM), λ = M_Z/v = 0.52, and λ = 0.7. A singlet-doublet coupling of strength λ > 0.52 is needed to exceed the MSSM bound at large tan β; however as tan β approaches 1, a much smaller λ is required.

• A large number of independent 'model' parameters were defined at the low scale, i.e. with no particular UV completion in mind. These parameters were taken to be the gaugino masses M₁, M₂ and M₃; Higgs sector parameters µ_eff, tan β, λ, κ, A_λ and A_κ (as discussed in the previous section); flavour-blind soft masses for the left- and right-handed sleptons m_L̃ and m_ẽ; common soft masses for the squarks of the first and second generation (M_q̃₁,₂) and third generation (M_q̃₃); and a single non-zero trilinear coupling, A_t. For the m_χ₁ <
15 GeV study three further restrictions were imposed: a common soft mass was taken for both 'chiralities' of sleptons (m_L̃ = m_ẽ), squark masses were taken to be flavour blind (M_q̃₃ = M_q̃₁,₂), and a gaugino mass-unification relation was assumed to fix M₃. The latter was taken to reduce the number of free parameters, knowing that the gluino does not play an important role in dark matter observables for light neutralinos. For the later analysis without the m_χ₁ < 15 GeV requirement these three conditions were relaxed, increasing the parameter space dimensionality by three. The spectrum and observables were calculated from the model parameters using micrOMEGAs 2.4 [103] and
NMSSMTools 2 [104, 105, 106].

• Some experimental constraints were applied as simple pass/fail criteria, while for others a likelihood was calculated, giving a goodness of fit of the model for that particular observable. A total likelihood (as the product of all of the separate likelihoods) was calculated and used to help steer the Markov Chain Monte Carlo's parameter-space exploration towards areas in better agreement with the full set of observables. Contributing factors to this likelihood were:

– A comparison of the LSP relic density to the value observed by the Wilkinson Microwave Anisotropy Probe (WMAP), Ω_WMAP h². A relic density Ω_χ h² < Ω_WMAP h² calls for another type of particle to (partially) solve the dark matter problem, or else a modification of gravity (e.g. [108]).

– Dark matter direct detection limits from XENON100 [109], gamma rays from dwarf spheroidal (dSph) galaxies probed by Fermi-LAT [110, 111], and the radio emission in the Milky Way and in galaxy clusters [112, 113].

– The anomalous magnetic moment of the muon, (g − 2)_µ.

– Constraints included within NMSSMTools from b physics, and LEP and Tevatron Higgs and Susy searches (including invisible decays of the Z).

– Theoretical constraints, namely the absence of Landau poles and of unphysical global minima of the scalar potential.

In [93] we studied the effect of applying further constraints to these scans. LHC searches for Susy were considered, in the form of the ATLAS Collaboration's 1.
04 fb⁻¹ jets+/E_T search, which I implemented and validated in [114]. For the reader unfamiliar with such searches, the points in this paragraph may become clearer after the discussions in Section 7. As the standard tool for the calculation of Next-to-Leading Order (NLO) cross-sections in Susy – Prospino [115, 116, 117, 118, 119] – is restricted to the MSSM, constraints were derived at Leading Order (LO), unlike in Section 7. Without the K-factor (see the glossary), which is O(1 − 3) in the MSSM, these constraints are expected to be slightly conservative. To derive them, events (see the glossary) were generated with Herwig++ 2.5.1 [120, 121] and analysed with RIVET 1.5.2 [122]. These limits are capable of excluding first and second generation squarks lighter than roughly a TeV. However for a singlino-like neutralino LSP, the squarks and gluinos cannot decay to this LSP directly but must do so via an intermediate particle, frequently the second-lightest (MSSM-like) neutralino. As noted in [123] this reduces the acceptance (see the glossary) into jets+/E_T search channels, as the extra step reduces the /E_T and may result in leptons.

Footnote: Susy searches with leptons would then have greater sensitivity, but this typically does not compensate for the loss of sensitivity in the 0-lepton search [123]; at first glance a strange result, since leptons are more striking objects over the quantum chromodynamics (QCD) background, but when large /E_T is replaced by modest /E_T and a modestly hard lepton, backgrounds with W bosons become relevant.

Figure 5: The jets+/E_T signal arising in the NMSSM with a singlino-like LSP, and a χ₂ which is much lighter than the squarks and which decays to χ₁ with a Higgs ('A or H').

A third factor which I observed to be of equal importance is that even if the intermediate state (between squark/gluino and LSP) decays into the LSP with a jet rather than a lepton, there will be greater alignment between the /p_T and one of the jets, causing greater difficulty in meeting the angular separation cut ∆φ(jet, /p_T). For the m_χ₁ <
15 GeV scan, whenever the coloured sparticles were light enough to be within reach of the LHC they were associated with a singlino-like χ̃₁. The jets+/E_T search has highly reduced sensitivity for the aforementioned reasons and excluded very few of the points. For the arbitrary-m_χ₁ scan, however, the singlet sector particles were generally much heavier, so that the LSP was not singlino-like. The MSSM-like q̃ → q χ̃₁ and g̃ → qq χ̃₁ decays then take place, resulting in familiar limits on m_q̃ and m_g̃. The possibility of alignment between the /p_T and a jet prompted me to note in [93] the signal shown in Fig. 5. This arises in the case of a singlino-like χ₁ (forcing squarks to decay instead via χ₂) and a Higgsino-like χ₂ which is (a) able to decay to χ₁ with a Higgs (the latter going to b b̄) and (b) much lighter than the squarks (making it boosted). The result is a /p_T vector which is sandwiched in between two jets, each consisting of two b quarks. In [124] (Section 8 of this thesis) I showed the relevance of this topology to simpler final states and many more classes of models, and how it can be exploited to reconstruct mass peaks.

The main constraint we were interested in investigating in [93] was compatibility of the NMSSM with the first observation of the 126 GeV Higgs. At the time this had also been studied in [54, 61, 125, 126, 127] (and our preliminary results had appeared in [128]). Following the suggestion of [56] we considered 122 < m_h/GeV < 128 as the favoured range, within experimental and theoretical errors. Note that the observed Higgs h can correspond to H₁, H₂ or both at once (also in principle H₃ is possible, though we did not find a viable example). As well as having the correct mass, a candidate for the observed signal must also have the correct signal strengths. These quantities are defined via µ, the factor by which the expected signal should be multiplied in order for the signal plus background to best agree with the observed events, as a function of m_h (see my discussion of blue-band plots in Appendix C).

Proton collisions can produce a Higgs in different ways, however the mechanism with the largest total cross-section (by a factor ∼
10) for m_h ∼ 126 GeV is the fusion of two gluons. If cuts (see the glossary) are imposed, they can have different acceptances for the different production mechanisms and thus preferentially select some more than others, possibly reducing the dominance of gluon fusion. This is not the case for the cuts used in the three most sensitive channels h → γγ, WW, ZZ, and so for the signal strength in these channels a good approximation is to consider the normalised production cross-section as being given by the normalised coupling to gluons (squared):

σ_prod / σ_prod,SM ≈ g²_hgg / g²_hgg,SM ,   (3.3)

R_ggXX ≡ ( g²_hgg BR(h → XX) ) / ( g²_hgg,SM BR(h → XX)_SM ) ,   (3.4)

and we take R_ggXX as the signal strength in the XX channel, XX = γγ, WW, ZZ. As a loose criterion for sufficient visibility to correspond to the observed signal, I separated out points with R_ggγγ ≥ 0.4, demanding rough agreement with the most significant measurement – h → γγ as seen by the ATLAS Collaboration.

To supplement the Higgs constraints hard-coded into NMSSMTools, I interfaced the latter to HiggsBounds [129, 130] for a further thorough check of existing Higgs-sector limits (see also Section 7.2). The code for doing this is available at [131].

m_χ₁ < 15 GeV
Firstly we considered the broader case of χ₁ as only a partial contributor to the observed relic density: 10% Ω_WMAP h² < Ω_χ h² ≤ Ω_WMAP h² (with the lower bound admittedly somewhat arbitrary). In Fig. 6 I plot the distribution of masses and diphoton signal strengths for H₁. In our scan H₁ is most often doublet-like, i.e. with couplings like h in the MSSM, and thus values of m_H₁ extend roughly down to the LEP limit and up to 130 − 150 GeV. The four colours are explained in the caption; black points are good candidate points – at least one Higgs has mass ∈ [122, 128] GeV and R_ggγγ ≥ 0.4. Fig. 6 shows that this Higgs is almost always H₁ in our scan, and for a handful of points it is H₂ (visible as black points with m_H₁ ∉ [122, 128] GeV).

With m_χ₁ <
15 GeV, at least one extra decay mode is open to the candidate Higgsof mass ∼
126 GeV: h → χ χ . Furthermore, sufficiently efficient annihilationof such light neutralinos for acceptable relic density is not found to be possiblewithout a resonant s -channel scalar A or pseudo-scalar H [98]. Hence either49igure 6: Diphoton signal strength R ggγγ as a function of the mass of H inthe scan with m χ <
15 GeV. Red points are ruled out either by jets+ /E T constraints (only three points) or by HiggsBounds 3.6.1 ; other colours passthese constraints. Green points have no scalar with mass ∈ [122 , H and/or H ) but with R ggγγ < .
4, andblack points have such a scalar with R ggγγ ≥ . A or H has mass ∼ m χ (cid:46)
30 GeV, as shown in Fig. 7, and a Higgs-to-Higgsdecay is also open.In Fig. 8 I plot only the (previously black) good candidate points, and showthe effect on the diphoton signal strength of competing beyond the StandardModel (BSM) decays h → χ χ , χ χ , A A , H H . One sees R ggγγ ≈ − BR ( h → BSM), showing that the presence or absence of these decays is thedominant factor controlling signal strength. In Fig. 9 I show how the branchingratios for these decays are distributed amongst the good candidate points.As an aside it is interesting to note that a light scalar or pseudo-scalar x ,with reduced couplings to standard model particles (to explain current non-observation) but into which the Higgs can decay, is seen here in the NMSSMbut is a widely motivated phenomenon. For example a symmetry of the Higgspotential explicitly broken by a small term in the Lagrangian will give a lightdegree of freedom strongly coupled to the physical Higgs (see the more detaileddiscussion in [132]). The decay of x following h → x is model dependent, andmany possibilities have been considered. One example is a decay to gluonswhich suffers from a huge QCD background [133], however a boosted h → x → g fat jet has characteristic (and intuitive) jet substructure [134, 135]: the twohardest subjets are likely to have small masses (equal to m x ), they are likelyto have similar masses, and they are likely to be much harder than the thirdhardest subjet. Other possible decays include those to taus [136, 137], tausand muons [132], charm quarks [138], and generic combinations of hadronicand missing energy [139].A lighter Higgs could of course be produced directly, as well as via the decay50igure 7: Masses of the lightest scalar H and pseudoscalar A , in the scanwith m χ <
15 GeV. The colour coding is as in Fig. 6.
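The near-linear relation R_ggγγ ≈ 1 − BR(h → BSM) seen in Fig. 8 is simple width book-keeping: a new decay mode dilutes every Standard Model branching ratio through the total width. A minimal numerical sketch of this (my own illustration, using rough reference values for a ∼126 GeV Standard-Model-like Higgs – total width ∼4 MeV, BR(h → γγ) ∼ 0.23% – rather than numbers from the scan):

```python
# Signal-strength dilution by a new decay mode, assuming SM-like production
# and unmodified SM partial widths. Rough reference values for a ~126 GeV
# SM Higgs: total width ~4.2 MeV, BR(h -> gamma gamma) ~ 0.23%.
GAMMA_TOTAL_SM = 4.2e-3                    # GeV
GAMMA_GAMGAM = 0.0023 * GAMMA_TOTAL_SM    # GeV

def br_bsm(gamma_bsm):
    """Branching ratio into the new (BSM) mode."""
    return gamma_bsm / (GAMMA_TOTAL_SM + gamma_bsm)

def r_gg_gamgam(gamma_bsm):
    """R_gg-gamgam: the diphoton rate normalised to its SM value."""
    br = GAMMA_GAMGAM / (GAMMA_TOTAL_SM + gamma_bsm)
    return br / (GAMMA_GAMGAM / GAMMA_TOTAL_SM)

for g in (0.0, 1.0e-3, 4.2e-3):            # trial BSM widths in GeV
    # In this approximation R equals 1 - BR(h -> BSM) identically:
    print(round(br_bsm(g), 3), round(r_gg_gamgam(g), 3))
```

With Standard-Model-like production and unmodified partial widths the identity R = Γ_tot,SM/(Γ_tot,SM + Γ_BSM) = 1 − BR(h → BSM) is exact, which is why the branching ratio into new states is the dominant factor controlling the signal strength.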
Figure 8: The reduction in diphoton signal strength, R_ggγγ(H), from H → γγ competing with new BSM decays, plotted against BR(H → χ̃χ̃, H_1H_1 or A_1A_1). Good candidate points (those with a scalar of mass ∈ [122, 128] GeV and R_ggγγ ≥ 0.4) in the scan with m_χ̃₁ < 15 GeV are shown.

Figure 9: Unit-normalised distributions of the branching ratios into the different BSM decay channels – BR(H → χ̃χ̃), BR(H → H_1H_1) and BR(H → A_1A_1) – for good candidate points (those with a scalar of mass ∈ [122, 128] GeV and R_ggγγ ≥ 0.4) in the scan with m_χ̃₁ < 15 GeV.

The substantial singlet component necessary to escape existing constraints suppresses the lighter Higgs's coupling to Standard Model particles. In the NMSSM (and other type II two-Higgs-doublet models [140]) couplings to leptons and down-type quarks are enhanced by tan β, possibly enhancing the associated production of the lighter Higgs with a b quark, followed by a decay to 2τ. In this scan however we found an H_1bb coupling at most equal to the Standard Model value.

Finally we checked the effect of requiring that points in the scan have a relic density within 1σ of the WMAP-observed value (rather than merely not exceeding it). This depleted the density of points surviving all constraints apparently uniformly, without causing any further noticeable correlations in the Higgs signals.

Arbitrary m_χ̃₁

The condition m_χ̃₁ < 15 GeV required a fine tuning of parameters to be viable, as I have mentioned, and forced the lightest scalar or pseudo-scalar to have mass ∼ 2m_χ̃₁ (and thus necessarily be singlet-like). The scan without this condition typically gave rise to χ̃₁ considerably heavier than 15 GeV, which can annihilate efficiently through exchange of a Z or a light slepton, and therefore does not require (part of) the singlet sector to be light for acceptable relic density. Indeed in this scan the singlet sector is generally heavier than in the previous one, and H_2 rather than H_1 is usually the candidate with mass ∈ [122, 128] GeV. We allowed χ̃₁ to be simply a contributor to the observed relic density, now without a lower bound: Ω_χ h² ≤ Ω_WMAP h². In Fig. 10 I show plots made by my collaborator Daniel Albornoz Vasquez illustrating the distribution of the diphoton signal with mass, for all such points. Constraints from HiggsBounds 3.6.1 have been checked; those from jets+/E_T searches have not (yet). Very strong enhancement of the H_1 diphoton signal strength is seen, though mostly for masses below ∼ 126 GeV. For both H_1 and H_2 a modest enhancement O(2) is seen in the favoured mass range. This is possible when the singlet scalar eigenstate is close in mass to the lightest doublet-like scalar, such that the singlet nature of the former partially mixes into the latter, suppressing the largest Higgs decay width, Γ(h → bb̄), and permitting an increased branching ratio to two photons, as explained in [141, 125].

Figure 10: R_ggγγ as a function of the mass of H_1 (left panel) and of H_2 (right panel), in the scan with arbitrary m_χ̃₁. In the left panel most points are on top of one another at high mass and high R_ggγγ. The colour coding is as in Fig. 6 except that jets+/E_T constraints are not checked. Plots by Daniel Albornoz Vasquez.

Such a mechanism for enhancing the diphoton signal strength is weakly motivated by (statistically insignificant) excesses observed particularly by the ATLAS Collaboration – currently at the level of about 2σ [142, 143] – and previously by the CMS Collaboration (no longer – [144]). Let us understand it in greater detail. With the matrix S_ig relating the gauge-eigenstate scalars g = H_u, H_d, S to the mass eigenstates H_{i=1,2,3}, the modified couplings are, at tree level:

g_{H_i bb} / g_{H_i bb,SM} = S_id / cos β ,   (3.5)

g_{H_i VV} / g_{H_i VV,SM} = S_id cos β + S_iu sin β .   (3.6)

The latter is relevant because the decay of a Higgs to two (massless) photons is purely loop-induced, dominantly through a W loop, thus strongly correlating the coupling to γγ with the coupling to WW. If the coupling to WW is decreased exactly in line with the coupling to bb, the diphoton signal strength will remain Standard Model-like (unless new light charged particles enhance the coupling of the Higgs to photons). Comparing Eqs. (3.5) and (3.6) we see that the diphoton signal strength is not enhanced when cos β = 1, which means the bottom-type Yukawas are exactly Standard Model-like; mixing a singlet with the doublet state then depletes all couplings uniformly, as would happen in the Standard Model, preventing modification of the branching ratios. We see that non-trivial mixing between the two doublet states (H_u and H_d) is required, in addition to mixing with the singlet.

As for the scan with m_χ̃₁ < 15 GeV, we checked the effect of requiring χ̃₁ to make up all of the WMAP-observed relic density (within 1σ). This eliminated all of the points in our scan where the diphoton signal strength was appreciably enhanced, 1 ≲ R_ggγγ, leaving a maximum R_ggγγ of ∼ 1. Those points obtained their light singlet scalar, m²_S = (κμ_eff/λ)(A_κ + 4κμ_eff/λ) [145, 94], by virtue of a small μ_eff ≲ 200 GeV. This leads to some amount of Higgsino component in χ̃₁, allowing it to annihilate more efficiently and reducing the relic density (in our case always below Ω_WMAP h²). Nevertheless in [63] an enhanced diphoton rate was found in the NMSSM with a WMAP-compatible relic density; note that a small μ_eff helps to make the singlet scalar lighter, but is not the only way of doing so.

For these WMAP-saturating points, and further specialising to those with 122 < m_h/GeV < 128, we checked the jets+/E_T constraints as mentioned in Section 3.2.* This eliminated all points with squark and gluino masses below the corresponding search limits. In Fig. 11 I plot the diphoton signal strength against m_χ̃₁, and against a dimensionless measure of the extent to which the h → χ̃₁χ̃₁ decay is off shell: (m_h − 2m_χ̃₁)/m_h. The cause of R_ggγγ < 1 is evident.

Figure 11: Diphoton signal strength plotted against m_χ̃₁ and against a dimensionless measure of the extent to which the h → χ̃₁χ̃₁ decay is off shell, namely (m_h − 2m_χ̃₁)/m_h. Points from the scan with arbitrary m_χ̃₁, passing all considered constraints and with 122 < m_h/GeV < 128, are shown.

*A restricted application of LHC Susy search limits was chosen as they are computationally very expensive to check, requiring the generation of at least 10⁴ hadron-level events for each model point. Doing this for exponentially large numbers of model points should be discouraged for environmental reasons, especially when there is questionable physical insight to be gained by doing so. Examples include model points which can be discarded as physically uninteresting for some other reason, and model points with similar branching ratios and kinematics to the simplified or constrained models for which the cross-section limits have already been interpreted as mass limits.

Part III  Gauge-Mediated Susy Breaking
As mentioned in Section 1.3, Susy breaking must occur in a hidden sector and be mediated to the visible sector (containing the MSSM), with the three main candidates for the mediator being gravitational interactions, extra dimensions, and gauge interactions. In this part of the thesis I will discuss some phenomenological aspects of the last of these cases – gauge-mediated Susy breaking (GMSB).

4.1 R Symmetry And Susy Breaking
One of the relations of the Susy algebra is that the Hamiltonian is a positive semi-definite sum of (squares of) the fermionic generators. This means that, for a Susy theory, the vacuum |0⟩ has unbroken Susy if and only if it has vanishing energy; Susy is broken if and only if the vacuum energy is strictly greater than zero. To describe our universe with Susy (which is necessarily broken) we must then ensure either that the RHS of Eq. (1.6) is greater than zero (F-term breaking) or that the RHS of Eq. (1.7) is greater than zero (D-term breaking). Phenomenological problems with the latter have concentrated most efforts on the former, with non-vanishing F-terms widely considered to be necessary (not just sufficient).

An R symmetry is a symmetry of the Lagrangian but not of the superpotential: it is the continuous symmetry

θ, dθ, W(Φ), L → e^{iα} θ, e^{−iα} dθ, e^{2iα} W(Φ), L .   (4.1)

Under this symmetry the superpotential has charge 2: we write R(W) = 2. In [146] a deep connection between R symmetry and dynamical F-term Susy breaking was established for generic, calculable theories. Dynamical Susy-breaking models are those in which none of the mass scales associated with Susy breaking are put in by hand – all are generated by dimensional transmutation, being of the form M_P e^{−a/g²} with a = O(4π²). Calculable theories are those with a limit in which (asymptotically free) gauge dynamics can be integrated out at some scale, leaving an effective theory with only chiral superfields and no gauge fields (or gauge fields which do not couple to the remaining light chiral fields). These chiral superfields Φ_{i=1...N} can then be charged under global symmetries, and there are exactly three possibilities.

• There are no global symmetries. The superpotential W is a function of N variables; the vanishing of the RHS of Eq. (1.6) amounts to N constraints for the existence of a supersymmetric vacuum.

• There is no R symmetry but there is a non-R symmetry, with l generators. W must be a sum of terms with zero charge, and so must be a function of N − l variables; a vanishing RHS of Eq. (1.6) thus gives N − l constraints (with l constraints trivially satisfied).

• There is an R symmetry.

If the superpotential is generic – that is, all terms not forbidden by symmetries are non-zero – then in the first two cases a solution can be found and the vacuum is supersymmetric. An R symmetry is thus necessary for Susy breaking in the true vacuum. This does not yet show, however, that Susy breaking is then possible. Now consider that the R symmetry is spontaneously broken, with Φ_1 (say) getting a VEV, and R(Φ_1) = q ≠ 0. Since R(W) = 2 we can write W = Φ_1^{2/q} f(Φ′_{i=2...N}), where Φ′_i = Φ_i / Φ_1^{q_i/q} and f is an arbitrary function of these N − 1 variables; the vanishing of the RHS of Eq. (1.6) then amounts to N constraints on N − 1 variables, which generically cannot all be satisfied. A spontaneously broken R symmetry is thus sufficient for Susy breaking.
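These statements can be made concrete in the classic O'Raifeartaigh model [148] (the following worked example is my own illustration; k, y and m are arbitrary couplings, and a Z₂ symmetry under which φ₁ and φ₂ are odd is assumed in order to make the superpotential generic):

```latex
% R charges: R(X) = R(\phi_2) = 2, R(\phi_1) = 0, so every term has
% R(W) = 2; together with the Z_2, these charges forbid all other
% renormalisable terms, making W generic.
W = -k^2 X + \tfrac{y}{2}\, X \phi_1^2 + m\, \phi_1 \phi_2 \,, \qquad
\partial_X W = -k^2 + \tfrac{y}{2}\phi_1^2 \,, \quad
\partial_{\phi_1} W = y X \phi_1 + m \phi_2 \,, \quad
\partial_{\phi_2} W = m \phi_1 \,.
% \partial_{\phi_2} W = 0 forces \phi_1 = 0, whence \partial_X W = -k^2
% \neq 0: the N = 3 F-term conditions cannot all be satisfied and Susy is
% broken, as expected for a generic R-symmetric superpotential.
```

Here X plays the role of the field with an R-symmetry-breaking VEV direction, and the F-term conditions over-constrain the remaining variables exactly as in the counting above.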
These separate necessary and sufficient conditions are together called the Nelson-Seiberg Theorem.

Closely related to the Nelson-Seiberg Theorem is the theorem of Shih [147] for O'Raifeartaigh models of Susy breaking [148] with a single pseudo-modulus.² This states that a sufficient condition for Susy breaking is an R symmetry and at least one field with an R charge not equal to either 0 or 2, under all consistent charge assignments (including e.g. mixing of the U(1)_R with another global U(1)). Recently a stronger version of the Nelson-Seiberg Theorem has been proposed [149]: for a calculable theory with a generic superpotential, a necessary and sufficient condition is that there are more fields with R charge 2 than with R charge 0 under all consistent charge assignments.

An important ramification of the Nelson-Seiberg Theorem is that it pushes us towards metastable Susy breaking (see e.g. [150]). If the R symmetry is broken spontaneously, there is an exactly massless NGB – the R axion – which is ruled out by astrophysical and experimental bounds [151, 152]. If the R symmetry is broken explicitly, the R axion becomes a permissible pNGB; but since the Lagrangian is not R symmetric there is a supersymmetric vacuum. As the explicit breaking tends to zero, however, the supersymmetric vacuum 'moves out to infinity' away from a local minimum of the potential at finite field strengths (which should exist for metastability), making the latter arbitrarily long-lived. The ISS model of [153] is the prototypical model of this sort.

²A modulus is a direction in field space which is classically massless; a pseudo-modulus is a modulus which becomes massive in the quantum field theory due to radiative corrections.

4.2 The Gauge Mediation Parameter Space

General Gauge Mediation (GGM), defined in [73], is the collection of all models where the Susy-breaking hidden sector decouples from the supersymmetric MSSM as the Standard Model gauge interactions are turned off. GGM allows up to six parameters Λ_{G,r}, Λ_{S,r} (r = 1, 2, 3) for specifying the gaugino and sfermion masses at some high scale characterising the hidden sector:

M_{λ,r} = (α_r / 4π) Λ_{G,r} ,   (4.2a)

m²_f̃ = 2 Σ_{r=1}^{3} C(f̃, r) (α_r / 4π)² Λ²_{S,r} ,   (4.2b)

where C(f̃, r) is the quadratic Casimir of the matter representation f̃ under the gauge group r, and a GUT normalisation of α_1 is used: g_1 = √(5/3) g′. The Λ parameters are all-order correlators of the (possibly strongly coupled) hidden sector. GGM was further developed in [154], in which the following remark is relevant: "the fact that the gaugino masses are complex . . . implies that GGM does not solve the Susy CP problem. So additional mechanisms (such as an R-symmetry as in [155], or having the hidden sector be CP invariant) must be invoked to explain why the gaugino masses are real . . . we will assume that such a mechanism is at work and only consider CP invariant theories, so that the parameter space of GGM spans R⁶." Also in [154] is an existence proof for the possibility of fully spanning this six-dimensional model space.

The relations (4.2) allow exploration of GGM phenomenology in blissful ignorance of the hidden-sector dynamics, by scanning through Λ_{G,r}, Λ_{S,r}. It is instructive however to consider concrete models with more restricted parameter spaces, for which the Λ parameters can be calculated. The simplest GMSB superpotential consistent with gauge-coupling unification is W = λX Φ̃Φ, with the Φ̃Φ messenger pair in the vector-like 5 ⊕ 5̄ representation of the SU(5) gauge group. All information about Susy breaking relevant to the visible sector is parameterised in the (gauge-singlet) spurion superfield X which, due to some unknown but ideally dynamical Susy breaking in the hidden sector, acquires a VEV ⟨X⟩ = M + θ²F. The resulting gaugino (sfermion) masses can be calculated either explicitly with one-loop (two-loop) diagrams [156, 157, 158] or more simply using wavefunction renormalisation [159]; in the language of the Λ parameters of Eq. (4.2) they are

W = λX Φ̃Φ  ⟹  Λ_G = Λ_S = F/M ,   (4.3)

where a suppressed gauge-group index r on a Λ parameter implies a common value across the gauge groups. Eq. (4.3) is corrected by terms of higher order in F/M²; in practice we often consider this dimensionless parameter to be small, so that the effective theory below the scale M has only soft Susy-breaking effects. With the superpotential Eq. (4.3) the mass given to the messenger pair Φ̃Φ is M. For SU(5) messengers, only up to four independent scales will be obtained: we always have Λ_{G,1} = (3/5) Λ_{G,2} + (2/5) Λ_{G,3} and Λ²_{S,1} = (3/5) Λ²_{S,2} + (2/5) Λ²_{S,3}, according to the relative hypercharges of the doublet l and triplet q components of the SU(5) fundamental when decomposed onto SU(3) × SU(2) × U(1).

As discussed in many places, such as [160, 161, 155, 162, 163] (this list is not selective), one can generalise (4.3) with (a) multiple 5 ⊕ 5̄ messengers, (b) multiple spurions (or, equivalently, supersymmetric messenger mass terms), and (c) doublet-triplet splitting, i.e. independent Yukawa couplings for the doublet l and triplet q components of the 5. I summarise these 2 × 2 × 2 possibilities in Tables 3 and 4.

• Multiple messengers.
Each messenger appears in the one-loop gaugino mass diagrams and the two-loop sfermion mass-squared diagrams, and so contributes additively to Λ_G and Λ²_S. Therefore generalising Φ → Φ_{i=1...N}, with the Yukawa coupling λ becoming a matrix λ_ij in messenger flavour space, has the effect

Λ_G = Λ_S = F/M  →  Λ_G = Σ_{i=1}^{N} F/M = N F/M ,  Λ²_S = Σ_{i=1}^{N} (F/M)² = N (F/M)² ,   (4.4)

provided we can diagonalise λ_ij, giving a basis in which the messengers are independent (more on this later).

• Multiple spurions. In Eq. (4.3), note that F/M is really λF/λM, since these dimensionful parameters only occur in conjunction with the Yukawa coupling in the superpotential. Thus generalising to multiple spurions, λX → λ_a X_a, replaces λF/λM by λ_a F_a / λ_b M_b in the soft masses. Unless F_a/M_a (no sum over a) is independent of a, the Yukawa couplings no longer cancel in this ratio.

• Doublet-triplet splitting. Even if the Yukawa couplings of l and q to the spurion are equal at the GUT scale, as would be required by the unified gauge symmetry, they run apart below it: λ becomes λ_l in the expressions for Λ_{G,2} and Λ_{S,2}, and λ_q in the expressions for Λ_{G,3} and Λ_{S,3}.

(With ⟨X⟩ = M + θ²F, the messenger fermion has mass M and the two scalar degrees of freedom have mass-squareds M² ± F; note that this symmetric Susy breaking obeys Eq. (1.9) as it is a tree-level effect. Thus F < M² is necessary to avoid tachyonic messengers, and F ≪ M² avoids a hard Susy-breaking theory in which one messenger scalar, with m²_φ = M² + F, has been integrated out but the other, with m²_φ = M² − F, has not. Susy broken only softly is necessary for a wavefunction-renormalisation calculation [159].)

(In [154] it is shown that, decomposing the total messenger sector into separate irreps with respect to the Standard Model SU(3) × SU(2) × U(1), each irrep contributes additively to Λ_{G,r} and Λ²_{S,r}, in proportion to its Dynkin index with respect to gauge group r. Contributing linearly to Λ_{G,r} but in quadrature to Λ_{S,r} means a single irrep can result in up to two free parameters; six independent Λ_{G,r}, Λ_{S,r} therefore require at least three different irreps, for example as found in larger representations of SU(5). The fundamental has only two: q and l.)

Table 3: Manifestly SU(5)-symmetric messengers Φ̃Φ (the arrow denotes restriction to diagonal couplings, as discussed below).

                 1 × (5 ⊕ 5̄)                       N × (5 ⊕ 5̄)
1 × X            W = λ X Φ̃Φ                        W = λ_ij X Φ̃_i Φ_j
                 Λ_G = Λ_S = F/M                   Λ_G = N F/M ,  Λ_S = √N F/M
N_X × X_a        W = λ_a X_a Φ̃Φ                    W = λ_aij X_a Φ̃_i Φ_j → Σ_i λ_ai X_a Φ̃_i Φ_i
                 Λ_G = Λ_S = λ_a F_a / λ_b M_b     Λ_G = Σ_i λ_ai F_a / λ_bi M_b ,  Λ²_S = Σ_i (λ_ai F_a / λ_bi M_b)²

Table 4: Doublet-triplet-split messengers l̃l, q̃q; results are shown for r = 2, 3, with Λ_{G,1} and Λ_{S,1} then given by the hypercharge-weighted combinations above.

                 1 × (5 ⊕ 5̄)                               N × (5 ⊕ 5̄)
1 × X            W = λ_l X l̃l + λ_q X q̃q                   W = λ_l,ij X l̃_i l_j + λ_q,ij X q̃_i q_j
                 Λ_{G,r} = Λ_{S,r} = F/M                   Λ_{G,r} = N F/M ,  Λ_{S,r} = √N F/M
N_X × X_a        W = λ_la X_a l̃l + λ_qa X_a q̃q             W = λ_la,ij X_a l̃_i l_j + λ_qa,ij X_a q̃_i q_j → Σ_i (λ_la,i X_a l̃_i l_i + λ_qa,i X_a q̃_i q_i)
                 Λ_{G,r} = Λ_{S,r} = λ_ra F_a / λ_rb M_b   Λ_{G,r} = Σ_i λ_ra,i F_a / λ_rb,i M_b ,  Λ²_{S,r} = Σ_i (λ_ra,i F_a / λ_rb,i M_b)²

Any matrix can be diagonalised by a bi-unitary transformation; rotating Φ̃_i and Φ_j independently we can always bring λ_ij X = λ_ij (M + θ²F) to diagonal form. For multiple spurions X_a with non-universal F_a/M_a (no sum), λ_aij M_a is not aligned in messenger flavour space with λ_aij F_a, and so each requires a different bi-unitary transformation. Said differently, the required rotation in messenger flavour space must involve θ and thus cannot commute with Susy transformations. Such a rotation can be performed, but it will necessarily mix the scalar and fermionic components of the messengers differently in the Kähler potential, and so one cannot integrate out each messenger separately.
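The pattern in the right-hand columns of Tables 3 and 4 – effective F/M ratios adding linearly in Λ_G but in quadrature in Λ_S – is easily encoded. A minimal sketch for diagonal couplings λ_ai (function and variable names are my own, purely illustrative):

```python
def ggm_lambdas(lam, F, M):
    """Lambda_G and Lambda_S for messengers i coupled diagonally to spurions
    <X_a> = M_a + theta^2 F_a, as in the bottom-right entry of Table 3:
    each messenger's effective F/M ratio adds linearly to Lambda_G and in
    quadrature to Lambda_S."""
    n_spur, n_mess = len(lam), len(lam[0])
    ratios = [sum(lam[a][i] * F[a] for a in range(n_spur)) /
              sum(lam[a][i] * M[a] for a in range(n_spur))
              for i in range(n_mess)]
    return sum(ratios), sum(x * x for x in ratios) ** 0.5

# One spurion, one messenger: minimal GMSB, Lambda_G = Lambda_S = F/M.
print(ggm_lambdas([[1.0]], F=[0.5], M=[100.0]))

# One spurion, N = 3 degenerate messengers: N*(F/M) and sqrt(N)*(F/M),
# reproducing Eq. (4.4).
lam_G, lam_S = ggm_lambdas([[1.0, 1.0, 1.0]], F=[0.5], M=[100.0])
```

With several spurions of non-universal F_a/M_a, the Yukawa couplings visibly fail to cancel in the ratios, as in the multiple-spurions bullet point above.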
This encourages restriction of the couplings λ_aij to diagonal form λ_ai, in which case the simple logic of the multiple-messenger and multiple-spurion bullet points preceding this paragraph suffices to calculate the tricky case of multiple messengers and multiple spurions. (Thanks to Alberto Mariotti for pointing this out.)

Restriction to a diagonal messenger-spurion coupling is also necessary in general, as noted in [162], to suppress potentially tachyonic hypercharge D-term contributions to sfermion masses. This is because a diagonal form ensures the existence of a messenger parity, defined in [160], under which the messenger sector is unchanged at tree level by Φ_i, Φ̃_i, V → Φ̃_i, Φ_i, −V (with V a gauge field). This parity forbids the troublesome hypercharge term. It is an approximate symmetry only, necessarily broken by the visible-sector couplings with the gauge fields, but this is enough to make the hypercharge term arise safely at three or more loops instead of at one loop.

A different approach to calculating the soft terms for multiple messengers and spurions is taken in [155], which I will summarise below. One can rotate in the basis of spurions to give just one spurion that has an F-term, plus a supersymmetric mass matrix. Explicitly:

λ_aij X_a = λ_aij (M_a + F_a θ²) = [(F_a/F) λ_aij] (M + F θ²) + λ_aij [M_a − (F_a/F) M] ≡ λ_ij X + m_ij .   (4.5)

The explicit spurion-relabelling symmetry of the LHS is implicit in the RHS, due to a freedom to shift an arbitrary amount of the supersymmetric part of the spurion into the supersymmetric mass matrix:

λ_ij M + m_ij = λ_ij (M + M′) + (m_ij − λ_ij M′) .   (4.6)

In the spurion basis defined by the rotation in Eq. (4.5), the authors of [155] assume a non-trivial R symmetry (i.e. R(X) ≠ 0). This uniquely specifies the splitting in Eq. (4.6), and picks out one spurion from amongst the X_a as unique, since the charged and uncharged terms cannot mix; in this way the spurion-relabelling symmetry is seen to disappear from both the LHS and the RHS of Eq. (4.5). (Note that if the R symmetry existed in the pre-rotation basis X_a, it might prevent rotation to the single-spurion form; an alternative to the idea of rotation is of course a set-up in which there genuinely is just one spurion plus supersymmetric masses for the messengers.) For any λ_ij and m_ij consistent with the R symmetry (which enforces λ_ij = 0 if m_ij ≠ 0 and m_ij = 0 if λ_ij ≠ 0), the soft masses are then calculated as

Λ_G = F ∂_X log det(λ_ij X + m_ij) |_{X=M} = n F/M ,  with  n = [1/R(X)] Σ_{i=1}^{N} [2 − R(Φ_i) − R(Φ̃_i)] ,   (4.7a)

Λ²_S = (1/2) |F|² Σ_{i=1}^{N} (∂²/∂X ∂X*) (log |M_i|²)² |_{X=M} ,   (4.7b)

where M_i is the i-th eigenvalue of M_ij = λ_ij X + m_ij. I add to Eq. (4.7b) only the comment that it is necessary for the interest of the set-up that the diagonalisation of M_ij does not respect the R symmetry. Otherwise, each eigenvector of M_ij is a state of definite R charge, with an eigenvalue which is either a coupling to the spurion or a supersymmetric mass; there are n of the former and N − n of the latter kind of eigenvector. Eq. (4.7b) then evaluates trivially to Λ²_S = n (F/M)², as it must, matching Λ_G = n F/M to give nothing but minimal GMSB with n messengers and one spurion.

For doublet-triplet-split EOGM, independent Λ_{S,2} and Λ_{S,3} are found: Eq. (4.7b) with M_i replaced by the eigenvalues of the doublet and triplet mass matrices respectively. Λ_{G,2} and Λ_{G,3} however remain constrained to be equal, based on the assumption of the doublets' R charges matching those of the triplets – they thus share a value for the quantity n in (4.7a).

5 The Role Of The Messenger Scale, Sum Rules And RG Invariants
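Eq. (4.7a) is easy to check numerically for any explicit choice of λ_ij and m_ij. A sketch with a hypothetical 3 × 3 example (two messengers coupled to the spurion, one with a purely supersymmetric mass; the numbers are arbitrary, chosen only for illustration):

```python
import numpy as np

def lambda_G(lam, m, F, M, eps=1e-6):
    """F * d/dX log det(lam*X + m) evaluated at X = M (Eq. 4.7a),
    via a central finite difference."""
    logdet = lambda X: np.log(abs(np.linalg.det(lam * X + m)))
    return F * (logdet(M + eps) - logdet(M - eps)) / (2 * eps)

lam = np.diag([1.0, 1.0, 0.0])   # two messengers couple to the spurion ...
m = np.diag([0.0, 0.0, 5.0])     # ... the third has only a Susy mass term
F, M = 0.1, 2.0

# det(lam*X + m) = 5 X^2, so d/dX log det = 2/X and Lambda_G = n F/M with
# n = 2, the number of messengers coupled to the spurion:
print(lambda_G(lam, m, F, M))    # ~ 2*F/M = 0.1
```

Choosing λ and m such that det(λX + m) is instead independent of X gives Λ_G = 0 at this order (so-called gaugino screening), while Eq. (4.7b) still yields non-zero sfermion masses.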
This section is based on my works [165, 166], done in collaboration with Jörg Jäckel and Valya Khoze; the text has been mostly re-written. Unlike those works, here I use a GUT normalisation for the U(1) gauge coupling: g_1 = √(5/3) g′.

The GGM relations for the soft masses, Eq. (4.2), hold at some scale characterising the hidden sector. For models with a single (or several degenerate) explicit messenger(s) of supersymmetric mass M_mess, coupled to a Susy-breaking F-term with F ≪ M²_mess, that scale is unambiguously M_mess: the scale at which all messenger degrees of freedom are integrated out. Consider holding all high-scale parameters constant, but changing the scale M_mess at which they set the soft masses. Since this changes the amount of running required to reach the low scale, and all MSSM soft-mass beta functions are non-zero, trivially the low-scale spectrum will change. From this point of view, M_mess is an additional parameter impacting upon the low-scale GGM spectrum together with the six Λ parameters.

Let us ask a more subtle question instead. If varying M_mess with constant Λ parameters changes the low-scale spectrum, can this effect be entirely absorbed into appropriately varying Λ parameters? If this were the case, M_mess would merely parameterise a one-dimensional family of models, Λ(M_mess), with identical low-scale spectra (neglecting the precise mass of the sub-GeV gravitino). The answer to this question lies in sum rules and RG invariants (RGIs).

In GGM, the five sfermion soft masses (set in a flavour-blind manner) are given in terms of three parameters, Λ_{S;r=1,2,3}, and thus there are two sum rules [73], exact at M_mess, which can be expressed as

Tr[Y m²] = 0 ,   (5.1)
Tr[(B − L) m²] = 0 .   (5.2)

After running to the low scale, these are expected to hold for the first two generations but not the third, as they are de-tuned under RG evolution only by the Yukawas [73]. Let me say this a little differently. Setting the Yukawas to zero, the running of the sfermion soft masses depends only on the three gaugino masses; there are thus two linear combinations of sfermion mass-squared beta functions (which are always the same in the MSSM) that vanish. In GGM, the linear combinations of beta functions which vanish are the same as the linear combinations of mass-squareds which vanish at M_mess; hence the preservation of the sum rules.

In [165] I calculated the values of the sum rules at the low scale resulting from models of GGM (allowing for non-zero b(M_mess)), with their spectra calculated using SOFTSUSY 3.1.6 [167] supplemented with GGM boundary conditions. I found them to hold to O(1%) for the first two generations and for the third-generation hypercharge sum rule; only the third-generation B − L sum rule was broken strongly, by ∼ −60% over the parameter space investigated.

The goodness of the first two generations' sum rules at lower scales allows for the meaningful definition of Λ parameters which run, as opposed to being defined at the high scale only – five running mass-squareds continue to be described by three running parameters, Λ_{S,r}(Q), at a scale Q. These are defined up to arbitrary additions of the two vanishing sum rules; an intuitive choice can be arrived at in the following way. Amongst the one-loop RGIs of the MSSM (setting the Yukawas of the first two generations to zero) listed in [168] are the six shown in Table 5.

Table 5: Six one-loop RGIs of the MSSM, setting the Yukawas of the first two generations to zero, with r = 1, 2, 3 and m²_{f̃,1} a first-generation sfermion mass-squared, taken from [168]. Their values in GGM at a scale M_mess are shown. D(f̃, r) and κ_r are numerical coefficients whose values (suppressed here for clarity) can be found in that reference.

RGI      Definition                          GGM high-scale value
I_{B_r}  M_r / g²_r                          Λ_{G,r} / 16π²
I_{M_r}  M²_r + Σ_f̃ D(f̃, r) m²_{f̃,1}        [α_r(M_mess)/4π]² (Λ²_{G,r} + κ_r Λ²_{S,r})

The first three of these, I_{B_r}, show that the parameters Λ_{G,r} are themselves RGIs; they can therefore trivially be defined at lower scales by Λ_{G,r}(Q) = Λ_{G,r}. Then, since the I_{M_r} are constant, taking the expressions for their values in GGM at M_mess but allowing the α_r to run defines implicitly how the Λ_{S,r}(Q) must run to keep the I_{M_r} constant. In other words:

I_{M_r} = [α_r(Q)/4π]² (Λ²_{G,r} + κ_r Λ²_{S,r}(Q)) .   (5.3)

An example of how the Λ_{S,r}(Q) run is shown in Fig. 12.

Where has defining running Λ parameters got us? Consider taking a specific GGM model defined by Λ_{S,r}, Λ_{G,r} and M_mess, and calculating the running of Λ_{S,r}, Λ_{G,r} down to a lower scale Q. Now define another GGM model whose parameters are M_mess equal to that scale Q, and Λ_{S,r}, Λ_{G,r} equal to the running values just calculated. By construction, these two models will have the same running Λ parameters at all scales below the lower messenger mass, and thus be indistinguishable; the effect of larger M_mess / more RG running can be absorbed into the Λ parameters (I am still neglecting Yukawa couplings – this statement will shortly be revised). I illustrate this with the cartoon Fig. 13. Similarly Fig. 12 can be reinterpreted: instead of showing the running Λ_{S,r}(Q) for a single model, it shows Λ_{S,r}(M_mess) for a series of equivalent models with different M_mess.

Figure 12: Running Λ_{S;r=1,2,3}(Q) as described in the text, for a model with unified Λ_{S,r} = Λ_{G,r} = 10⁵ GeV at the messenger scale M_mess = 10¹⁴ GeV. Λ_{S,1} is shown with red crosses, Λ_{S,2} with green squares and Λ_{S,3} with blue circles.

The third-generation sfermions (and Higgs scalars), however, have non-negligible Yukawa couplings. This is why their high-scale sum rules do not hold at the low scale, and why the I_{M_r} in Table 5 are RGIs for the first (or second) generation only, not the third. While the effect of gauge-group-induced RG running can be absorbed into the Λ parameters as discussed, the Yukawa-induced splitting of the third generation from the first two cannot (likewise for the splitting of m²_{H_u} and m²_{H_d} from m²_L), as GMSB is by construction flavour-blind. Therefore an inescapable role of the messenger scale is to control the amount of this Yukawa-induced splitting. To illustrate this I chose one model to have unified Λ_{S,r} = Λ_{G,r} = 10⁵ GeV at a messenger scale M_mess = 10¹⁴ GeV, and defined a series of models with lower M_mess and Λ parameters chosen to make the models maximally equivalent (i.e. equivalent up to the unavoidable Yukawa-induced splitting, as discussed), namely with Λ_{G,r} constant and Λ_{S,r}(M_mess) defined as shown in Fig. 12. I plot the resulting masses of selected particles in Fig. 14.

First-generation sparticle masses in Fig. 14 stay constant – the effect of changing M_mess has been fully absorbed into the appropriately chosen Λ_{S,r}(M_mess), as promised. The gaugino masses also stay constant (only the gluino is shown; the bino and wino mix with the Higgsinos). Third-generation sparticle masses split from their first-generation counterparts increasingly with higher M_mess; high tan β is chosen to show this effect maximally for the stau (likewise for the sbottom, not shown) – at small tan β the stau and sbottom masses follow those of the selectron and sdown respectively.
The Yukawa-sensitive heavy Higgs65 M mess Λ S,r ( M mess )Λ S,r Λ S,r QM (cid:48) mess Λ (cid:48) S,r ( M (cid:48) mess )Λ S,r
Figure 13: The running Λ S ; r =1 , , ( Q ) for two models with different messengerscales and different values of Λ S,r at the messenger scale, but with matchingΛ
S,r ( Q ) below the smaller messenger scale M (cid:48) mess .mass also varies.The goodness of the sum rules for the first two generations, and hencethe continued parameterisation of the five mass-squareds in terms of runningΛ S,r ( Q ), allows for a potential signature for unified Λ S,r ( M mess ) = Λ S ( M mess )models. If first/second generation superpartners were discovered, and theirmasses measured and found to satisfy the sum rules, then the low-scale Λ S,r could be calculated. Discovery and measurement of the masses of gauginos(complicated by the need to determine the mixing angles with Higgsinos) wouldthen determine the running Λ
S,r ( Q ) up to high scales. If these three quantitieswere observed to unify at a single scale, this would be suggestive of a unifiedΛ S GGM model with a messenger mass at the observed unification scale. Notethat unification of running Λ
S,r ( Q ) is a separate issue from gauge-couplingunification: one may happen without the other, and the unification scales areindependent. Explicitly, from the definition of the I M r in Table 5 and thedefinition of running Λ S,r ( Q ) in Eq. (5.3), the values of Λ S,r at a low scale Q low are determined by measurement asΛ S,r ( Q low ) = 16 π κ r α r ( Q low ) (cid:88) ˜ f D ( ˜ f , r ) m f, ( Q low ) (5.4)These can be extrapolated up to any higher scale Q by replacing Q low with Q throughout, of course; however knowing the one-loop running of the sfermionmass-squareds and gauge couplings this gives explicitlyΛ S,r ( Q ) = 16 π κ r α r ( Q low ) (cid:88) ˜ f D ( ˜ f , r ) m f, ( Q low ) + M r ( Q low ) (cid:18) − α r ( Q low ) α r ( Q ) (cid:19) (5.5)66
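The extrapolation of Eq. (5.5) lends itself to a quick numerical sketch. The following illustrative script is my own and not part of the analysis of this chapter: it uses the standard one-loop MSSM beta coefficients for the gauge couplings, follows the structure of Eq. (5.5) as reconstructed above, sets κ_r to one as a placeholder, and takes all numerical inputs (the measured mass combination, gaugino mass and couplings) to be hypothetical.

```python
import math

# One-loop MSSM beta coefficients b_r for U(1)_Y (GUT-normalised), SU(2), SU(3).
B = {1: 33.0/5.0, 2: 1.0, 3: -3.0}

def alpha(r, Q, Q0, alpha0):
    """One-loop running: 1/alpha_r(Q) = 1/alpha_r(Q0) - (b_r/2pi) ln(Q/Q0)."""
    return 1.0 / (1.0/alpha0 - B[r]/(2.0*math.pi) * math.log(Q/Q0))

def lambda_S_sq(r, Q, Q_low, alpha_low, sum_Dm2, M_r, kappa=1.0):
    """Extrapolated Lambda_S,r^2(Q), following the structure of Eq. (5.5).

    sum_Dm2: the measured combination sum over sfermions of D(f,r) m^2_f
    at Q_low (GeV^2); M_r: the gaugino mass at Q_low (GeV);
    kappa: placeholder for kappa_r.
    """
    a_Q = alpha(r, Q, Q_low, alpha_low)
    bracket = sum_Dm2 + M_r**2 * (1.0 - alpha_low**2 / a_Q**2)
    return 16.0*math.pi**2 / (kappa * alpha_low**2) * bracket

# Hypothetical low-scale inputs for r = 3; in a real analysis one would scan Q
# upwards for all three r and look for the scale at which the Lambda_S,r meet.
print(math.sqrt(lambda_S_sq(3, 1.0e14, 1.0e3, 0.09, 1.0e6, 300.0)))  # GeV
```

With these (invented) inputs the extrapolated Λ_{S,3} comes out at order 10^5 GeV, the same order as the benchmark models above; the point of the sketch is only the mechanics of the extrapolation, not the numbers.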
Figure 14: The masses of selected particles for GGM models where the Λ_{S,r} are varied with M_mess as shown in Fig. 12, with Λ_{G,r} held constant at 10^5 GeV. 'Sneutrino' refers to ν̃_e.

Note that if the Λ_{S,r} really are unified at a scale M_mess, then even if the low-scale observables were measured perfectly there would still be some small error in the unification of the Λ_{S,r}(Q) extrapolated from the low scale. This is because the RGIs of Table 5 are only RGIs at one loop; equivalently, Eq. (5.5) is obtained only by integrating the one-loop beta functions. This can be considered a source of theoretical error in the calculated unification, and is illustrated in Fig. 15 for an optimistic and a pessimistic case: the true unification (obtained by using the model parameters) is compared to the result of a calculation from the exact low-scale spectrum.

In addition to the theoretical error we must consider the limited precision with which soft mass terms could be measured. In Fig. 16 I illustrate the effect of this uncertainty on the observed unification. Uncertainty in the running gauge couplings is neglected; uncertainty in the relevant soft masses – those of the gauginos and the first generation spartners – is first assumed to be 5% (top panel of Fig. 16), and then 1% for m_ũ, m_d̃ and 5% for all the others (lower panel). Squark mass measurements of O(1%) are not implausible at the LHC with 300 fb⁻¹ [169]. Unification in the former case is obscured by the errors, whereas in the latter case it is more or less visible, due to the sensitivity to the splitting m²_ũ − m²_d̃, albeit with a few orders of magnitude uncertainty in the scale. The experimental uncertainties dominate the theoretical uncertainties – compare Figs. 15 and 16.

Figure 15: Running Λ_{S,r} in two models with the unification Λ_{S,r}(M_mess) = Λ_S. Λ_{S,1} is shown in red, Λ_{S,2} in green, and Λ_{S,3} in blue. Dashed lines show Λ_{S,r}(Q) at and just below M_mess calculated with the high-scale Λ_{S,r}(M_mess) parameters: these unify exactly, highlighted with a black circle. Solid lines show Λ_{S,r}(Q) calculated from the exact low-scale spectrum, with imperfect unification shown in the red ellipse. The top panel has Λ_G = 5 × 10^… GeV, Λ_S = 2 × 10^… GeV, M_mess = 10^… GeV; the bottom panel Λ_G = Λ_S = 2 × 10^… GeV at M_mess = 10^… GeV.

Figure 16: Reconstructed values of the running Λ_{S,r} from the relevant soft masses – those of the gauginos and the first generation spartners. In the top panel these are all taken to be measured to 5%; in the bottom panel an improved precision of 1% is taken for m_ũ and m_d̃. The model has unified Λ_{S,r} = Λ_{G,r} = 10^5 GeV at a messenger scale M_mess = 10^14 GeV.

Part IV
Susy Searches At The LHC
Proton collisions at the LHC may produce Susy particles if they exist with masses around the weak/TeV scale. The final state(s) they decay into will then be enriched compared to the Standard Model; or, said another way, signal plus background is greater than background alone. In order to have this effect be as visible as possible, we typically try to find the point in phase space (see the glossary) with the largest ratio of signal plus background to background, and focus on a region of phase space around that point which is small enough to retain this large ratio, but large enough for it to correspond to a statistically significant number of events N when multiplied by our envisaged integrated luminosity L: N = L × σ, where σ has been integrated over the chosen phase space.

Once one has a rough idea of the region of phase space to focus on, further discrimination of signal plus background from background alone may be possible if there is an observable (a quantity typically calculated from the final state four-momenta) which is distributed very differently in the two cases. The classic example is the invariant mass of the sum of the four-momenta of all the particles into which a new particle decays: by definition this is a δ-function for the signal – smeared out both by the finite width of the particle and by detector resolution – but some more broadly spread continuous distribution for the background. It is thus highly desirable to calculate this observable, not only to know the mass of any particles that might be discovered this way, but because it is essentially the ideal way to discover their existence in the first place, by concentrating all of the signal into a highly visible bump on a smooth background.

Mass reconstruction is generally not possible however when the particle we're searching for decays to neutral particles that are stable on a collider timescale, as such particles are not detected.
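The classic observable just described is simple to compute from final-state four-momenta; a minimal sketch in natural units, with the two momenta below being hypothetical decay products invented for illustration:

```python
import math

def invariant_mass(momenta):
    """Invariant mass of a set of four-momenta (E, px, py, pz), natural units."""
    E, px, py, pz = (sum(p[i] for p in momenta) for i in range(4))
    return math.sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))

# Two massless, back-to-back decay products of a particle at rest
# with mass 500 (GeV):
p1 = (250.0, 0.0, 0.0, 250.0)
p2 = (250.0, 0.0, 0.0, -250.0)
print(invariant_mass([p1, p2]))  # -> 500.0
```

For signal events this quantity clusters at the parent mass, while for background it is broadly distributed, which is what makes the bump visible.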
As mentioned in Section 1.1, for R parity conserving Susy the LSP is stable, and therefore it must be neutral for consistent cosmology. Unless decays to the LSP are strongly suppressed, i.e. the next-to-LSP (NLSP) or NNLSP etc. is quasi-stable, any sparticles produced at a collider will decay to an invisible LSP inside the detector. Furthermore R parity ensures sparticles can only be produced in pairs, and so all events contain two escaping LSPs; mass peaks are thus generally unavailable in R parity conserving Susy, though see Section 8 in which the dream comes true in the specific case of boosted decays.

(Strictly, signal 'enrichment' of a final state is not a given in quantum field theory – a cross-section is the square-modulus of a sum of amplitudes A, and these may interfere destructively. It is almost always the case however that the final states resulting from the decays of BSM particles are enhanced compared to the Standard Model alone, i.e. that |A_signal + A_background|² > |A_background|².)

Without mass peaks we are often forced into cut and count style approaches: we identify the aforementioned roughly selected region of phase space which is promising, ignore (cut) the rest of phase space, and simply count events. If the number of events observed closely matches the number predicted by the background alone, then any new physics which predicts considerably more events than this can be excluded. The problem with such analyses is that we need precise knowledge of the background normalisation – the integral of the background differential cross-section over the selected phase space. To see this, note that even a modest error in the background differential cross-section, when integrated over a large phase space, could result in an uncertainty that swallows the whole signal (which may live only in a certain part of the phase space, though we don't know where).
This is to be contrasted with the presence or absence of a bump on top of a smooth function (the background), which is not sensitive to an overall scaling of the latter.

Cut and count analyses do however lead to exclusion limits which are straightforward to re-interpret in other models: they set an upper limit, at a certain confidence level which is usually chosen to be 95%, on the cross-section that any new physics can contribute to the final states and region of phase space that pass the cuts. Said more precisely, the upper limit is on the quantity σ_prod × A × ε, where σ_prod is the production cross-section for new physics, A is the acceptance, and ε is the efficiency, i.e. the ratio of the number of events that actually pass the cuts to the number that should, due to imperfections of the detector. Writing this product as σ_signal, new physics with σ_limit/σ_signal less than (greater than) one is excluded (allowed). The boundary/boundaries between such regions are often illustrated with Brazil-band plots, which I explain in a manner suitable for novices in Appendix C.
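The exclusion logic above amounts to a one-line comparison; a minimal sketch, in which σ_prod, the acceptance, the efficiency and σ_limit are all hypothetical numbers invented for illustration:

```python
def excluded(sigma_prod, acceptance, efficiency, sigma_limit):
    """95% CL exclusion test: sigma_signal = sigma_prod * A * eps vs the limit."""
    sigma_signal = sigma_prod * acceptance * efficiency
    return sigma_signal > sigma_limit

# Hypothetical model: 50 fb production cross-section, 20% acceptance,
# 85% efficiency, compared against a hypothetical 5 fb upper limit.
print(excluded(50.0, 0.20, 0.85, 5.0))  # -> True (8.5 fb > 5 fb)
```

The model-independence of the published limit is precisely what makes this re-interpretation step so mechanical: only the acceptance needs to be recomputed per model.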
To interpret a model-independent cross-section upper limit in a specific model, we need to know the model's acceptance, and so we need the distributions in phase space resulting from production and decay of our new hypothesised particles. This problem is almost always tackled by using Monte Carlo event generators to simulate large numbers of events (particle collisions in the manner of the chosen collider – here the LHC). In a nutshell, an event generator simulates a single event by considering what could happen each time particles interact (a typical event involves a huge number of interactions), and choosing one of the possibilities randomly but weighted by the physical probability which has been calculated using QFT. The user may force all of the events to go down a certain pathway on the branching tree of possibilities, which is sensible if only a subset of all possible outcomes is to be studied; a scaling of final cross-sections is then appropriate to account for this. For example, in studies of decays of the Higgs to a given final state, the user would normally force the event generator to only decay the Higgs to that final state, and multiply any final cross-sections obtained by the branching ratio for such a decay. A second example, more relevant here, is that to study decays of Susy particles, one should force the event generator to simulate only events where Susy particles are produced, rather than events where the colliding particles interact in any way possible!

Most Monte Carlo event generators work with LO matrix elements (see the glossary) for the production of Susy particles. NLO corrections can be substantial however, with K-factors of 2–3 common for total cross-sections for production of coloured sparticles. A common approach is then to calculate σ_signal = σ_prod × A (neglecting non-unit efficiencies ε ≠ 1 for the moment) at LO using event generation, and then to multiply by the global K-factor defined for the total cross-section: K = σ_NLO/σ_LO.
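The bookkeeping can be made concrete with a minimal sketch contrasting the global-K rescaling just described with a channel-by-channel treatment; the numbers below are toy values anticipating the example discussed next, and everything here is illustrative rather than part of the analysis:

```python
def nlo_signal_global(channels):
    """sigma_tot,LO * A_tot * K_tot, with A_tot and K_tot weighted by LO rates."""
    tot_lo = sum(s for s, a, k in channels)
    a_tot = sum(s*a for s, a, k in channels) / tot_lo
    k_tot = sum(s*k for s, a, k in channels) / tot_lo
    return tot_lo * a_tot * k_tot

def nlo_signal_by_channel(channels):
    """Sum over channels of sigma_i,LO * A_i * K_i."""
    return sum(s*a*k for s, a, k in channels)

# Each entry is (sigma_LO, acceptance, K-factor): a coloured-production
# channel and an electroweak one, with equal LO cross-sections.
channels = [(1.0, 1.0, 3.0), (1.0, 0.0, 1.0)]
print(nlo_signal_global(channels))      # -> 2.0
print(nlo_signal_by_channel(channels))  # -> 3.0
```

The two combinations disagree whenever acceptances and K-factors both vary across channels, which is exactly the failure mode analysed below.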
There are two ways in which this can fail as an approximation for σ_signal at truly NLO.

• High phase space dependence of K. It is clear that to convert σ_signal from LO to NLO we should multiply it by the K-factor associated with the region of phase space we are concerned with, not the K-factor associated with an integral over all phase space. The latter is only acceptable if the two K-factors are similar. (As an example of when this is not the case, see the high-p_T tail of light slepton Drell-Yan production shown in [170].)

• High dependence of both K and acceptance on particle species. σ_signal may receive contributions from the production and subsequent decay of different new particles; let me say via different channels for brevity. If different channels have different K-factors and different acceptances, then multiplying the cross-section for the full signal (including all channels) by the total K-factor is not correct, as the following toy example shows. Say our signal receives contributions from electroweak production ('EW') and coloured production ('C'), and we use a search strategy geared more towards one than the other – say vetoing leptons – giving a higher acceptance for the latter than the former: A_C > A_EW. The strength of the strong interaction means that generically K_C > K_EW; so we have channels with different acceptances and K-factors. For concreteness let us take the cross-sections to be
\[ \sigma_{C,\rm LO} = \sigma_{EW,\rm LO}, \tag{6.1} \]
\[ K_C = 3, \quad K_{EW} = 1 \tag{6.2} \]
\[ \therefore\ K_{\rm tot} = \sigma_{\rm tot,NLO}/\sigma_{\rm tot,LO} = 2, \tag{6.3} \]
where we assume interference diagrams to be negligible, so that σ_tot = σ_C + σ_EW. Take the (a priori unknown) acceptances to be A_C = 1, A_EW = 0. Monte Carlo simulation (at LO) of both channels simultaneously will generate equal numbers of events for each (as σ_C,LO = σ_EW,LO), and since A_C = 1, A_EW = 0 the total acceptance is determined to be A_tot = 0.5. The LO signal cross-section is then σ_int,LO = σ_tot,LO × A_tot; multiplying by the total K-factor, one erroneously concludes
\[ \sigma_{\rm int,NLO} \overset{!}{=} \sigma_{\rm tot,LO} \times A_{\rm tot} \times K_{\rm tot} \quad \text{in general,} \tag{6.4} \]
\[ = 2\,\sigma_{C,\rm LO} \quad \text{in this example.} \tag{6.5} \]
This should be compared with what is clearly the correct answer, obtained by simulating the two channels separately in order to determine their individual acceptances, then weighting each by its individual K-factor:
\[ \sigma_{\rm int,NLO} = \sum_{i=C,EW} \sigma_{i,\rm LO} \times A_i \times K_i \quad \text{in general,} \tag{6.6} \]
\[ = 3\,\sigma_{C,\rm LO} \quad \text{in this example,} \tag{6.7} \]
where i denotes the channel. To see more clearly the difference between the two approaches, and when they coincide, I re-write Eq. (6.4), explicitly denoting what is meant by the total acceptance and K-factor in terms of the channel-by-channel definition of these quantities:
\[ \sigma_{\rm int,NLO} \overset{!}{=} \sigma_{\rm tot,LO} \times A_{\rm tot} \times K_{\rm tot} = \left(\sum_i \sigma_{i,\rm LO}\right) \left(\frac{\sum_i \sigma_{i,\rm LO} A_i}{\sum_i \sigma_{i,\rm LO}}\right) \left(\frac{\sum_i \sigma_{i,\rm LO} K_i}{\sum_i \sigma_{i,\rm LO}}\right) = \frac{\left(\sum_i \sigma_{i,\rm LO} A_i\right)\left(\sum_i \sigma_{i,\rm LO} K_i\right)}{\sum_i \sigma_{i,\rm LO}} \tag{6.8} \]
The incorrect combined-channel approach, Eq. (6.8), coincides with the correct channel-by-channel approach, Eq. (6.6), if there is a common acceptance across all channels, A_i = A, or if there is a common K-factor across all channels, K_i = K. (Thanks to Peter Richardson for realising this in the context of our work, and to David Grellscheid for patiently explaining it to me.)

In the event generation of Section 7 I will not address the issue of phase-space dependent K-factors. I will however take into account the issue of non-universal K-factors and acceptances across different channels.

Spontaneous breaking of a global supersymmetry results in a massless fermion called the Goldstino; this is just Goldstone's Theorem applied to a broken fermionic generator. In the context of gravity Susy must be promoted to a local symmetry (supergravity), with the spin-3/2 superpartner of the graviton – the gravitino – acting as the associated gauge field.
Upon spontaneous breaking of the local supersymmetry the massless gravitino eats the would-be Goldstino to obtain mass and longitudinal components: this is called the Super-Higgs mechanism, being a close parallel of the Higgs – or more justly the Brout-Englert-Higgs – mechanism [171, 172, 173], in which a spin-one gauge boson obtains mass and a longitudinal component by eating a scalar. The transverse components of the gravitino interact only gravitationally, and so its interactions are dominantly just those of its Goldstino component (and the distinction between gravitino and Goldstino is irrelevant, with both usually denoted G̃). With Susy breaking coming from a single F-term VEV, ⟨F⟩, the resulting gravitino mass can be estimated from dimensional analysis [174, 175] as
\[ m_{3/2} \sim \langle F \rangle / M_P, \tag{6.9} \]
as it must vanish in the limit of decoupled gravity, M_P → ∞, or restored Susy, ⟨F⟩ → 0. The other superpartners however receive masses ∼ ⟨F⟩/M_mess, suppressed only by the mass M_mess of the Susy-breaking mediator. In GMSB this is much below M_P, and so the gravitino is always the LSP.

The decay of a sparticle to its Standard Model partner and the gravitino is suppressed, having decay width [6]
\[ \Gamma(\tilde X \to X \tilde G) = \frac{m_{\tilde X}^5}{16\pi \langle F \rangle^2} \left(1 - \frac{m_X^2}{m_{\tilde X}^2}\right)^4 \tag{6.10} \]
For GMSB with m_X̃ ∼ ⟨F⟩/M_mess, the suppressing factor is m²_X̃/⟨F⟩ ∼ ⟨F⟩/M²_mess, which we typically take much less than one as discussed in Section 4.2. Therefore the dominant decay cascades (see the glossary) for R parity conserving GMSB will be those where each pair-produced particle decays through a series of steps with gauge or Yukawa vertices until the NLSP is produced, and only then does the decay to the gravitino occur, as no other channel competes. GMSB collider phenomenology then depends most importantly on the NLSP species (i.e. the identity of X̃, and thus X, in X̃ → X G̃) and whether or not this decay is sufficiently suppressed as to occur outside the detector. In [176] the associated decay length for Λ_S, Λ_G models is given as
\[ L_{\rm NLSP} \sim \frac{1}{k_G^2} \left(\frac{100\ \text{GeV}}{m_{\rm NLSP}}\right)^5 \left(\frac{\sqrt{\Lambda_G M_{\rm mess}}}{10^5\ \text{GeV}}\right)^4 \times 10^{-4}\ \text{m}, \tag{6.11} \]
where k_G (the reciprocal of the parameter known as C_grav) quantifies the coupling of messengers to the Susy-breaking sector, which may be O(1) or much smaller. For Λ_G ∼ 10^5 GeV, messenger scales M_mess ≳ 10^8 GeV lead to L_NLSP ≳ 10 m and a decay outside of the detector. When k_G ≪ 1, smaller M_mess may suffice. Detector-stable NLSPs are therefore a very realistic possibility. The NLSP species in these models can be crudely characterised as a bino-like χ for Λ_G ≲ Λ_S and a charged slepton for Λ_G ≳ Λ_S (specifically a stau for large tan β and selectron-smuon-stau co-NLSP for small tan β). Detector-stable charged particles have dedicated searches; a detector-stable χ however will manifest itself via missing energy.

A Historical Detour: Comparing Early LHC Direct Searches To LEP Higgs Bounds
This section is based on my work [114], done in collaboration with David Grellscheid, Jörg Jäckel, Valya Khoze and Peter Richardson; the text has been mostly re-written.
The final state in which we initially harboured the most hope for seeing Susy at the LHC was jets, /E_T, and no leptons.

• Jets. The particles with the largest production cross-sections for a given mass are coloured particles, since QCD is the strongest interaction at the weak/TeV scale and at the LHC we collide coloured objects. Decays of coloured particles must produce other coloured particles (as the colour quantum number is conserved), which will ultimately hadronise to produce jets.

• /E_T. Susy is most often considered in the R parity conserving scenario, so that the LSP is stable and constitutes a dark matter candidate. Provided the full decay cascade is prompt (takes place inside the detector), the escape of the two LSPs gives rise to /E_T. To give large /E_T, the LSPs must carry appreciable kinetic energy (and not be back-to-back in the transverse plane), which requires the initially produced particles to be appreciably more massive than the sum of the masses of all final state particles.

• No leptons. Backgrounds are either reducible or irreducible (see the glossary). The irreducible background for /E_T consists exclusively of W and Z bosons decaying to neutrinos. A neutrino from a decaying W is accompanied by a lepton, unless it is ν_τ and the associated τ decays hadronically. Thus for new physics decays that produce /E_T without leptons, such as the squark and gluino decays q̃ → qχ and g̃ → qq̄χ, a lepton veto will generally improve the ratio of signal to (reducible) background.

Though discovery and exclusion do not have trivially connected statistics (e.g. the confidence level for discovery plus the confidence level for exclusion does not equal one!), a search strategy with strong potential for discovery will also be able to set strong exclusion limits (unless the hypothetical particle actually exists, of course) and vice versa, since both of these come if and only if the strategy has sensitivity to the hunted particle's signal.
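The statistical-sensitivity considerations above can be made concrete with a toy estimate. All numbers below are hypothetical, and s/√b is only the crudest possible significance measure, valid for large background counts:

```python
import math

def expected_events(sigma_fb, lumi_fb_inv):
    """N = L x sigma, with sigma in fb and integrated luminosity in fb^-1."""
    return sigma_fb * lumi_fb_inv

def naive_significance(n_signal, n_background):
    """Crude s/sqrt(b) estimate of the visibility of an excess."""
    return n_signal / math.sqrt(n_background)

# Hypothetical selection: 5 fb of signal over 100 fb of background,
# with 10 fb^-1 of data.
s = expected_events(5.0, 10.0)    # 50 signal events
b = expected_events(100.0, 10.0)  # 1000 background events
print(naive_significance(s, b))   # -> ~1.58
```

Tightening the cuts to improve the signal-to-background ratio helps only as long as enough events survive for the excess to be statistically meaningful, which is the trade-off described in Section 6.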
Therefore, when data in the much anticipated jets, /E_T and no-lepton (henceforth jets+/E_T) channel began coming in in earnest but without excesses over the background, strong exclusion bounds were set (and are continuing to be set, of course). In [114] I compared the strongest such bounds from 2011 – those obtained in 1.04 fb⁻¹ of data taken in 7 TeV collisions (and focussing on the ATLAS Collaboration's results [177]) – to the LEP limit on a Higgs with Standard Model-like couplings, m_h > 114.4 GeV. This was done for GMSB models parameterised by Λ_S, Λ_G, M_mess and tan β (see Part III) and for the Constrained Minimal Supersymmetric Standard Model (CMSSM, see e.g. [6]). The sign of the µ parameter was taken to be +1 throughout. The results are shown in Fig. 17, with the exclusion contours projected on the squark-gluino mass plane. Explanations and interpretations of these bounds follow.

For the GMSB case, as mentioned, the NLSP in these models is generally either a bino-like neutralino or a charged slepton, and may be quasi-stable or decay promptly. Charged sleptons, when decaying promptly, produce a gravitino (seen as /E_T) and a charged lepton; when stable, a charged non-hadronic track is seen to leave the detector. Promptly decaying bino-like neutralinos produce a gravitino usually with a photon (G̃ + Z becomes competitive for heavy neutralinos). I focussed on a detector-stable neutralino as the NLSP, which gives large /E_T that is not shared with a hard and distinctive lepton or photon, and so is most appropriate for constraining by a jets+/E_T search.

My route to obtaining the jets+/E_T constraints followed [181] closely:

• Low-scale Susy spectra were calculated from high-scale model input using SOFTSUSY 3.1.6.

• Signal events were generated with Herwig++ 2.5.1, separated into samples of (a) pair-produced squarks and/or gluinos, (b) pair-produced sleptons, neutralinos and/or charginos, and (c) one squark or gluino produced with one slepton, neutralino or chargino. (c) typically has negligible cross-section in these scenarios. (b) may have comparable cross-section to (a) but is largely eliminated by a lepton veto; given this difference in acceptance (and the obvious difference in K-factors), the argument of Section 6 shows the importance of separating these production channels.

• High-level objects (see the glossary) were defined from the final state particles using the RIVET 1.5.2 analysis framework and FastJet [183]. The cuts for each search channel were also imposed with RIVET; these cuts are detailed in Appendix D. The fraction of events passing the cuts for each channel is the acceptance.

• NLO production cross-sections were calculated with Prospino 2.1; an indication of theoretical uncertainty was given by varying the renormalisation/factorisation scale by factors of 2^±1. Multiplying the cross-section by the acceptance defined our σ_signal, which was compared to the experimentally determined σ_limit.

• The previous steps were done for the same plane in Susy model space as used by the experimental collaboration to present its own results. With a close matching of the original and reproduced exclusion contours, the reproduced search channels can be considered validated. In the present case this agreement is shown in Fig. 18, giving confidence in my reproduction of the jets+/E_T search channels (which we then made publicly available as part of the RIVET package: analysis ATLAS_2011_S9212183).

• With the previous step validating those that came before, the same procedure can then be followed for other Susy models which were not considered by the experimental collaboration.

(Thanks to Jörg Jäckel for providing the gradient of the RG-inaccessible region. I am grateful to David Grellscheid and Peter Richardson for providing the code necessary for Prospino cross-section calculation and Susy event generation with Herwig++ and RIVET on The Grid [182].)

Figure 17: Bounds from the 2011 1.04 fb⁻¹ search for Susy by the ATLAS Collaboration in the jets+/E_T final state (solid lines), and from the LEP Higgs bound m_h > 114.4 GeV (dashed lines), projected onto the plane of gluino and squark masses. Different colours show different values of tan β. The grey area is theoretically inaccessible [180] (unless tachyonic squarks are allowed at high scales). (a) Models of GMSB with a messenger scale M_mess = 10^… (10^…) GeV in the left (right) panel. For tan β = 45 there is a region where the stau becomes tachyonic, which cuts off the Higgs constraint contour. (b) The CMSSM. In the left panel I set A_0 = 0 and vary tan β; in the right panel I fix tan β = 10 and vary A_0 over the values −m_0, 0 and 2m_0. The jets+/E_T constraints are essentially independent of tan β and A_0 in the CMSSM [179].

As discussed in Ref. [177], a localised detector failure caused a loss of jet energy and "a loss of signal acceptance which is smaller than 15% for the models considered". This was taken into account here by including an efficiency factor ε = 0.85 in σ_signal ≡ σ_prod × A × ε, for four of the five different channels – all except the High Mass channel. The latter places the strongest demands on jet multiplicity (see the glossary) and p_T; as can be seen from Fig. 18, the location of the resulting exclusion contour has large uncertainties. Including an efficiency factor in this way gave a minor improvement to my agreement with the ATLAS Collaboration's bounds in the same plane.

To check exclusion by Higgs-based searches I used FeynHiggs 2.8.5 to calculate Higgs-sector masses, couplings and cross-sections, and passed these to HiggsBounds 3.5.0beta to compare with a comprehensive set of experimental limits from LEP, the Tevatron and the LHC.
HiggsBounds returns σ_signal/σ_limit for the search channel with the highest statistical sensitivity: a ratio greater than 1 indicates 95% confidence-level exclusion by at least one search. Note that this does not take into account a kind of look-elsewhere effect that arises when one checks many different independent searches for the presence of a single one indicating exclusion. A statistically thorough approach would calculate the full combined likelihood for all of the observed experimental results given the hypothesis of the model in question, and see whether this corresponds to a probability less than the critical value (nominally 0.05); this is beyond the scope of HiggsBounds.

(Regarding the large uncertainty in the High Mass channel's exclusion contour: these demanding cuts mean the acceptance is a steeply rising function of sparticle mass in the part of the plane where σ_signal ∼ σ_limit; the product of acceptance and production cross-section (with the latter always falling steeply with mass) is then fairly flat around σ_prod × A ∼ σ_limit, and a systematic shift in this product (such as an efficiency factor or σ_prod scale uncertainties) can move this flat region from having σ_prod × A ≲ σ_limit to σ_prod × A ≳ σ_limit, causing a large shift in the exclusion contour.)
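The contour-instability argument in the note above is easy to see numerically. In the toy model below, all functional forms and numbers are hypothetical: a steeply falling cross-section multiplied by a steeply rising acceptance gives a product that is nearly flat near the limit, so a modest systematic shift moves the excluded-mass contour a long way.

```python
def sigma_prod(m):
    """Toy production cross-section in fb, falling steeply with mass (GeV)."""
    return 1e6 * m**-2.3

def acceptance(m):
    """Toy acceptance for a hard-cuts channel, rising steeply with mass."""
    return min(1.0, (m / 2000.0)**2)

def excluded_mass_limit(sigma_limit, shift=1.0):
    """Largest scanned mass still excluded, for a given systematic shift factor."""
    return max((m for m in range(200, 2001, 10)
                if shift * sigma_prod(m) * acceptance(m) > sigma_limit),
               default=None)

# sigma_prod * acceptance is nearly flat near the 0.04 fb limit, so a 15%
# systematic shift moves the excluded-mass contour dramatically:
print(excluded_mass_limit(0.04))              # -> 440
print(excluded_mass_limit(0.04, shift=0.85))  # -> 260
```

For a channel whose acceptance is roughly constant, the product inherits the steep fall of the cross-section and the same 15% shift would move the contour only slightly.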
500 1000 1500 2000 2500 3000 m [GeV] m / [ G e V ] m [GeV] m / [ G e V ] m [GeV] m / [ G e V ] m [GeV] m / [ G e V ] m [GeV] m / [ G e V ] Figure 18: Validation of my reproduction of the five search channels defined inthe 1 .
04 fb − jets+ /E T search for Susy by the ATLAS Collaboration, shown inthe CMSSM plane of m and m / with tan β = 10 , A = 0. The colour ‘axis’shows σ signal in units of 1fb; the red line is the contour σ signal = σ limit . Theblack lines show the result of varying the renormalisation/factorisation scaleby factors of 2 ± . The exclusion contour derived by the ATLAS Collaborationis shown in yellow. The five search channels shown are, from left to right andtop to bottom: ≥ ≥ ≥ m eff >
500 GeV, ≥ m eff > High Mass . In the last panel the dashed white linesshow the ± σ contours for the ATLAS Collaboration’s expected exclusion.79iggs constraints were checked with HiggsBounds because, in general, lim-its are fundamentally set in terms of cross-sections. Their interpretation asmass limits is always a model-dependent process, requiring assumptions aboutcouplings and branching ratios. The LEP bound m h > . h has no BSM decay channels open. For the mod-els investigated here this was the case: the HiggsBounds result simplified to acomparison of m h with 114 . h → BSM possible, and/or with the LEP constraint being su-perseded by LHC Higgs results (which are incorporated in updated versions of
HiggsBounds ), this check will not in general be so trivial. It is facilitated by mycode available at [131] which links
SOFTSUSY to FeynHiggs and
HiggsBounds with automated plotting of the results.
The most basic point to note is that large masses for squarks and gluinosmake it easier to satisfy both jets+ /E T and m h constraints. Coloured spar-ticles being heavy helps to avoid jets+ /E T constraints because this decreasessignal production cross-section. Heavy squarks boost m h because of the stop’spositive contribution to δm h , and the fact that both GMSB and the CMSSMare flavour-blind (at the high-scale at least – this is slightly detuned by theStandard Model Yukawa couplings during RG running), tying the stop massto the general squark mass. Heavy gluinos may help boost m h because theycontribute to the negative running of A t from high to low scale, thus increas-ing the latter’s absolute value at the low scale (provided it is negative), andfor constant stop mass, m h increases with | A t | . This effect is clearly en-hanced for more RG running, and indeed we see that for heavy gluinos theLEP bound is less constraining for high messenger scale than low messengerscale: in Fig. 17(a) the dashed lines are flatter in the left panel whereas theyfall off for large m ˜ g in the right panel.In addition to the squark and gluino masses, which are dominantly set bythe mass parameters m , m / in the CMSSM and their counterparts Λ S , Λ G in these GMSB models, there are two further free parameters in each case:tan β, A in the CMSSM and tan β, M mess for GMSB. In the CMSSM casethese extra parameters were shown in [179] to have essentially no effect on thejets+ /E T constraints. As can be seen in Fig. 19, the parameters tan β, M mess Unless of course | A t | is increased beyond √ M S . In GGM however we start from A t = 0at a high scale, and boosting its low scale value with a large gluino mass and/or many decades (see the glossary) of running will simultaneously boost the low scale stop mass which alsoruns strongly with m ˜ g , generically forcing A t /M S < √ M S from A t [185]. 
This implies the existence of other vacua beside our own in which charge andcolour are broken, however cosmological constraints may allow such a situation [186].
Figure 19: Jets+/E_T exclusion for the CMSSM, and for GGM for different values of tan β and the messenger scale M. The diagonals delimiting the excluded areas in each case arise from one of the following two effects: either the NLSP (in GMSB models) changes from a neutralino to a stau, for which jets+/E_T searches have no sensitivity (in the assumed context of a collider-stable NLSP), or we simply reach a region in the gluino-squark mass plane which is theoretically inaccessible.

The values of tan β and the messenger scale do not affect the jets+/E_T constraints for GMSB; furthermore we see that, for given m_q̃ and m_g̃, the constraints are virtually the same for GMSB as for the CMSSM. All of these observations can be understood as follows. The jets+/E_T signal in these models predominantly comes from the decays q̃ → q + χ and g̃ → (q̄ q̃^(*) or q q̄̃^(*)) → q q̄ χ, which, together, depend on m_q̃, m_g̃ and m_χ. For the CMSSM and GMSB with a single Λ_G, gaugino masses unify at the GUT scale and so the mass of the gluino fixes the mass of the bino gauge eigenstate; assuming a bino-like χ this also fixes m_χ. Therefore for given m_q̃ and m_g̃, the signal (and thus the constraints) looks the same regardless of other parameters, or even of whether we are in the CMSSM or GMSB.

Note that for constant Λ_S, Λ_G, changing the messenger scale changes the low-scale mass of the squarks (though not the gluino); however, as we have seen in Section 5, this effect can be entirely absorbed into a messenger-scale dependent Λ_S. More precisely then, I mean that for constant m_q̃, with Λ_S varying appropriately, varying the messenger scale has no effect on the constraints. We have m_B̃ = (α_1/α_3) m_g̃ ≈ m_g̃/6; if χ is instead Higgsino-like, this must be because |µ| < m_B̃, and so the assumption that m_χ ≈ m_B̃ is an overestimate. However, if m_q̃ is comparable to or greater than m_g̃, a change of m_χ from m_g̃/6 downwards has little effect on the decays of q̃ or g̃. If the squarks are somewhat lighter than the gluino, then χ actually being lighter than the estimate m_g̃/6 (through |µ| making χ Higgsino-like) could potentially affect the squark decay – increasing visible and missing energy, and thus σ_signal. Note also that with gaugino mass unification the gluino mass fixes the wino mass too, which controls the extent to which left-handed squarks decay via a wino-like chargino or neutralino instead of directly to the LSP.

tan β does not affect the jets+/E_T constraints; it does however control the boundary between the qualitatively different regions where the neutralino is the NLSP and where a charged slepton is the NLSP. This is because large tan β enhances the tau Yukawa and the negative contribution it gives to the running stau mass. This particular search does not have sensitivity to a quasi-stable charged slepton NLSP and does not constrain the corresponding parameter space, hence the tan β-dependence of the straight diagonal edge to the GMSB jets+/E_T exclusion contours.

The aforementioned 'extra' parameters – tan β, A_0 in the CMSSM and tan β, M_mess for GMSB – do however affect the Higgs constraints.
• Larger M_mess is helpful if m_g̃ is also large, as discussed.
• Large tan β increases m_h at tree-level.
• A_t is driven negative by the gluino during running, and so given the benefit of sizable |A_t|/M_S at the low scale to boost m_h, the CMSSM cases A_0 = −m_0, A_0 = 0 and A_0 = 2m_0 have increasing difficulty meeting the LEP bound.
Each of these effects can be seen in Fig. 17.

Fig. 20 shows the GMSB exclusion contours in terms of the original Λ_G and Λ_S model parameters. The projection of these to the squark-gluino mass plane is Fig. 17(a). Though I have considered GMSB models with a single Λ_S common to all three gauge groups, and likewise for Λ_G, the resulting exclusion contours have some broader applicability to the six-dimensional GGM parameter space. m_h is mostly controlled by the mass of the squarks (truly the stop, but the squark masses are flavour-blind) and by A_t. In turn, the squark mass is controlled more by Λ_{S,3} (at tree-level) and Λ_{G,3} (at one-loop level) than by Λ_{S;1,2} or Λ_{G;1,2}, and similarly A_t more by Λ_{G,3} than by Λ_{G;1,2}, in both cases because of the dominance of α_3 over α_{1,2}. Therefore splittings of Λ_{G;1,2} from Λ_{G,3} and of Λ_{S;1,2} from Λ_{S,3} are subdominant parameters for setting the Higgs mass compared to Λ_G and Λ_S, and for small splittings the same contours will hold.

Similarly, the jets+/E_T signal depends most strongly on the squark and gluino masses, and thus on Λ_G and Λ_S (rather than on splittings between the contributions for different gauge groups). However, (a) an independent Λ_{G,1} may be tuned to give a bino arbitrarily close in mass to the squarks or gluino, compressing the spectrum and removing the visible energy; and (b) with six fully independent Λ parameters the NLSP has even more possible identities, many of which correspond to signals that are most strongly constrained by different searches, with the jets+/E_T search losing sensitivity. The qualitative changes contained within possibility (b) represent a strong dependence of the squark-gluino mass-plane contours set here on the decision to consider only a subset of the full GGM parameter space.

Figure 20: Exclusion contours for models of GMSB, with two different values of the messenger scale M (top versus bottom panels), and tan β = 5 (left panels) or tan β = 45 (right panels). The red solid line shows jets+/E_T constraints; the black solid lines show the result of varying the renormalisation/factorisation scale by factors of 2^{±1}. The yellow dashed line shows the LEP Higgs limit. White regions have a stau NLSP. The blue dotted line in the bottom-right panel delineates a tachyonic stau region. The colour 'axis' is σ_signal/σ_limit for the most constraining search channel.

8 Making The Most Of MET: Mass Reconstruction From Collimated Decays
This section is based on my work [124], done in collaboration with Michael Spannowsky; excepting the extended introduction, the text here follows it closely. In this section neutralinos are denoted by Ñ_i, with χ indicating a generic invisible particle.

Missing transverse energy – MET – is of great importance at hadron colliders: it is our only way of inferring the presence of neutral (collider-)stable particles χ, be they neutrinos or BSM particles. However, whenever two such particles are produced (which will always be the case if their stability is due to a Z_2 symmetry, for example), our observation only of the vectorial sum of their transverse momenta, /p_T = p_{a,T} + p_{b,T}, thwarts reconstruction of masses in the decay cascades ending with the 2χ. Popular methods for searching for heavy particles with partially invisible decays are transverse mass observables [187], M_T2 [188], razor analyses [189] and kinematic edges [190]. I will introduce the first two of these. (χ could also be directly produced, giving a final state with, at leading order, no large transverse energy, visible or invisible. The universal possibility of hard initial-state radiation allows essentially model-independent limits to be set on the direct production of new χ particles from monojet and monophoton searches. Here I will focus only on production of 2χ via a decay cascade.)

Consider events with a single leptonically decaying W, Fig. 21(a), together with any number of jets and photons (but not leptons: the sole lepton and /p_T can then be unambiguously identified with the lepton l and neutrino ν from the W). We have

m_W² = (E_l + E_ν)² − (p_l + p_ν)² = m_l² + m_ν² + 2(E_{T,l} E_{T,ν} cosh(Δy_{lν}) − p_{T,l}·p_{T,ν}),   (8.1)

where y is rapidity and E_T is transverse energy, √(m² + p_T²). At hadron colliders we do not know the neutrino's momentum in the z direction, and hence m_W is not calculable this way. A quantity we can calculate is the transverse mass m_T:

m_T² ≡ (E_{T,l} + E_{T,ν})² − (p_{T,l} + p_{T,ν})² = m_l² + m_ν² + 2(E_{T,l} E_{T,ν} − p_{T,l}·p_{T,ν}).   (8.2)

As we have just one invisible particle, p_{T,ν} = /p_T; and since it is massless, E_{T,ν} = |/p_T|; hence the calculability of Eq. (8.2). Comparing it to Eq. (8.1) we see that m_T ≤ m_W – an inequality which is violated by the finite width of the W and by detector resolution effects.

Figure 21: Cartoons of leptonic W decay (left panel) and the decay of one particle X to multiple visible and multiple invisible particles (right panel).

If the inequality were typically far from being saturated it would not be helpful for determining m_W. However the cross-section is enhanced at m_T ≈ m_W [191]:

dσ / (dm²_{eν} dm²_{eν,T}) ∝ [ Γ_W m_W / ((m²_{eν} − m_W²)² + Γ_W² m_W²) ] × [ m_{eν} / √(m²_{eν} − m²_{eν,T}) ],   (8.3)

leading to what is called a Jacobian peak in the distribution right before the end point at m_W.

Now consider a single particle X decaying into multiple visible and multiple invisible particles, Fig. 21(b). By replacing the four-momenta of the lepton and neutrino in Eqs. (8.1) and (8.2) by the four-momenta summed over the visible and invisible particles respectively, the same arguments give a transverse mass with a Jacobian peak and end point at m_X. There is one further subtlety here, however: while /p_T can still be identified with the summed p_T of the invisible particles, the transverse energy of the invisible system is not given by |/p_T|, since the invariant mass of the invisible system is in general not zero. The latter vanishes only if all the invisible particles are massless and travelling in exactly the same direction. In the absence of further information this must be assumed to be true; neglecting the invisible mass leads to an underestimate of the invisible transverse energy, thus smearing the Jacobian peak to values further below the end point at m_X.

For two partially-invisibly decaying particles, with the example of sleptons l̃ decaying to leptons and neutralinos Ñ_1 shown in Fig. 22, the transverse mass as defined in the previous paragraph can be calculated in the same way. However the result is now constrained only to be less than or equal to the mass of the two-slepton system – and if the sleptons are moving relative to each other, their combined mass is not an interesting quantity. What we would like is a transverse mass associated with each separate slepton: by definition these quantities would have end points at the slepton mass (for simplicity this example has a common mass for both decaying particles). Their calculation would require knowledge of the decomposition of /p_T into the two contributing components,

/p_T = p_{T,Ñ_1,a} + p_{T,Ñ_1,b};   (8.4)

then we would have

max{ m_T²(p_{T,l+}, p_{T,Ñ,a}), m_T²(p_{T,l−}, p_{T,Ñ,b}) } ≤ m²_l̃.   (8.5)

Figure 22: The decays of two sleptons to leptons and neutralinos – an example signal for the M_T2 variable.

Since we don't know the decomposition of /p_T, we cannot calculate the two transverse masses in the correct way. If we were to use an incorrect decomposition and calculate the LHS of Eq. (8.5), it would no longer be guaranteed to be smaller than m²_l̃. If we were to try every possible decomposition, trivially that would include the correct decomposition; the smallest value of the LHS of Eq. (8.5) obtained this way would therefore be equal to or smaller than that from the correct decomposition, which is equal to or smaller than m²_l̃. Hence

M_T2² ≡ min_{/p_a + /p_b = /p_T} ( max{ m_T²(p_{T,l+}, /p_a), m_T²(p_{T,l−}, /p_b) } ) ≤ m²_l̃.   (8.6)

In many examples M_T2 has been shown to nearly saturate the inequality, thus giving a visible end point which is useful for measuring a mass.

Here I wish to consider the circumstances under which we can do better than transverse masses and end points, by deducing the four-momentum associated with each of the two invisible χ particles separately and thence constructing mass peaks.
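The two observables introduced above, Eqs. (8.2) and (8.6), can be sketched numerically. The following is my own minimal illustration (not code from [124]); a brute-force grid scan over /p_T splittings stands in for the dedicated M_T2 minimisation algorithms used in practice:

```python
import math

def transverse_mass(pt1, phi1, m1, pt2, phi2, m2):
    """Transverse mass of a two-particle system, Eq. (8.2):
    m_T^2 = m1^2 + m2^2 + 2*(E_T1*E_T2 - pT1.pT2), with E_T = sqrt(m^2 + pT^2)."""
    et1 = math.sqrt(m1 ** 2 + pt1 ** 2)
    et2 = math.sqrt(m2 ** 2 + pt2 ** 2)
    dot = pt1 * pt2 * math.cos(phi1 - phi2)
    return math.sqrt(max(m1 ** 2 + m2 ** 2 + 2.0 * (et1 * et2 - dot), 0.0))

def mt2(ptl1, phil1, ptl2, phil2, metx, mety, n_steps=200):
    """Brute-force M_T2, Eq. (8.6): minimise, over splittings MET = p_a + p_b,
    the larger of the two transverse masses (massless chi assumed),
    scanning a simple grid of trial invisible transverse momenta."""
    best = float("inf")
    scale = math.hypot(metx, mety) + ptl1 + ptl2
    for i in range(n_steps):
        for j in range(n_steps):
            pax = scale * (2.0 * i / (n_steps - 1) - 1.0)
            pay = scale * (2.0 * j / (n_steps - 1) - 1.0)
            pbx, pby = metx - pax, mety - pay
            mta = transverse_mass(ptl1, phil1, 0.0,
                                  math.hypot(pax, pay), math.atan2(pay, pax), 0.0)
            mtb = transverse_mass(ptl2, phil2, 0.0,
                                  math.hypot(pbx, pby), math.atan2(pby, pbx), 0.0)
            best = min(best, max(mta, mtb))
    return best
```

The grid scan is coarse and O(n²); public implementations minimise far more efficiently, but the bounded behaviour of M_T2 is already visible with toy input.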
Clearly some feature of the rest of the event must suggest the correct decomposition of /p_T into p_{a,T} + p_{b,T}. If there are two well-localised visible objects that we expect, from some prior prejudice about the kinematics, to be parallel or antiparallel to the two unseen χ particles, then we have two directions in the transverse plane to give us p_{a,T} and p_{b,T}. Furthermore we can add longitudinal components to each of these two transverse vectors to make them (anti)parallel to their corresponding visible objects in three dimensions, giving approximations for p_{χ a,b}. If χ is much lighter than the particle produced in the hard scattering, i.e. at the start of the decay cascade, we can promote p_{χ a,b} to massless four-vectors; I will show that, combined with the four-vectors for the visible decay products, a strong mass peak for the initial particles can be reconstructed.

Parallel or antiparallel visible and missing energy is not worth considering only for its ease: it can arise in many circumstances. Spin correlations may make χ particles approximately (anti)parallel to other particles. Two-body decays of particles P nearly stationary in the lab frame are back-to-back: therefore in 2P → 2χ + 2vis, each χ is nearly antiparallel to one of the 'vis'. However antiparallelness, unlike parallelness, is not preserved under z-boosts of the mother particle, and the z-boost is unknown.

A promising scenario is when each χ is produced together with visible energy from the decay of a boosted particle. This will arise whenever (a) the directly pair-produced particles are appreciably heavier than whatever they decay into in the first step of the cascade, and (b) the χ are created following two or more steps.
Together these points imply that each of the two sides of the event (separated according to the mother particle) contains an intermediate particle which is boosted: the visible object(s) and the χ it ultimately decays to will be collimated.

For some examples, consider the quintessential Susy decay of a pair-produced squark to a hard jet and a light neutralino: q̃ → q + Ñ_1. There are many reasons why we might expect Ñ_1 to be unstable, decaying to visible energy and a lighter, neutral, collider-stable particle – the latter could be:
• a gravitino G̃, if Susy breaking is mediated at a low scale, i.e. some form of gauge mediation. A low mediation scale is motivated by electroweak naturalness and an automatic solution of the Susy flavour problem. See [192] for a comprehensive list of possible collider signatures.
• a pseudo-Goldstino G̃′, if more than one hidden sector independently breaks Susy and mediates it to the visible sector, as may occur in string theory or quiver gauge theories [193, 194]. See [195] for the collider phenomenology.
• a singlino S̃ in the NMSSM. See [123] for the modified collider signals.
• a new photino γ̃′, if the MSSM is extended with one or more extra U(1) gauge symmetries, as is commonly expected to arise from string compactifications.

(In this last case the decay is not really q̃ → q + Ñ_1 → … but q̃ → q + Ñ_2 → Ñ_1 + …, since new photinos/singlinos actually mix with the MSSM neutralinos. If Ñ_2 is mostly 'MSSM-like' – any mixture of Higgsino, wino and bino – and Ñ_1 is mostly singlino or a new photino, then direct decay of q̃ to Ñ_1 is suppressed relative to the two-step decay.)

Ñ_1 here is the lightest ordinary supersymmetric particle (LOSP). All of these examples have some other particle as the true LSP, and so cosmology allows a charged or coloured Susy particle to be lighter than Ñ_1 and take its place as the LOSP in the cascade q̃ → vis + (LOSP) → vis + (vis + LSP), giving different visible energy.
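The collimation claim can be checked with a back-of-the-envelope boost estimate. The masses below (a TeV-scale squark, a 100 GeV neutralino) are illustrative assumptions matching the kind of spectrum discussed here:

```python
import math

def collimation_angle(m_mother, m_intermediate):
    """For a mother at rest decaying two-body to (massless visible + intermediate),
    the intermediate carries E = (M^2 + m^2)/(2M); its own decay products are
    spread over a characteristic opening angle of order 2/gamma = 2m/E."""
    e = (m_mother ** 2 + m_intermediate ** 2) / (2.0 * m_mother)
    gamma = e / m_intermediate
    return 2.0 / gamma

# Illustrative: a 1200 GeV squark and a 100 GeV neutralino give an opening
# angle of roughly a third of a radian -- photon and gravitino collimated.
theta = collimation_angle(1200.0, 100.0)
```

As the intermediate's mass approaches that of the mother, the boost factor falls towards one and the collimation is lost, which is the degradation discussed later.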
I will elaborate on the strategy outlined in Section 8.1 in terms of a concrete example, to allow clearer reference to the particles involved in the signal: I consider the classic gauge-mediation cascade 2q̃ → 2q + 2(Ñ_1) → 2q + 2(G̃ + γ). The lightest neutralino is typically expected to be considerably lighter than the squarks in this scenario, as renormalisation-group evolution tends to drive squark masses up and the bino mass down, and the phenomenon of gaugino screening in the simplest models makes the gauginos much lighter than the scalars (see e.g. [198]). This simple observation gives a powerful handle on the signal, as yet unexploited: the gravitinos and photons are normally collimated. It is exploited as follows.

1. Uniquely decompose /p_T into p_{a,T} + p_{b,T}, which are defined to be parallel, in the transverse plane, to the two hardest isolated photons.
2. Promote p_{a,b;T} to three-vectors p_{a,b} by adding the longitudinal components required to make them parallel to each of the photons in three dimensions.
3. Promote p_{a,b} to massless four-vectors p^µ_{a,b} = (|p_{a,b}|, p_{a,b}), giving approximations for the two gravitino four-vectors. Adding each of these to the four-vector of the collinear photon gives massless approximations for the two neutralino four-vectors, p^µ_{Ñ a,b}.
4. If each neutralino Ñ_{a,b} can be paired with the 'correct' jet in the event, j_{a,b}, then taking the invariant mass of each pair reconstructs the mass of the initial squarks: M²_{a,b} = (p^µ_{Ñ a,b} + p^µ_{j a,b})².

Steps 1-2 above reconstruct the three-momenta of the two neutralinos in the same way as is done for the two τ in H → 2τ → e±µ∓ + /E_T with the collinear approximation of [199].
There, the two τ four-momenta are added together to get the mass of the single mother particle; here the four-momenta of the two neutralinos are separately added to those of other visible particles in the event to get the masses of two mother particles – step 4. (A similar final state may arise from Universal Extra Dimensions [197], though semi-invisibly decaying Kaluza-Klein photons from KK quark/gluon decays are not generally expected to be boosted; this will be important for my analysis.)

Step 4 needs a criterion for the correct way to pair each reconstructed neutralino with one of the jets in the event. The correct jet is considered to be the one produced in the same q̃ → Ñ_1 + q decay. Keeping only the two hardest jets, there are two arrangements – two ways of pairing each neutralino with a different jet. More generally one can consider the N hardest jets in the event, giving N(N−1) arrangements to choose from. Each squark is generally produced nearly at rest, therefore the neutralino and the jet into which it decays are likely to be back-to-back; the jet is also expected to be hard, with an energy of roughly half the squark's mass. Therefore one criterion is to pair the two neutralinos Ñ_{a,b} with jets j_a and j_b so as to make maximally negative the sum of dot products between the three-momenta of each neutralino and its jet:

criterion α: −( p_{Ñ,a} · p_{j_a} + p_{Ñ,b} · p_{j_b} ) maximal

If the pair-produced squarks are mass degenerate, this can also be exploited: the two reconstructed masses should coincide. This gives the second possibility for finding the right jets:

criterion β: | (p^µ_{Ñ,a} + p^µ_{j_a})² − (p^µ_{Ñ,b} + p^µ_{j_b})² | minimal

Each criterion suggests the correct jets, defining two reconstructed masses M²_{a,b} = (p^µ_{Ñ a,b} + p^µ_{j a,b})². The maximisation/minimisation above is not differential but discrete – the quantity is calculated once for each of the N(N−1) arrangements (N = 2 is optimal in the example considered) – and could potentially be incorporated into a search at the trigger level. These two criteria are not specific to neutralinos and jets: they are relevant for final states where two objects need to be paired correctly with two other objects, both being the decay products of pair-produced particles (the second criterion also requires mass degeneracy of the two mother particles). The solution chosen for this same problem in [200], for mass reconstruction of leptogluon pairs from l₈ l̄₈ → llgg, was to assign a hemisphere to each of the two hardest leptons and then pair with each lepton the hardest jet in the same hemisphere.

I considered a simplified model with squarks of the first two generations, a bino-like neutralino and a gravitino, with masses m_q̃ = 1.2 TeV, m_Ñ_1 = 100 GeV and m_G̃ = 1 eV respectively; this squark mass is at the edge of the strongest current constraints [201].
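Steps 1-4 and criterion β above can be sketched compactly. The following is my own simplified reimplementation (not the code of [131]); the four-vector conventions (E, px, py, pz) and the restriction to exactly two photons and two jets are assumptions made for illustration:

```python
import math
from itertools import permutations

def met_decomposition(met, n1, n2):
    """Step 1: solve met = a*n1 + b*n2 for the 2D unit vectors n1, n2 along
    the two hardest photons in the transverse plane. Both a and b must be
    positive for the MET to lie between the photons (the ansatz of
    gravitinos parallel, not antiparallel, to their photons)."""
    det = n1[0] * n2[1] - n1[1] * n2[0]
    if abs(det) < 1e-12:
        raise ValueError("photons are (anti)parallel in the transverse plane")
    a = (met[0] * n2[1] - met[1] * n2[0]) / det
    b = (n1[0] * met[1] - n1[1] * met[0]) / det
    return a, b

def inv_mass(p, q):
    """Invariant mass of the sum of two four-vectors (E, px, py, pz)."""
    s = [p[i] + q[i] for i in range(4)]
    return math.sqrt(max(s[0] ** 2 - s[1] ** 2 - s[2] ** 2 - s[3] ** 2, 0.0))

def reconstruct_masses(met, photons, jets):
    """Steps 1-4 with pairing criterion beta: photons and jets are massless
    four-vectors (E, px, py, pz); met is the 2-vector (x, y)."""
    pts = [math.hypot(p[1], p[2]) for p in photons]
    a, b = met_decomposition(met,
                             (photons[0][1] / pts[0], photons[0][2] / pts[0]),
                             (photons[1][1] / pts[1], photons[1][2] / pts[1]))
    # Steps 2-3: scaling a massless photon four-vector keeps it massless and
    # parallel in three dimensions -> gravitino candidates.
    grav = [tuple(a / pts[0] * c for c in photons[0]),
            tuple(b / pts[1] * c for c in photons[1])]
    # Massless neutralino candidates: each photon plus its collinear gravitino.
    neut = [tuple(g + p for g, p in zip(grav[i], photons[i])) for i in (0, 1)]
    # Step 4, criterion beta: the jet pairing making the two masses closest.
    j1, j2 = min(permutations(jets[:2]),
                 key=lambda js: abs(inv_mass(neut[0], js[0])
                                    - inv_mass(neut[1], js[1])))
    return inv_mass(neut[0], j1), inv_mass(neut[1], j2)
```

Criterion α would replace the `key` above by the (negated) sum of three-momentum dot products between each neutralino candidate and its jet; the combinatoric structure is identical.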
I calculated a full spectrum for this simplified model (all other superpartner masses are set to 2 TeV) with SOFTSUSY 3.3.4, and decay widths with Herwig++ 2.6.1. I then followed two routes to arrive at observable distributions. In the first, MadGraph 5 1.5.5 [202] supplied the matrix elements for disquark production; the subsequent decays, extra radiation, showering and hadronisation were performed by PYTHIA 6 [203]; and the detector response was simulated with PGS 4 [204]. In the second, Herwig++ was used to generate the complete event; jets were defined with FastJet 3.0.3, and the final state objects were analysed in the RIVET 1.8.1 framework. My kinematical analysis – steps 1-4 with criteria α and β above – was then applied. Code for doing this, easily generalisable to other final states, can be found at [131].

We chose basic cuts for the analysis as follows:
• At least two jets, clustered using the anti-kt algorithm [205] with size parameter 0.4. Jet candidates are required to have p_T >
30 GeV and |η| below a fixed maximum.
• At least two isolated photons with p_T >
10 GeV. On the MadGraph-PYTHIA route, PGS handles isolation. On the Herwig++ route, I considered a photon isolated based on the total transverse energy deposited inside a surrounding cone (as in the relevant searches [201, 206]), specifically at most 5 GeV inside a cone of fixed ΔR.
• A minimum and maximum azimuthal angular separation between the two hardest isolated photons, ε < Δφ_γγ < π − ε with ε = 0.
01, since photons which are exactly (anti)parallel in the transverse plane do not allow the /p_T decomposition.
• The missing energy vector /p_T should lie in between the two photons in the transverse plane (i.e. inside the smaller of the two sectors delimited by the two photon directions). This ensures that the event has /p_T corresponding to the ansatz of both gravitinos being parallel (and not antiparallel) to their photons. With this cut the kinematics are always in the 'trivial zero' of the M_T2 observable (see [207]).

Decomposition of /p_T of course requires /E_T ≠ 0; in practice this is always satisfied. I did not cut on /E_T – I analysed this particular signal not to optimise the associated cuts but simply as a demonstration of the mass reconstruction technique. In the present example more than 90% of events have /E_T > 100 GeV, and so a large requirement could be placed as in existing searches (likewise for the leading jet and photon, which are typically hard in the signal). Note that with a requirement for hard photons and /E_T there is typically very small background for new physics [208], and the priority is an observable that increases the visibility of the signal alone, ideally through a resonance.

Fig. 23 shows the results of the analysis. The initial mass is reconstructed to 10% accuracy for roughly half of the events passing the basic cuts. Multiplying by the Prospino 2 production cross-section and the acceptance – 20 fb × 0.
5 – one would expect O(100) events inside this peak in 30 fb⁻¹ at 8 TeV.

I obtained an estimate of the expected accuracy of the mass reconstruction in the case of 100 signal events in the following way. Roughly 5000 signal events were split into samples each containing 100 events. For each sample the 100 values of the calculated mass were binned, and the mid-point of the modal bin was taken to define the position of the peak and thus the reconstructed mass for that sample. The values of the mass determined from each of the ∼
50 samples then define a probability distribution quantifying how well the mass can be reconstructed from 100 events. If the events within each sample were binned into too few bins, this distribution is precise but inaccurate: all samples will agree on which is the modal bin, but since it is wide its centre may be far from the true mass. If the events within each sample were binned into too many bins, this distribution is accurate but imprecise: the modal bin is narrow, but a low count-per-bin exacerbates statistical fluctuations and may randomly shift the modal bin to one not containing the true mass. For this example, with a true mass of 1.2 TeV, about 50 bins is appropriate for 100 events. (Note that 50 bins for 100 events naively suggests 2 events per bin and thus huge relative fluctuations; however the calculated masses are not flatly distributed – they cluster around a sharp peak by construction.) With each sample binned thusly, the distribution of the values calculated from all the samples has a root-mean-square deviation from the true mass of roughly 60 GeV: determination at the 5% level.

Figure 23: The squark mass (m_q̃ = 1.2 TeV) in pp → 2q̃ → 2q + 2(Ñ_1) → 2q + 2(G̃ + γ), reconstructed with /p_T decomposition and neutralino-jet pairing as described in the text. The masses of the lightest neutralino and gravitino are m_Ñ_1 = 100 GeV, m_G̃ = 1 eV; the centre of mass energy is 8 TeV. Panels on the left (right) show the mass of the squark calculated from the leading (sub-leading) photon in each event. Upper (lower) panels pair jets with reconstructed neutralinos using criterion α (β). The blue dashed line shows events generated by MadGraph and PYTHIA, with fast detector simulation performed by PGS; the red solid line shows events generated by Herwig++.

To put this number (very) roughly in context, the ATLAS Collaboration's Technical Design Report [169] considered the signal 2 × (q̃ → χ̃₂⁰ q → l̃ l q → χ̃₁⁰ l l q) arising as part of a CMSSM (not simplified) model. Using kinematic edges, the squark mass could be determined to within 3% with orders of magnitude more events before cuts; c.f. 5% with only 200 events before cuts (100 events after cuts) for the signal and method considered here.

As my analysis makes use of hard jets arising from the decay of signal particles, it could in principle be affected by the (higher order) production of additional jets in the hard scattering. To investigate this I simulated 2q̃ and 2q̃+1jet production and combined these consistently into a single sample using the MLM matching procedure [209]. The reconstructed mass distributions are essentially identical to those of simple 2q̃ production shown in Fig. 23, which follows from the fact that my method is designed to find the two jets that look most like they have been produced by the decay of the squarks, and to discard other jets.

Criterion α can also reconstruct the masses of pair-produced non-degenerate particles. In Fig. 24 it is used to analyse the same signal as previously, but now with one squark of the first two generations given a mass different from that of the other seven (four lighter squarks and four heavier would merely result in dominant production of two of the lighter four); nevertheless production of two squarks of the same mass still has a non-zero cross-section. Thus the distribution of the larger (smaller) of the two masses calculated for each event peaks strongly at the heavier (lighter) squark mass.

The final state of the example considered has two jets and two pairs of roughly collinear photons and gravitinos. The jet could be replaced by any other visible particle – 'vis_1' – the photon too – 'vis_2' – and the gravitino by anything invisible, χ: I show this general topology in Fig. 25.
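The modal-bin mass-resolution estimate described above (splitting the events into samples of 100 and histogramming each) can be sketched as follows; the bin range, bin count and sample size below are the illustrative choices discussed in the text, not fixed features of the method:

```python
import math
import statistics

def peak_position(masses, lo=0.4, hi=2.0, nbins=50):
    """Mid-point of the modal bin of a histogram of reconstructed masses
    (in TeV); ~50 bins for 100 events, as discussed in the text."""
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for m in masses:
        i = int((m - lo) / width)
        if 0 <= i < nbins:
            counts[i] += 1
    imax = counts.index(max(counts))
    return lo + (imax + 0.5) * width

def mass_resolution(all_masses, true_mass, sample_size=100):
    """Split the events into disjoint samples, estimate the peak position of
    each, and quote the RMS deviation of those estimates from the true mass."""
    peaks = [peak_position(all_masses[i:i + sample_size])
             for i in range(0, len(all_masses) - sample_size + 1, sample_size)]
    return math.sqrt(statistics.mean((p - true_mass) ** 2 for p in peaks))
```

The bias-variance trade-off in the bin count is visible directly: widening the bins shifts every sample's answer towards the bin centre, narrowing them lets the modal bin jump between samples.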
Provided there are two semi-invisible decays which are boosted (or forced into (anti)parallel behaviour by spin correlations), the same analysis presented here should in theory have some potential for mass reconstruction. Of course if vis_1 and vis_2 are objects experimentally less clean than light-flavour jets and photons, such as b quarks or even combinations of particles, the procedure will be more difficult in practice. Searches for mass peaks in the manner presented, considering various different particle types for vis_{1,2}, could discover expected or unexpected resonances. Below, I outline how the method might be adapted as the topology is distorted and generalised further.

Figure 24: As Fig. 23 but with one squark of the first two generations given a different mass; only criterion α for jet-neutralino pairing is used. Events are generated with Herwig++.

Figure 25: The topology I consider: pair-produced particles each decay into a visible Standard Model particle vis_1 and a much lighter particle, which is thus boosted; this decays semi-invisibly into vis_2 and χ.

A Less Boosted Intermediate.
Collinearity of χ and vis_2 relies on their common mother particle being boosted; as it becomes less boosted they become less collinear. I show this effect, and the decreasing sharpness of the mass reconstruction that results, in Fig. 26 for my previous gauge-mediation example: m_Ñ_1 is increased from 100 to 400 GeV at constant m_q̃ = 1.2 TeV. If Ñ_1 is made heavier still, e.g. m_Ñ_1/m_q̃ →
1, the increasingly lethargic neutralino gives a less collimated photon-gravitino pair; indeed the two are increasingly back-to-back, and most events fail to meet the requirement that /p_T lie in between the two photons.

More Decays Of The Intermediate.
If vis_2 is several particles instead of the single photon γ I considered, e.g. a lepton pair from a boosted Ñ_i → l±l∓ Ñ_1 decay, by construction they will be collimated, and the sum of their four-momenta can be used in place of p^µ_γ in the analysis.

Figure 26: As Fig. 23, holding the squark mass at 1.2 TeV; the signals for m_Ñ_1 = 100, 200, 400 GeV are shown with red solid, green dashed, and blue dotted lines respectively. The greater m_Ñ_1, the less collinear its photon and gravitino daughters become, as shown by ΔR(γG̃) (averaged between the two γG̃ pairs) in the left panel. This worsens the mass reconstruction: the right panel shows one of the two masses found using one of the two jet-neutralino pairing criteria (all four quantities behave similarly – see Fig. 23). Events are generated with Herwig++.

More Decays Before The Intermediate.
If the directly pair-produced particles decay to a boosted intermediate and two visible particles rather than one – via two on-shell steps or a three-body decay – then each vis_1 in Fig. 25 is replaced by two particles which are not collinear. Criterion α is then not applicable but criterion β is, albeit with greater combinatorial ambiguity from the need to pair each reconstructed neutralino with two other visible objects. In this scenario the boosted intermediate is also less boosted, from sharing its energy with more particles, making its semi-invisible decay less collimated. Despite these difficulties the method is reasonably successful: for Fig. 27 I have generated events for a simplified model with pair-produced TeV-scale gluinos decaying as g̃ → q q̄ Ñ_1 (q now denoting a quark of any of the three generations), with the 100 GeV neutralino decaying to γG̃. Neutralino-jet pairing is performed with criterion β, generalised in the obvious way to include four jets rather than two.

Other Combinatoric Complications.
If vis_1 = vis_2, e.g. if in my former example photons were replaced by jets or jets by photons (but not both of these at once), then there would be a combinatoric ambiguity not just in pairing the reconstructed boosted intermediate with the correct vis_1, but also in which two particles define the initial /p_T decomposition directions. The requirement that /p_T lie in between the two visible particles onto which it is decomposed eliminates some of the possible decomposition configurations; for the rest, criterion β can be generalised to be an optimisation over decomposition configurations as well as pairing possibilities.

Figure 27: The gluino mass in pp → 2g̃ → 2(q q̄) + 2(Ñ_1) → 2(q q̄) + 2(G̃ + γ), reconstructed with /p_T decomposition and neutralino-jet pairing criterion β as described in the text. The masses of the lightest neutralino and gravitino are m_Ñ_1 = 100 GeV, m_G̃ = 1 eV (the masses of other particles are set at 2 TeV); the centre of mass energy is 8 TeV. Panels on the left (right) show the mass of the gluino calculated from the leading (sub-leading) photon in each event. The blue dashed line shows events generated by MadGraph and PYTHIA, with fast detector simulation performed by PGS; the red solid line shows events generated by Herwig++.

More Than Two Invisible Particles.
With a third χ in the final state which is expected to be (anti)parallel to one of the first two, our ansatz for the topology still contains only two invisible directions and we can uniquely decompose the observed /p_T. If the two invisible particles that are (anti)parallel have come from the decay of the same particle, we only need to know the sum of their momenta and so we can reconstruct the mass as before. However if they have come from the decays of two different particles, then we need their individual momenta for mass reconstruction; knowing only their sum, the masses we wish to calculate are under-constrained by one parameter. Another possibility is 3χ in the signal final state with three different expected directions: there are then three vectors in the transverse plane into which /p_T can be decomposed, with any two of the three giving a unique decomposition. There are three ways to choose two vectors from the three. We may have /p_T in between the two vectors in 0, 1 or 2 of the three ways (neglecting the possibility of exact collinearity between /p_T and one of the vectors). If 0, we veto. If 1, there is a unique decomposition. If 2, /p_T can be expressed as some amount of one of the decompositions plus some amount of the other, with the two coefficients constrained to sum to unity: the masses we wish to calculate are under-constrained by one parameter. One response, not physically motivated, would be to veto. Another possibility could be to set the two coefficients based on some other prejudice about the kinematics, such as making an intermediate particle of known mass maximally on-shell. Which of these three cases (0, 1 or 2 of the possible decompositions being acceptable) we have will vary on an event-by-event basis.

Appendix A  The Higgs As A Pseudo-Nambu-Goldstone Boson ('Little Higgs')

See [210] for a review and original references, also [211, 212].
When a continuous global symmetry G is spontaneously broken to a subgroup H, Goldstone’s Theorem tells us that dim(G) − dim(H) Nambu-Goldstone bosons (NGBs) result. (Mild explicit breaking, i.e. from small terms in the Lagrangian, results instead in pseudo-NGBs (pNGBs), which are light but not massless.) If G contained a subgroup F which was gauged, and which gets broken down to I ≡ F ∩ H, then dim(F) − dim(I) of the NGBs are eaten to give masses to all of the bosons associated with the generators of the (entirely broken) F/I coset. This leaves (dim(G) − dim(H)) − (dim(F) − dim(I)) NGBs. I should be (or contain) the Standard Model SU(2)_L × U(1)_Y; and amongst the NGBs we want there to be (a) three degrees of freedom that will ultimately be eaten by the W and Z bosons, as in the Standard Model, but also (b) a degree of freedom corresponding to the Higgs. In this way we have made the Higgs light; unfortunately we have made it massless, however – one thing at a time.

Now let us add a term to the Lagrangian which involves the Higgs, respects H, but breaks G. The Higgs is now a pNGB rather than an NGB. However we run straight back into the hierarchy problem: through such a term we have a one-loop quadratic divergence, previously forced to be zero by virtue of the symmetry only being broken at the level of the vacuum and not the Lagrangian. So instead of explicit breaking with one term, we can use two: we add to the Lagrangian λ₁L₁ + λ₂L₂ which only break G to H when they are present together, a phenomenon we call collective symmetry breaking. With only one of the terms, there is a sufficient amount of symmetry broken only spontaneously and not explicitly that the Higgs is still an NGB. With both terms it is a pNGB, and we do have quadratic divergences as we must with explicit symmetry breaking, but only at two loops: δm_H² ∼ (λ₁²/16π²)(λ₂²/16π²)Λ², since by construction all corrections identically vanish when one coupling vanishes even if the other does not.
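As a concrete arithmetic check of this counting, the following snippet uses, purely for illustration, the group dimensions of the standard SU(5) → SO(5) ‘littlest Higgs’ coset covered in the review [210]; the snippet itself is my own sketch, not taken from that reference.

```python
# Nambu-Goldstone counting for global G -> H with gauged F broken to I:
# illustrative numbers for the SU(5) -> SO(5) "littlest Higgs" coset,
# with gauged F = [SU(2) x U(1)]^2 broken to the electroweak I = SU(2) x U(1).

def dim_su(n):  # dimension of SU(N)
    return n * n - 1

def dim_so(n):  # dimension of SO(N)
    return n * (n - 1) // 2

dim_G = dim_su(5)            # 24
dim_H = dim_so(5)            # 10
n_ngb = dim_G - dim_H        # NGBs from the spontaneous breaking G -> H
dim_F = 2 * (dim_su(2) + 1)  # two copies of SU(2) x U(1): 8
dim_I = dim_su(2) + 1        # one SU(2) x U(1): 4
n_eaten = dim_F - dim_I      # eaten by the gauge bosons of the F/I coset
n_left = n_ngb - n_eaten     # surviving (p)NGBs

print(n_ngb, n_eaten, n_left)  # 14 4 10
```

The 10 surviving degrees of freedom comprise a complex doublet (containing the Higgs) and a complex triplet.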
The one-loop divergence due to the top loop is cancelled by a similar diagram containing in the loop a fermionic partner of the top – a partner which is a necessary part of such a setup. The expected cutoff for naturalness is then postponed to 10 TeV rather than 1 TeV. We must then ask what comes next. Perhaps nothing more until the Planck scale, if our global symmetry G was broken spontaneously by a weakly coupled scalar whose vacuum expectation value v_{little Higgs} is, while larger than v_SM = 174 GeV, still mysteriously smaller than M_P. There could be a tower of stacked little Higgs models – each symmetry-breaking scalar being the pNGB of a different symmetry at higher scales – as discussed in [213]. In this case, while the scalar at the bottom of the stack (which plays the role of the Standard Model Higgs) encounters the quadratically divergent correction at progressively higher orders, increasing numerical factors multiplying the term do not allow escape from the hierarchy problem. Alternatively one of the three traditional natural theories may come into play above this cutoff – extra dimensions, compositeness, or Susy.

Appendix B  Optimal Naturalness Beyond Leading Log δm²_{H_u}

The leading log expression for δm²_{H_u} is obtained by ignoring the scale dependence of A_t and M_S; to do better we can integrate A_t and M_S over their varying higher-scale values.
First consider the running of A_t and M_S² with arbitrary self and mutual couplings, as well as couplings to other particles:

\frac{d}{dt}\begin{pmatrix} M_S^2(t) \\ A_t^2(t) \end{pmatrix} = \begin{pmatrix} a(t) & b(t) \\ 0 & c(t) \end{pmatrix} \begin{pmatrix} M_S^2(t) \\ A_t^2(t) \end{pmatrix} + \text{other running soft-mass terms} \quad (B.1)

\therefore \begin{pmatrix} M_S^2(t) \\ A_t^2(t) \end{pmatrix} = \begin{pmatrix} d(t) & e(t) \\ 0 & f(t) \end{pmatrix} \begin{pmatrix} M_S^2(0) \\ A_t^2(0) \end{pmatrix} + \text{other high-scale soft-mass terms,} \quad (B.2)

where a, b, c are running couplings and d, e, f are related to the former by integration, and the lower-left entry of the matrix must vanish since A_t appears in the Lagrangian, not A_t². Note that if the other soft-mass parameters themselves run due to A_t and M_S, this feeds back into Eq. (B.2) as corrections to the coefficients d, e, f suppressed by an extra loop factor, which could have an impact but I will neglect this for simplicity. Integrating the m²_{H_u} beta function (2.2), keeping just the stop-sector terms as before but now including their scale dependence (and that of the top Yukawa) as in (B.2), we have

\delta m_{H_u}^2(t) = -\frac{3}{8\pi^2} \left( 2 M_S^2(0) \int_{t'=0}^{t'=t} y_t^2(t')\, d(t')\, dt' + A_t^2(0) \int_{t'=0}^{t'=t} y_t^2(t')\, \big(2e(t') + f(t')\big)\, dt' \right) + \text{other high-scale soft-mass terms} \quad (B.3)

For Lagrange constrained optimisation, Eq. (2.5), we must differentiate δm²_{H_u} and δm²_h with respect to A_t and M_S. One can either invert Eq. (B.2) and substitute into Eq. (B.3) to obtain δm²_{H_u} instead as a function of the low-scale stop parameters, or all derivatives can be taken with respect to the high-scale stop parameters (using the chain rule for δm²_h, whose arguments should be evaluated at the low scale).
Both approaches give the same result, as they must:

2 f(t) \left( \int_{t'=0}^{t'=t} y_t^2(t')\, d(t')\, dt' + d(t)\, y_t^2(t)\, \frac{A_t(t)}{2M_S(t)} \right) \frac{\partial(\delta m_h^2)}{\partial A_t} = \left( d(t) \int_{t'=0}^{t'=t} y_t^2(t')\,\big(2e(t') + f(t')\big)\, dt' - e(t) \int_{t'=0}^{t'=t} y_t^2(t')\, d(t')\, dt' \right) \frac{\partial(\delta m_h^2)}{\partial M_S} \quad (B.4)

The leading log relation is recovered for (d(t), e(t), f(t)) = (1, 0, 1), y_t(t) = y_t(0). So what are these functions d(t), e(t), f(t) in the MSSM? Expressions for the one-loop running parameters can be written down when all Yukawa couplings except that of the top are set to zero [214, 215, 216]. y_t(t) and A_t(t) do not require numerical integration if one also sets the U(1) and SU(2) gauge couplings to zero: one finds

y_t^2(t) = y_t^2(0)\, \xi^{-16/9}(t)\, G^{-1}(t; -16/9) \quad (B.5)

A_t(t) = G^{-1}(t; -16/9) \left[ A_t(0) + \frac{16}{9} M_3(0) \left( G(t; -16/9)\, \xi^{-1}(t) - G(t; -25/9) \right) \right] \quad (B.6)

where

\xi(t) = 1 + \frac{3}{2\pi} \alpha_3(0)\, t, \qquad G(t; n) = 1 - \frac{3}{4\pi^2}\, y_t^2(0) \int_0^t dt'\, \xi^n(t').

From (B.6) we can read off that f(t) = G^{-2}(t; -16/9). In this same scheme for extracting running parameters, the stop mass necessarily involves numerical integration. However RG-induced splitting of the stop from the lighter-generation up-type squarks is typically small (and if not, the model is necessarily unnatural, as mentioned in Section 1.4), so that the running stop mass is well approximated by its high-scale value plus the gluino-induced term, the latter easily obtained by integrating the one-loop running gluino mass:

M_S^2(t) = M_S^2(0) + \frac{8}{9} M_3^2(0) \left( \frac{\alpha_3^2(t)}{\alpha_3^2(0)} - 1 \right) \quad (B.7)

This gives d(t) = 1, e(t) = 0. Eq.
(B.4) is then

2 G^{-2}(t; -16/9) \left( \int_{t'=0}^{t'=t} \xi^{-16/9}(t')\, G^{-1}(t'; -16/9)\, dt' + \xi^{-16/9}(t)\, G^{-1}(t; -16/9)\, \frac{A_t(t)}{2M_S(t)} \right) \frac{\partial(\delta m_h^2)}{\partial A_t} = \int_{t'=0}^{t'=t} \xi^{-16/9}(t')\, G^{-3}(t'; -16/9)\, dt' \; \frac{\partial(\delta m_h^2)}{\partial M_S} \quad (B.8)

The integrals can be done analytically and the resulting root, x_natural, found; I do not plot it as it is essentially indistinguishable from the one shown in Fig. 3, even for very high mediation scales Λ. Thus my attempt at an approximate RG improvement of δm²_{H_u} (resumming all the logs that come with appreciable coupling constant factors) makes no difference to the result obtained from the leading log expression.

An alternative approach to this approximate RG improvement would be to work consistently at Next-to-Leading-Log (NLL) order for δm²_{H_u}. Barbieri-Giudice fine-tuning measures are given at NLL in [68], from which one can extract the dependence of δm²_{H_u} on any trilinear mixing term or sfermion mass-squared via

\delta m_{H_u}^2(A_i) = \int \left( \frac{M_Z^2}{A_i}\, Z_{A_i} \Big|_{\tan\beta \to \infty} \right) dA_i \quad (B.9)

\delta m_{H_u}^2(m_{\tilde f}^2) = M_Z^2\, Z_{m_{\tilde f}^2} \Big|_{\tan\beta \to \infty} \quad (B.10)

where Z_{p_i} ≡ ∂(log M_Z²)/∂(log p_i). Note that δm²_{H_u} and m²_{f̃} both having mass dimension 2 results in Z_{m²_{f̃}} ∝ m²_{f̃}, giving the simpler expression (B.10). Z_{A_i} however can contain further mass scales beyond A_i; indeed for the stop, Z_{A_t} contains terms with the gaugino masses and with A_b. In other words, at NLL δm²_{H_u} depends on A_t not only through an A_t² term but also through terms of the form A_t times a gaugino mass and A_t A_b. In the spirit of connecting δm²_{H_u} and the Higgs mass to the stop sector in isolation, I will not explore this effect here.
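The one-loop running functions above are straightforward to evaluate numerically. A minimal sketch, assuming the forms of ξ and G as quoted in this appendix; the high-scale inputs ALPHA3_0 and YT_0 are placeholder values of my own choosing, not fits:

```python
import numpy as np

# Assumed (illustrative) high-scale inputs:
ALPHA3_0 = 0.09   # strong coupling alpha_3(0)
YT_0 = 0.95       # top Yukawa y_t(0)

def xi(t):
    """xi(t) = 1 + (3/2pi) alpha_3(0) t."""
    return 1.0 + (3.0 / (2.0 * np.pi)) * ALPHA3_0 * t

def G(t, n, steps=2000):
    """G(t; n) = 1 - (3/4pi^2) y_t^2(0) * integral_0^t xi^n(t') dt',
    evaluated with a simple trapezoid rule."""
    if t == 0.0:
        return 1.0
    tp = np.linspace(0.0, t, steps)
    f = xi(tp) ** n
    integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tp)))
    return 1.0 - (3.0 / (4.0 * np.pi ** 2)) * YT_0 ** 2 * integral

def yt_sq(t):
    """Running y_t^2 per Eq. (B.5): y_t^2(0) xi^{-16/9}(t) G^{-1}(t; -16/9)."""
    return YT_0 ** 2 * xi(t) ** (-16.0 / 9.0) / G(t, -16.0 / 9.0)

# Sanity check: at t = 0 nothing has run, so y_t^2 equals its input value.
print(yt_sq(0.0) == YT_0 ** 2)  # True
```

The same quadrature can be reused for the integrals appearing in (B.8).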
Appendix C  Brazil-Band Plots For Dummies

Brazil-band plots are used to present exclusion limits for the existence of new particles whose decays are uniquely determined as a function of their mass. Before any data is collected, one can calculate the expected σ_{95% CL}/σ_signal for any given mass m of the new particle. σ_signal is defined as in Section 6.1 (σ unfortunately also denotes uncertainty/error in predictions and measurements). I will refer interchangeably to numbers of events N and cross-sections σ as they differ only by a factor of the integrated luminosity L, through N = σL. σ_{x% CL} is the cross-section excluded at x% confidence level: roughly, the signal cross-section for which the probability of signal plus background fluctuating as low as the observed value is less than (100 − x)% (a statement to be clarified shortly). x = 95 is common in particle physics. The expected σ_{95% CL} refers to the limit that would be set if the number of events observed coincided with the number of events expected from the background alone.

Let’s develop a feel for how the expected σ_{95% CL}/σ_signal will behave. We predict a certain number of events from the background N_b and from the signal N_s, and there is uncertainty in both predictions. Statistical error, because both follow from probability distributions. Systematic error, because the experiment may consistently mismeasure something, and the theoretical estimate is made to a finite order in perturbation theory and so is consistently a little off. Now, if the predicted number of signal events is roughly less than the combined uncertainty in the background and in the signal, then observing only the expected number of background events N_b and no more still wouldn’t allow you to sensibly exclude the presence of signal – it’s sufficiently small as to be compatible with N_b events, within the errors. In this case we have σ_{95% CL}/σ_signal >
1. On the other hand if the predicted number of signal events is roughly greater than the combined uncertainty, then the presence of the signal ought to be visible above the background; in this case observing only N_b events and no more would be compelling evidence for no signal, giving an exclusion: σ_{95% CL}/σ_signal < 1.

The plot also shows the ±1σ and ±2σ uncertainties in the expected exclusion – the edges of the green and yellow regions respectively. Where do these come from? We have a probability distribution for the number of background events (with an expected value N_b). For each possible number of observed events, we would set a different limit, because more (fewer) events make the signal look more (less) likely to exist. Calculating the resulting limit as a function of the number of events then lets us transform the probability distribution for the number of background events into a probability distribution for the resulting limit. I am now in a position to clarify a previous statement – that the expected limit is the limit that would result from observing N_b events – more specifically it is the median of the distribution I have just described. The ±1σ and ±2σ uncertainties on the expected limit are the ±1σ and ±2σ points in this distribution (which doesn’t have to be Gaussian – we just mean the points with the same cumulative probability as the ±1σ and ±2σ points in the standard Gaussian). (A footnote: if n extra parameters were needed to fully specify the decays, n extra dimensions would be needed to plot the exclusion results. The Higgs boson of the Standard Model and the W′, Z′ of the Sequential Standard Model, for example, all handily have n = 0.) In a nutshell: the uncertainty in the expected limit is tied to the uncertainty in the expected number of background events, and observing more (fewer) events means a weaker (stronger) limit.

Once the experiment has actually been done, one can set observed σ_{95% CL}/σ_signal limits by seeing how much the signal plus background prediction deviates, relative to the uncertainties, from the observed number of events N_o (i.e.
N_o now plays the role that N_b did for expected limits). Limits that are weaker (stronger) than expected will be set where more (fewer) events are observed than expected from the background alone.

[Figure 28: A (purely fictional!) Brazil-band plot for a search for a new particle that actually exists, with mass m = 100 GeV. The vertical axis is σ_{95% CL}/σ_signal, the horizontal axis mass/GeV; the bands show the 1σ and 2σ uncertainties on the expected limit, with the expected and observed curves overlaid. A discussion of the relevant features is in the text.]

In Fig. 28 I show a fictional Brazil-band plot for illustrative purposes. What information is shown? We can see that the exclusion expected was the mass range m <
240 GeV (the ‘expected’ curve is below 1 here). The observed exclusion is something like m <
95 GeV, 105 GeV < m <
215 GeV, 272 GeV < m <
278 GeV. The small region of exclusion around 275 GeV wasn’t expected – we call this lucky exclusion – it has arisen due to a negative fluctuation of the background. The region 215 GeV < m <
240 GeV was expected to be excluded, but wasn’t: the background has fluctuated upwards, but not dramatically: at most a 2σ effect. Close to 100 GeV however, there is a narrow region where the exclusion is very much worse (i.e. the number of events observed is much higher) than expected. This is strong evidence for a new particle of roughly that mass! The width of the excess gets contributions from the mass resolution and bin width of the relevant search channel, and the particle’s own fundamental width, being dominated by whichever is largest. All three of these factors will smear out an otherwise delta-function peak in the invariant mass distribution of the particle’s decay products. Note that all the way along the plot the exclusion (and by implication the background) fluctuates above and below what was expected, but it’s only in places where the expected value of σ_{95% CL}/σ_signal is one side of 1 and the observed value is on the other that the result of exclusion or non-exclusion deviates from expectation.

For completeness I’ll mention ‘blue-band’ or signal strength plots, which become useful when we have not just exclusion but also a (tentative) discovery. The signal strength, often denoted µ, is the factor by which the signal cross-section should be multiplied for best agreement with the data. There are a number of points worth noting about µ.

• An exact expression for µ, based on maximising the likelihood function for obtaining all of the observed data, would be complicated. However for a given mass value m that predicts N_s signal events, with N_b events expected from the background and N_o events observed (with N_s, N_b and N_o referring to events in the vicinity of the signal at m, not elsewhere), one can think of µ as being given by N_o ≈ N_b + µN_s.

• µ = 0 ⇔ perfect description of the data by the background alone. µ = 1 ⇔ perfect description of the data by the background plus signal, with no modification of the signal.
Of course for either of these two statements to be meaningful requires small error bars on µ: σ_µ ≪ 1.

• While physical cross-sections are positive semi-definite, µ is negative whenever fewer events are observed than predicted for the background alone.

• How does µ relate to the Brazil-band plot? The approximate relation N_o ≈ N_b + µN_s shows that µ is positive (negative) where more (fewer) events are observed than expected from the background alone. Therefore

Sign(µ) ≈ Sign\left( \frac{σ_{95% CL, observed}}{σ_signal} − \frac{σ_{95% CL, expected}}{σ_signal} \right), \quad (C.1)

or in words, the sign of µ generally follows the sign of the fluctuation of the observed limit from what was expected. Knowing that a given mass value is not excluded, the corresponding µ can in principle take any value. All positive values are possible since the lack of exclusion could be due to the presence of a real signal, but with arbitrarily modified strength. All negative values are possible since we might be dealing with a deficit of observed events, but an insignificant deficit relative to our uncertainties. Knowing that a given mass value is excluded, the corresponding µ must be less than one, since exclusion refers specifically to the µ = 1 case. Therefore we can make one inference going the other way, from blue band to Brazil band: µ ≥ 1 implies no exclusion, µ < 1 being necessary (though not sufficient) for exclusion.

I said above that σ_{95% CL} is the signal cross-section for which the probability of signal plus background fluctuating as low as the observed value, denoted p_{s+b}, is less than 5%. However sometimes, by chance, N_o will fluctuate so low that even the background-only hypothesis H_b is a poor description of the data. If this is the case, we should consider the signal plus background hypothesis H_{s+b} excluded only if it does a much worse job even than H_b at describing the data. If it is merely equally poor, the data isn’t helping us to distinguish whether or not there is any signal. Considering the signal to be excluded in this case based on p_{s+b} < 0.
05 would be a spurious exclusion, and we can protect ourselves against this using the CL_s method [217, 218]. CL_s ≡ p_{s+b}/(1 − p_b), with p_b the probability of the background alone contributing at least as many events as observed; and we now require CL_s < 0.
05 instead of p_{s+b} < 0.05. If H_b and H_{s+b} do comparably poor jobs at describing a deficit in events, p_{s+b} ∼ 1 − p_b and CL_s remains greater than 5%. By construction, the CL_s method prevents cross-section limits becoming too small, and so the ±σ uncertainty in the expected limit will be asymmetric – smaller in the negative direction than in the positive direction. This sometimes has the curious effect of counterbalancing the asymmetry induced by plotting σ_{95% CL}/σ_signal on a logarithmic scale!

I mentioned that the width of the strong excess in Fig. 28 is given by a combination of the experimental mass resolution, bin width, and the hypothesised particle’s width. Indeed the width of any excess or deficit in the plot, significant or not, should be governed by this same combination; call it ∆m. (Note that ∆m may change with m: the physical width can obviously change, and particularly if we combine different search channels in one plot the mass resolution and bin width can jump whenever we pass between regions where different search channels are most sensitive.) As an example, consider a deficit of events observed in the bin containing the mass value M, so that we have stronger constraints on the particle near m ∼ M, giving a dip in the Brazil-band plot at that point. Since the mass of the decay products will be smeared out by ∆m, all points in the mass range |m − M| ≲ ∆m will feel this local strong constraint, and the dip in the plot will be ∼ ∆m wide. The same logic applies for a local excess giving a peak in the plot ∼ ∆m wide. (Note that in making Fig. 28 I was not careful about giving the random fluctuations a reasonably consistent width.) Fluctuations a bit broader than ∆m will occur, but the broader a fluctuation is relative to ∆m, the less likely it is that the source is a statistical fluctuation: overall we expect correlations over scales ∼ ∆m. (Thanks to the other participants of the Cargèse International School 2012 for posing this question and to Glen Cowan for answering it!)
For much larger correlations / broader fluctuations, we are likely to conclude that the background was inaccurately modelled in that mass range, so that the (background-only) events we observed were consistently above or below the prediction.

Appendix D  Cuts For The ATLAS Collaboration’s 1.
04 fb^{-1} Search For Susy With Jets And Missing Energy
The cuts that were chosen for this analysis [177], and which I reproduced in the
RIVET analysis
ATLAS_2011_S9212183, are as follows:

• Jet candidates are reconstructed using the anti-kt jet clustering algorithm [205] with a distance parameter of 0.
4. Candidates with p_T <
20 GeV are discarded.

• Electron (muon) candidates are required to have p_T >
20 GeV (10 GeV) and |η| < 2.
47 (2.4).

• Jet candidates within a distance ∆R = 0.2 of an electron candidate are discarded; thereafter, electron and muon candidates within ∆R = 0.4 of a remaining jet candidate are discarded, as are jet candidates with |η| > 2.8.

• The event is vetoed if there are any electrons or muons with p_T >
20 GeV.

• Thereafter five separate search channels are defined, each with its own requirements on hadronic activity, m_eff and /E_T/m_eff, summarised in Table 6. The effective mass m_eff is calculated as the sum of /E_T and the p_T of the two, three or four hardest jets used to define the search channel. In the high mass channel, all jets with p_T >
40 GeV are used to compute the m_eff value used in the final cut. The ∆φ cut is only applied up to the third leading jet.

[Table 6 is garbled in extraction: for each of the five signal regions it lists the requirements on /E_T, on the p_T of the leading jets, on ∆φ(jet, /p_T)_min, on /E_T/m_eff and on m_eff; the numerical values are given in the source.]

Table 6: Cuts for the ATLAS Collaboration’s 1.
04 fb^{-1} jets+/E_T search for Susy, lifted straight from the source [177] (m_eff, /E_T and p_T in GeV).

References

[1] G. ’t Hooft, NATO Adv.Study Inst.Ser.B Phys., 135 (1980).
[2] R. Kaul, Naturalness and electro-weak symmetry breaking, http://theory.tifr.res.in/stringslhc/talks/naturalness-ewsb.pdf.
[3] L. Randall and R. Sundrum, Phys.Rev.Lett., 3370 (1999), hep-ph/9905221.
[4] R. Contino, Y. Nomura, and A. Pomarol, Nucl.Phys. B671, 148 (2003), hep-ph/0306259.
[5] K. Agashe, R. Contino, and A. Pomarol, Nucl.Phys.
B719 , 165 (2005),hep-ph/0412089.[6] S. P. Martin, (1997), hep-ph/9709356.[7] M. Drees, (1996), hep-ph/9611409.[8] A. Signer, J.Phys.
G36, 073002 (2009), 0905.4630.
[9] S. Coleman and J. Mandula, Phys. Rev., 1251 (1967).
[10] R. Haag, J. T. Łopuszański, and M. Sohnius, Nuclear Physics B, 257 (1975).
[11] D. Boulware, S. Deser, and J. Kay, Physica A: Statistical Mechanics and its Applications, 141 (1979).
[12] T.-P. Hack and M. Makedonski, Phys.Lett. B718, 1465 (2013), 1106.6327.
[13] G. Bertone, D. Hooper, and J. Silk, Phys.Rept., 279 (2005), hep-ph/0404175.
[14] R. Peccei and H. R. Quinn, Phys.Rev.Lett., 1440 (1977).
[15] R. Peccei and H. R. Quinn, Phys.Rev. D16, 1791 (1977).
[16] ATLAS Collaboration, G. Aad et al., Phys.Lett.
B716 , 1 (2012),1207.7214.[17] CMS Collaboration, S. Chatrchyan et al. , Phys.Lett.
B716 , 30 (2012),1207.7235.[18] G. F. Giudice and A. Strumia, Nucl.Phys.
B858, 63 (2012), 1108.6077.
[19] G. F. Giudice, M. A. Luty, H. Murayama, and R. Rattazzi, JHEP, 027 (1998), hep-ph/9810442.
[20] J. D. Wells, (2003), hep-ph/0306127.
[21] N. Arkani-Hamed and S. Dimopoulos, JHEP, 073 (2005), hep-th/0405159.
[22] G. Giudice and A. Romanino, Nucl.Phys.
B699 , 65 (2004), hep-ph/0406088.[23] P. J. Fox et al. , (2005), hep-th/0503249.[24] P. Fayet and J. Iliopoulos, Phys.Lett.
B51, 461 (1974).
[25] S. Dimopoulos and H. Georgi, Nuclear Physics B, 150 (1981).
[26] N. Sakai, Zeitschrift für Physik C Particles and Fields, 153 (1981).
[27] P. Bechtle et al., (2012), 1211.1955.
[28] A. Arbey, M. Battaglia, A. Djouadi, and F. Mahmoudi, (2012), 1211.4004.
[29] P. Bechtle et al., (2013), 1301.2345.
[30] S. Ferrara, L. Girardello, and F. Palumbo, Phys. Rev. D, 403 (1979).
[31] R. Barbieri and G. Giudice, Nuclear Physics B, 63 (1988).
[32] B. Allanach and M. Parker, (2012), 1211.3231.
[33] R. Kitano and Y. Nomura, Phys.Lett. B631, 58 (2005), hep-ph/0509039.
[34] M. Papucci, J. T. Ruderman, and A. Weiler, JHEP, 035 (2012), 1110.6926.
[35] CERN Report No. ATLAS-CONF-2012-109, 2012 (unpublished).
[36] H. K. Dreiner, M. Kramer, and J. Tattersall, Europhys.Lett., 61001 (2012), 1207.1613.
[37] G. Belanger, M. Heikinheimo, and V. Sanz, JHEP, 151 (2012), 1205.1463.
[38] B. Allanach and B. Gripaios, JHEP, 062 (2012), 1202.6616.
[39] S. E. Hedri, A. Hook, M. Jankowiak, and J. G. Wacker, (2013), 1302.1870.
[40] D. Curtin, R. Essig, and B. Shuve, (2012), 1210.5523.
[41] J. Thaler and K. Van Tilburg, JHEP, 015 (2011), 1011.2268.
[42] J. Gallicchio and M. D. Schwartz, Phys.Rev.Lett., 022001 (2010), 1001.5027.
[43] G. D. Kribs and A. Martin, Phys.Rev. D85, 115014 (2012), 1203.4821.
[44] CMS Collaboration, S. Chatrchyan et al., (2013), 1303.2985.
[45] CERN Report No. CMS-PAS-SUS-12-023, 2012 (unpublished).
[46] CERN Report No. ATLAS-CONF-2013-037, 2013 (unpublished).
[47] CERN Report No. ATLAS-CONF-2012-166, 2012 (unpublished).
[48] S. Dimopoulos and G. Giudice, Phys.Lett.
B357 , 573 (1995), hep-ph/9507282.[49] A. G. Cohen, D. Kaplan, and A. Nelson, Phys.Lett.
B388 , 588 (1996),hep-ph/9607394.[50] C. Brust, A. Katz, S. Lawrence, and R. Sundrum, JHEP , 103(2012).[51] N. Craig, M. McCullough, and J. Thaler, JHEP , 046 (2012),1203.1622.[52] N. Craig, S. Dimopoulos, and T. Gherghetta, JHEP , 116 (2012),1203.0572.[53] C. Wymant, Phys.Rev.
D86 , 115023 (2012), 1208.1737.[54] L. J. Hall, D. Pinner, and J. T. Ruderman, JHEP , 131 (2012),1112.2703.[55] H. Baer, V. Barger, and A. Mustafayev, (2011), 1112.3017.[56] S. Heinemeyer, O. Stal, and G. Weiglein, Phys.Lett.
B710 , 201 (2012),1112.3026.[57] A. Arbey, M. Battaglia, A. Djouadi, F. Mahmoudi, and J. Quevillon,Phys.Lett.
B708 , 162 (2012), 1112.3028.[58] P. Draper, P. Meade, M. Reece, and D. Shih, Phys.Rev.
D85 , 095007(2012), 1112.3068.[59] M. Carena, S. Gori, N. R. Shah, and C. E. Wagner, JHEP , 014(2012), 1112.3336.[60] J. Cao, Z. Heng, D. Li, and J. M. Yang, Phys.Lett.
B710, 665 (2012), 1112.4391.
[61] Z. Kang, J. Li, and T. Li, JHEP, 024 (2012), 1201.5305.
[62] N. Desai, B. Mukhopadhyaya, and S. Niyogi, (2012), 1202.5190.
[63] J.-J. Cao, Z.-X. Heng, J. M. Yang, Y.-M. Zhang, and J.-Y. Zhu, JHEP, 086 (2012), 1202.5821.
[64] H. M. Lee, V. Sanz, and M. Trott, JHEP, 139 (2012), 1204.0802.
[65] N. D. Christensen, T. Han, and S. Su, Phys.Rev.
D85 , 115018 (2012),1203.3207.[66] F. Brummer, S. Kraml, and S. Kulkarni, JHEP , 089 (2012),1204.5977.[67] M. Badziak, E. Dudas, M. Olechowski, and S. Pokorski, (2012),1205.1675.[68] M. W. Cahill-Rowley, J. L. Hewett, A. Ismail, and T. G. Rizzo, (2012),1206.5800.[69] A. Arbey, M. Battaglia, A. Djouadi, and F. Mahmoudi, (2012),1207.1348.[70] H. Baer, V. Barger, P. Huang, A. Mustafayev, and X. Tata, (2012),1207.3343.[71] S. Antusch, L. Calibbi, V. Maurer, M. Monaco, and M. Spinrath, (2012),1207.7236.[72] S. P. Martin and M. T. Vaughn, Phys.Rev.
D50 , 2282 (1994), hep-ph/9311340.[73] P. Meade, N. Seiberg, and D. Shih, Prog.Theor.Phys.Suppl. , 143(2009), 0801.3278.[74] G. Gamberini, G. Ridolfi, and F. Zwirner, Nuclear Physics B , 331(1990).[75] R. Arnowitt and P. Nath, Phys. Rev. D , 3981 (1992).[76] B. de Carlos and J. Casas, Phys.Lett. B309 , 320 (1993), hep-ph/9303291.[77] M. S. Carena, J. Espinosa, M. Quiros, and C. Wagner, Phys.Lett.
B355 ,209 (1995), hep-ph/9504316.[78] A. Brignole, Phys.Lett.
B281 , 284 (1992).[79] M. Frank et al. , Journal of High Energy Physics , 047 (2007).[80] G. Degrassi, S. Heinemeyer, W. Hollik, P. Slavich, and G. Weiglein,Eur.Phys.J.
C28, 133 (2003), hep-ph/0212020.
[81] S. Heinemeyer, W. Hollik, and G. Weiglein, Eur.Phys.J. C9, 343 (1999), hep-ph/9812472.
[82] S. Heinemeyer, W. Hollik, and G. Weiglein, Comput.Phys.Commun., 76 (2000), hep-ph/9812320.
[83] CERN Report No. ATLAS-CONF-2013-014, 2013 (unpublished).
[84] CERN Report No. CMS-PAS-HIG-12-045, 2012 (unpublished).
[85] G. Degrassi et al., (2012), 1205.6497.
[86] S. Alekhin, A. Djouadi, and S. Moch, (2012), 1207.0980.
[87] F. Bezrukov, M. Y. Kalmykov, B. A. Kniehl, and M. Shaposhnikov, (2012), 1205.2893.
[88] B. Allanach, A. Djouadi, J. Kneur, W. Porod, and P. Slavich, JHEP, 044 (2004), hep-ph/0406166.
[89] U. Haisch and F. Mahmoudi, JHEP, 061 (2013), 1210.7806.
[90] Z. Kang, T. Li, T. Liu, C. Tong, and J. M. Yang, (2012), 1203.2336.
[91] N. Craig, S. Knapen, D. Shih, and Y. Zhao, (2012), 1206.4086.
[92] K. Agashe, Y. Cui, and R. Franceschini, (2012), 1209.2115.
[93] D. Albornoz Vásquez et al., Phys. Rev. D, 035023 (2012).
[94] U. Ellwanger, C. Hugonie, and A. M. Teixeira, Phys.Rept., 1 (2010), 0910.1785.
[95] A. Vilenkin, Phys.Rept., 263 (1985).
[96] R. Barbieri, L. J. Hall, Y. Nomura, and V. S. Rychkov, Phys.Rev. D75, 035007 (2007), hep-ph/0607332.
[97] T. Gherghetta, B. von Harling, A. D. Medina, and M. A. Schmidt, JHEP, 032 (2013), 1212.5243.
[98] D. A. Vasquez, G. Belanger, C. Boehm, A. Pukhov, and J. Silk, Phys.Rev.
D82 , 115027 (2010), 1009.4380.[99] D. Albornoz Vasquez, G. Belanger, and C. Boehm, Phys.Rev.
D84 ,095008 (2011), 1107.1614.[100] D. Albornoz Vasquez, G. Belanger, J. Billard, and F. Mayet, Phys.Rev.
D85, 055023 (2012), 1201.6150.
[101] CoGeNT collaboration, C. Aalseth et al., Phys.Rev.Lett., 131301 (2011), 1002.4703.
[102] DAMA Collaboration, LIBRA Collaboration, R. Bernabei et al., Eur.Phys.J.
C67 , 39 (2010), 1002.1028.[103] G. Belanger et al. , Comput.Phys.Commun. , 842 (2011), 1004.1092.[104] U. Ellwanger and C. Hugonie, Comput.Phys.Commun. , 399 (2007),hep-ph/0612134.[105] U. Ellwanger, J. F. Gunion, and C. Hugonie, JHEP , 066 (2005),hep-ph/0406215.[106] U. Ellwanger and C. Hugonie, Comput.Phys.Commun. , 290 (2006),hep-ph/0508022.[107] WMAP Collaboration, E. Komatsu et al. , Astrophys.J.Suppl. , 330(2009), 0803.0547.[108] C. Skordis, D. Mota, P. Ferreira, and C. Boehm, Phys.Rev.Lett. ,011301 (2006), astro-ph/0505519.[109] XENON100 Collaboration, E. Aprile et al. , Phys.Rev.Lett. , 131302(2011), 1104.2549.[110] Fermi-LAT Collaboration, A. Abdo et al. , Astrophys.J. , 147 (2010),1001.4531.[111] L. E. Strigari, S. M. Koushiappas, J. S. Bullock, and M. Kaplinghat,Phys.Rev. D75 , 083526 (2007), astro-ph/0611925.[112] C. Boehm, T. Ensslin, and J. Silk, J.Phys.
G30 , 279 (2004), astro-ph/0208458.[113] C. Boehm, J. Silk, and T. Ensslin, (2010), 1008.5175.[114] D. Grellscheid, J. Jaeckel, V. V. Khoze, P. Richardson, and C. Wymant,JHEP , 078 (2012), 1111.3365.[115] W. Beenakker, R. Hopker, and M. Spira, (1996), hep-ph/9611232.[116] W. Beenakker, R. Hopker, M. Spira, and P. Zerwas, Nucl.Phys.
B492, 51 (1997), hep-ph/9610490.
[117] W. Beenakker et al., Phys.Rev.Lett., 3780 (1999), hep-ph/9906298.
[118] M. Spira, p. 217 (2002), hep-ph/0211145.
[119] T. Plehn, Czech.J.Phys., B213 (2005), hep-ph/0410063.
[120] M. Bahr et al., Eur.Phys.J. C58, 639 (2008), 0803.0883.
[121] S. Gieseke et al., (2011), 1102.1672.
[122] A. Buckley et al., (2010), 1003.0694.
[123] D. Das, U. Ellwanger, and A. M. Teixeira, JHEP, 067 (2012), 1202.5244.
[124] M. Spannowsky and C. Wymant, Phys. Rev. D, 074004 (2013), 1301.0345.
[125] U. Ellwanger, JHEP, 044 (2012), 1112.3548.
[126] S. King, M. Muhlleitner, and R. Nevzorov, Nucl.Phys. B860, 207 (2012), 1201.2671.
[127] J. F. Gunion, Y. Jiang, and S. Kraml, Phys.Lett.
B710 , 454 (2012),1201.0982.[128] G. Brooijmans et al. , (2012), 1203.1488.[129] P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, and K. E. Williams,Comput.Phys.Commun. , 138 (2010), 0811.4169.[130] P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, and K. E. Williams,Comput.Phys.Commun. , 2605 (2011), 1102.1898.[131] .[132] M. Lisanti and J. G. Wacker, Phys.Rev.
D79 , 115006 (2009), 0903.1377.[133] B. Bellazzini, C. Csaki, A. Falkowski, and A. Weiler, Phys.Rev.
D80 ,075008 (2009), 0906.3026.[134] A. Falkowski, D. Krohn, L.-T. Wang, J. Shelton, and A. Thalapillil,Phys.Rev.
D84 , 074022 (2011), 1006.1650.[135] C.-R. Chen, M. M. Nojiri, and W. Sreethawong, JHEP , 012 (2010),1006.1151.[136] J. Forshaw, J. Gunion, L. Hodgkinson, A. Papaefstathiou, and A. Pilk-ington, JHEP , 090 (2008), 0712.3510.[137] C. Englert, T. S. Roy, and M. Spannowsky, Phys.Rev.
D84 , 075026(2011), 1106.4545.[138] I. Lewis and J. Schmitthenner, JHEP , 072 (2012), 1203.5174.[139] C. Englert, M. Spannowsky, and C. Wymant, Phys.Lett.
B718, 538 (2012), 1209.0494.
[140] L. J. Hall and M. B. Wise, Nucl.Phys.
B187 , 397 (1981).[141] U. Ellwanger, Phys.Lett.
B698 , 293 (2011), 1012.1201.[142] CERN Report No. ATLAS-CONF-2013-012, 2013 (unpublished).[143] M. Kado, Status of atlas higgs results, https://lpsc.in2p3.fr/Indico/getFile.py/access?contribId=0&sessionId=0&resId=0&materialId=slides&confId=861 .[144] CERN Report No. CMS-PAS-HIG-13-001, 2013 (unpublished).[145] G. Belanger, C. Hugonie, and A. Pukhov, JCAP , 023 (2009),0811.3224.[146] A. E. Nelson and N. Seiberg, Nucl.Phys.
B416 , 46 (1994), hep-ph/9309299.[147] D. Shih, JHEP , 091 (2008), hep-th/0703196.[148] L. O’Raifeartaigh, Nucl.Phys.
B96 , 331 (1975).[149] Z. Kang, T. Li, and Z. Sun, (2012), 1209.1059.[150] J. Jaeckel, Nuclear Physics A , 83c (2009).[151] J. Jaeckel and A. Ringwald, Ann.Rev.Nucl.Part.Sci. , 405 (2010),1002.0329.[152] Particle Data Group, J. Beringer et al. , Phys.Rev. D86 , 010001 (2012).[153] K. A. Intriligator, N. Seiberg, and D. Shih, JHEP , 021 (2006),hep-th/0602239.[154] M. Buican, P. Meade, N. Seiberg, and D. Shih, JHEP , 016 (2009),0812.3668.[155] C. Cheung, A. L. Fitzpatrick, and D. Shih, JHEP , 054 (2008),0710.3585.[156] M. Dine and A. E. Nelson, Phys.Rev. D48 , 1277 (1993), hep-ph/9303230.[157] M. Dine, A. E. Nelson, and Y. Shirman, Phys.Rev.
D51 , 1362 (1995),hep-ph/9408384.[158] M. Dine, A. E. Nelson, Y. Nir, and Y. Shirman, Phys.Rev.
D53 , 2658(1996), hep-ph/9507378.[159] G. Giudice and R. Rattazzi, Nucl.Phys.
B511 , 25 (1998), hep-ph/9706540. 115160] S. Dimopoulos and G. Giudice, Phys.Lett.
B393 , 72 (1997), hep-ph/9609344.[161] G. Giudice and R. Rattazzi, Phys.Rept. , 419 (1999), hep-ph/9801271.[162] L. M. Carpenter, M. Dine, G. Festuccia, and J. D. Mason, Phys. Rev.D , 035002 (2009).[163] L. M. Carpenter, (2008), 0812.2051.[164] C. D. Carone and H. Murayama, Phys.Rev. D53 , 1658 (1996), hep-ph/9510219.[165] J. Jaeckel, V. V. Khoze, and C. Wymant, JHEP , 126 (2011),1102.1589.[166] J. Jaeckel, V. V. Khoze, and C. Wymant, JHEP , 132 (2011),1103.1843.[167] B. Allanach, Comput.Phys.Commun. , 305 (2002), hep-ph/0104145.[168] M. Carena, P. Draper, N. R. Shah, and C. E. Wagner, Phys.Rev.
D82 ,075005 (2010), 1006.4363.[169] ATLAS Collaboration, ATLAS: Detector and physics performance tech-nical design report. Volume 2, 1999.[170] I. Fridman-Rojas and P. Richardson, (2012), 1208.0279.[171] F. Englert and R. Brout, Phys.Rev.Lett. , 321 (1964).[172] P. W. Higgs, Phys.Lett. , 132 (1964).[173] P. W. Higgs, Phys.Rev.Lett. , 508 (1964).[174] S. Deser and B. Zumino, Phys.Rev.Lett. , 1433 (1977).[175] E. Cremmer et al. , Phys.Lett. B79 , 231 (1978).[176] S. Abel, M. J. Dolan, J. Jaeckel, and V. V. Khoze, JHEP , 049(2010), 1009.1164.[177] ATLAS Collaboration, G. Aad et al. , Phys.Lett.
B710 , 67 (2012),1109.6572.[178] LEP Working Group for Higgs boson searches, ALEPH Collabora-tion, DELPHI Collaboration, L3 Collaboration, OPAL Collaboration,R. Barate et al. , Phys.Lett.
B565 , 61 (2003), hep-ex/0306033.[179] S. Akula et al. , Phys.Lett.
B699 , 377 (2011), 1103.1197.116180] J. Jaeckel, V. V. Khoze, T. Plehn, and P. Richardson, Phys.Rev.
D85 ,015015 (2012), 1109.2072.[181] M. J. Dolan, D. Grellscheid, J. Jaeckel, V. V. Khoze, and P. Richardson,JHEP , 095 (2011), 1104.0585.[182] .[183] M. Cacciari, G. P. Salam, and G. Soyez, Eur.Phys.J.
C72 , 1896 (2012),1111.6097.[184] P. Grajek, A. Mariotti, and D. Redigolo, (2013), 1303.0870.[185] R. Dermisek and H. D. Kim, Phys.Rev.Lett. , 211803 (2006), hep-ph/0601036.[186] J. R. Ellis, J. Giedt, O. Lebedev, K. Olive, and M. Srednicki, Phys.Rev. D78 , 075006 (2008), 0806.3648.[187] V. D. Barger, T. Han, and J. Ohnemus, Phys.Rev.
D37 , 1174 (1988).[188] C. Lester and D. Summers, Phys.Lett.
B463 , 99 (1999), hep-ph/9906349.[189] C. Rogan, (2010), 1006.2727.[190] B. Allanach, C. Lester, M. A. Parker, and B. Webber, JHEP , 004(2000), hep-ph/0007009.[191] T. Han, p. 407 (2005), hep-ph/0508097.[192] Y. Kats, P. Meade, M. Reece, and D. Shih, JHEP , 115 (2012),1110.6444.[193] C. Cheung, Y. Nomura, and J. Thaler, JHEP , 073 (2010),1002.1967.[194] R. Argurio, Z. Komargodski, and A. Mariotti, Phys.Rev.Lett. ,061601 (2011), 1102.2386.[195] R. Argurio et al. , JHEP , 096 (2012), 1112.5058.[196] M. Baryakhtar, N. Craig, and K. Van Tilburg, JHEP , 164 (2012),1206.0751.[197] C. Macesanu, C. McMullen, and S. Nandi, Phys.Rev.
D66 , 015009(2002), hep-ph/0201300.[198] T. Cohen, A. Hook, and B. Wecht, Phys.Rev.
D85 , 115004 (2012),1112.1699. 117199] T. Plehn, D. L. Rainwater, and D. Zeppenfeld, Phys.Rev.
D61 , 093005(2000), hep-ph/9911385.[200] D. Goncalves-Netto, D. Lopez-Val, K. Mawatari, I. Wigmore, andT. Plehn, (2013), 1303.0845.[201] CMS Collaboration, Physics Analysis Summary CMS-PAS-SUS-12-018.[202] J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer, and T. Stelzer, JHEP , 128 (2011), 1106.0522.[203] T. Sjostrand, S. Mrenna, and P. Z. Skands, JHEP , 026 (2006),hep-ph/0603175.[204] J. Conway et al , .[205] M. Cacciari, G. P. Salam, and G. Soyez, JHEP , 063 (2008),0802.1189.[206] ATLAS Collaboration, G. Aad et al. , Phys.Lett. B718 , 411 (2012),1209.0753.[207] C. G. Lester, JHEP , 076 (2011), 1103.5682.[208] G. D. Kribs, A. Martin, T. S. Roy, and M. Spannowsky, Phys.Rev.
D81 ,111501 (2010), 0912.4731.[209] J. Alwall et al. , Eur.Phys.J.
C53 , 473 (2008), 0706.2569.[210] M. Schmaltz and D. Tucker-Smith, Ann.Rev.Nucl.Part.Sci. , 229(2005), hep-ph/0502182.[211] R. Contino, (2010), 1005.4269.[212] H.-C. Cheng, (2007), 0710.3407.[213] P. Batra and D. E. Kaplan, JHEP , 028 (2005), hep-ph/0412267.[214] L. Ibanez and C. L´opez, Nuclear Physics B , 511 (1984).[215] R. Essig and J.-F. Fortin, JHEP , 073 (2008), 0709.0980.[216] M. S. Carena, P. H. Chankowski, M. Olechowski, S. Pokorski, andC. Wagner, Nucl.Phys. B491 , 103 (1997), hep-ph/9612261.[217] T. Junk, Nucl.Instrum.Meth.
A434 , 435 (1999), hep-ex/9902006.[218] A. L. Read, J.Phys.