Mass Determination in SUSY-like Events with Missing Energy
Hsin-Chia Cheng, John F. Gunion, Zhenyu Han, Guido Marandella, Bob McElrath
Department of Physics, University of California, Davis, CA 95616
E-mail: [email protected], [email protected], [email protected], [email protected], [email protected]
Abstract:
We describe a kinematic method which is capable of determining the overall mass scale in SUSY-like events at a hadron collider with two missing (dark matter) particles. We focus on the kinematic topology in which a pair of identical particles is produced with each decaying to two leptons and an invisible particle (schematically, pp → Y Y + jets followed by each Y decaying via Y → ℓX → ℓℓ′N where N is invisible). This topology arises in many SUSY processes such as squark and gluino production and decay, not to mention tt̄ di-lepton decays. In the example where the final state leptons are all muons, our errors on the masses of the particles Y, X and N in the decay chain range from 4 GeV for 2000 events after cuts to 13 GeV for 400 events after cuts. Errors for mass differences are much smaller. Our ability to determine masses comes from considering all the kinematic information in the event, including the missing momentum, in conjunction with the quadratic constraints that arise from the Y, X and N mass-shell conditions. Realistic missing momentum and lepton momenta uncertainties are included in the analysis.

Contents
1. Introduction 1
2. Topology of events with missing energy 9
3. Idealized case: Perfect resolution, no combinatorics and no background 11
4. Inclusion of combinatorics, finite resolutions and backgrounds 14
5. Other processes and mass points 25
6. Summary and Discussion 30
A. Solution Procedure 33
B. SUSY points 36
B.1 Point I 37
B.2 Point II 38
B.3 Point III 38
B.4 Point IV: SPS1a 38
1. Introduction
As the Large Hadron Collider (LHC) nears completion, we will soon be able to fully explore TeV scale physics. Because of the naturalness problem for the Higgs boson in the context of the Standard Model (SM), it is strongly believed that new physics beyond the SM should appear at or below the TeV scale. There are many possible candidates for TeV-scale physics beyond the Standard Model, giving rise to various experimental signatures at the LHC. If some new signal is discovered, it is vital to determine the masses and spins of the new particles in order to fully reconstruct the picture of the TeV scale.

Some new physics will be easily identified. For example, if there is a Z′ gauge boson accessible at the LHC, one can easily find it by looking for the resonance in the invariant mass distributions of its decay products, e.g., a pair of leptons or jets. In general, if the decays of a new particle involve only visible particles, one can search for it by looking for a bump in various invariant mass combinations of the visible particles, and the location of the bump determines the mass of the new particle. On the other hand, if the decays of a new particle always contain one or more invisible particles, the search for the new particle becomes more complicated, as there is no "bump" to look for. In order to detect new physics in such a case, it is necessary to understand the SM backgrounds very well and to then look for excesses above them. Determining the masses of the new particles will also be challenging since we cannot directly measure the energy carried away by the invisible particles. Absent good mass determinations, it will be difficult to reconstruct a full picture of the TeV scale even after new physics is discovered.

A scenario with missing particles is highly motivated for TeV scale physics, independent of the hierarchy problem.
If we assume that dark matter is the thermal relic of some weakly interacting massive particle (WIMP) left from the Big Bang, then the right amount of dark matter in the universe is obtained for a WIMP mass in the 0.1–1 TeV range under the assumption that the electroweak sector mediates the dark matter–SM interaction. The dark matter particle must be electrically neutral and stable on cosmological time scales. If it is produced at a collider, because it is weakly interacting it will escape the detector without being detected, giving missing energy signals. In order for the dark matter particle to be stable, it is likely that there is a new symmetry under which the dark matter particle transforms but all SM particles are neutral, thereby preventing decay of the dark matter particle to SM particles.

LEP has indirectly tested physics beyond the SM. The electroweak precision fit and 4-Fermi contact interaction constraints exclude new particles with masses below O(TeV) if they are exchanged at tree level, unless their coupling to the SM fermions is suppressed. If there is a symmetry under which the new particles are odd and the SM particles are even, then the new particles can only contribute to the electroweak observables at the loop level. In this case, the bound on the mass of the new particles decreases by about a loop factor, m → m/(4π), making the existence of new particles with masses of order a few hundreds of GeV compatible with the data. The message coming from the LEP data is that, if there is any new physics responsible for stabilizing the electroweak scale, it is very likely to possess such a new symmetry. Thus, the cosmological evidence for dark matter together with the LEP data provide very strong motivation for new particles at or below the TeV scale that are pair produced rather than singly produced.

Almost all the models with dark matter candidates contain additional particles charged under the new symmetry.
At a collider, these new particles must also be pair-produced, and if they are heavier than the dark matter particle, they will cascade decay down to it. In many cases, this cascade radiates SM particles in a series of two-body decays A → Bc, where A and B are new physics particles while c is a SM particle. (In some cases, phase space restrictions force one of the new particles off-shell and the decay proceeds as a three-body process, A → B∗c → Cdc.) The best-known example is supersymmetry (SUSY) with conserved R-parity: R-parity conservation implies that the Lightest Supersymmetric Particle (LSP) is stable. In most supersymmetric models the LSP is the lightest neutralino, which is a good dark matter candidate. It appears at the end of every supersymmetric particle decay chain and escapes the detector. All supersymmetric particles are produced in pairs, resulting in at least two missing particles in each event.

Other theories of TeV-scale physics with dark matter candidates have been recently proposed. They have experimental signatures very similar to SUSY: i.e. multiple leptons and/or jets plus missing energy. For instance, Universal Extra Dimensions (UEDs) [1, 2], little Higgs theories with T-parity (LHT) [3], and warped extra dimensions with a Z parity [4] belong to this category of models. Being able to reconstruct events with missing energy is thus an important first step to distinguish various scenarios and establish the underlying theory.

Of particular importance will be the determination of the absolute masses of the new particles, including the dark matter particle. First, these masses are needed in order to determine the underlying theory. For example, in the case of SUSY, accurate particle masses are needed to determine the SUSY model parameters, in particular the low-scale soft-SUSY-breaking parameters. These in turn can be evolved to the unification scale (under various assumptions, such as no intermediate-scale matter) to see if any of the proposed GUT-scale model patterns emerge.
The accuracy required at the GUT scale after evolution implies that low-scale masses need to be determined with accuracies of order a few GeV. Second, the mass of the dark matter particle, and the masses of any other particles with which it can coannihilate, need to be determined in order to be able to compute the dark matter relic density in the context of a given model. Studies [5] suggest that the required accuracy is of order a few GeV. A very important question is then whether or not the LHC can achieve such accuracy or whether it will be necessary to wait for threshold scan data from the ILC. One goal of this paper will be to find techniques for determining the dark matter particle mass at the LHC with an accuracy that is sufficient for a reasonably precise computation of the relic density.

Most of the SUSY studies carried out thus far have relied on long decay chains of super-particles which produce many jets and/or leptons and large missing energy. Several kinematic variables have been proposed as estimators of the super-particle mass scale, such as E_T, H_T, M_eff [6], and M_T2 [7]. However, these variables measure the mass differences between the super-particles, but not the overall mass scale.

One possible means for determining the overall mass scale is to employ the total cross section. However, the total cross section is very model dependent: it depends on the couplings, the species of the particles being produced, e.g., fermions or bosons, as well as the branching fractions of the decays involved in the process. One needs to have already determined the spins and branching ratios for this to be reliable, a task that is difficult or impossible at the LHC without an ability to determine the four-momenta of all the particles involved in the process.
To fully test a potential model, we must first determine the masses of the produced particles using only kinematic information. Once the masses are known, there are many chain decay configurations for which it will be possible to use these masses to determine the four-momenta of all the particles on an event-by-event basis. The four-momenta can then be employed in computing the matrix element squared for different possible spin assignments. In this way, a spin determination may be possible, and then the cross section information can be used to distinguish different models.

In recent years there have been numerous studies of how to measure the super-partner masses just based on kinematic information [6, 8, 9, 10, 11, 12, 13, 14, 15, 16]. These studies rely on long decay chains of super-particles, usually requiring 3 or more visible particles in the decay chain in order to have enough invariant mass combinations of the visible particles. One can then examine the kinematic edges of the distributions of the various invariant mass combinations, and obtain the masses from the relations between the end points of the distributions and the masses. Many of these studies use the decay chain q̃ → χ̃₂⁰q → ℓ̃ℓq → χ̃₁⁰ℓℓq (Fig. 1) that occurs for the benchmark point SPS1a [17], for which m_χ̃₁⁰ ∼ 97 GeV, m_ℓ̃ ∼ 143 GeV, m_χ̃₂⁰ ∼ 180 GeV, m_b̃ ∼ 570 GeV and m_g̃ ∼ 610 GeV; see Appendix B. The kinematic endpoints of the invariant mass distributions, m_ℓℓ, m_qℓℓ, m_qℓ(high), and m_qℓ(low), depend on the masses of the super-particles in the decay chain through some complicated relations [9, 10, 18]. (Here 'high' and 'low' represent the largest and the smallest values of m_qℓ, respectively — these masses are employed since it is not possible to determine the order in which the observed leptons appear in the chain decay.) If the end points of these distributions can be accurately determined from the experimental data, we can invert the relations to obtain the masses of the super-particles.

Figure 1: A decay chain in SUSY.

For the decay chain of Fig. 1 and the specific model points studied, this approach can give a reasonable determination of the masses of the super-particles, but there is room for improvement. In some of the studies, it is only mass differences that are well determined whereas the overall mass scale is rather uncertain. For one of the mass points studied in [10, 18] (labeled α in [10]), a very large number of events is employed and the overall mass scale uncertainty is reduced to two discrete choices, one corresponding to the correct solution (with rms error relative to the central value of order 4 GeV) and the other (somewhat less probable value) shifted by about 10 GeV. For the mass choices labeled β, for which the event rate is lower, there are a number of discrete solutions and each one has fairly large rms error for the absolute mass scale. However, it should be noted that in reducing the solutions to a number of discrete choices, not only were the locations of the kinematic edges employed, but also the shapes of the distributions of the mass combinations were employed. These latter shapes depend upon their choice of model being correct.
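For orientation, the best known of the endpoint relations mentioned above, the dilepton edge for a chain of two-body decays, is simple enough to quote and evaluate directly (this is the standard textbook formula, not written out explicitly in the text; the numerical inputs below are the approximate SPS1a masses quoted above):

```python
import math

def mll_edge(m_chi2, m_slepton, m_chi1):
    """Kinematic endpoint of the dilepton invariant mass for the
    two-body cascade chi2 -> l slepton -> l l chi1 (standard edge formula)."""
    return math.sqrt((m_chi2**2 - m_slepton**2)
                     * (m_slepton**2 - m_chi1**2)) / m_slepton

# Approximate SPS1a masses quoted above (GeV):
print(f"{mll_edge(180.0, 143.0, 97.0):.1f} GeV")  # roughly 80 GeV
```

The edges of the m_qℓℓ and m_qℓ distributions obey analogous, though lengthier, piecewise relations [9, 10, 18].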
It is possible that without this information there would have been a significant continuous range of possible overall mass scales.

Another mass determination method is that proposed by Kawagoe, Nojiri, and Polesello [12]. Their method relies on an even longer decay chain, g̃ → b̃b → χ̃₂⁰bb → ℓ̃bbℓ → χ̃₁⁰bbℓℓ. There are five mass shell conditions and for each event there are the four unknowns due to the unobservable 4-momentum of the χ̃₁⁰. In principle, before introducing combinatorics and experimental resolution, one can then find a discrete set of solutions in the space of the 5 on-shell masses as intersections of the constraints coming from just five events. In practice, combinatorics and resolution complicate the picture. In their actual analysis, they only fitted the gluino and the sbottom masses with the assumption that the masses of χ̃₂⁰, ℓ̃, and χ̃₁⁰ are already known. For the standard SPS1a point, they achieved accuracies for m_g̃ and m_b̃ of order a few GeV, but with central values systematically shifted (upward) by about 4 GeV. In a follow-up study [13], Lester discusses a procedure for using all 5 on-shell mass constraints. For a relatively small number of events and without considering the combinatorics associated with the presence of two chains, he finds a 17% error in the determination of m_χ̃₁⁰.

In addition to the above studies, a series of contributions concerning mass determination appeared in [11]. These latter studies focused on the SPS1a point and again employed the kinematic edges of the various reconstructable mass distributions in the g̃ → b̃b → χ̃₂⁰bb → ℓ̃bbℓ → χ̃₁⁰bbℓℓ decay chain to determine the underlying sparticle masses. Experimental resolutions for the jets, leptons and missing energy based on ATLAS detector simulations were employed.
The resulting final errors for LHC/ATLAS are quoted in Table 5.1.4 of [11], assuming an integrated luminosity of 300 fb⁻¹ and after using both ẽ and µ̃ intermediate resonances (assuming they are degenerate in mass). We have since verified with several ATLAS members that the quoted errors do indeed correspond to ±1σ errors [19]. The tabulated errors for m_χ̃₁⁰, m_ℓ̃, m_χ̃₂⁰ are all of order 5 GeV, while those for m_b̃ and m_g̃ are of order 8 GeV.

In all of the studies referenced above, the methods employed required at least three visible particles in the decay chain, and, in the last cases above, four visible particles (two b's and two ℓ's). We will study the seemingly much more difficult case in which we make use of only the last two visible particles in each decay chain. (For example, the subcase of Fig. 1 in which only the χ̃₂⁰ → ℓℓχ̃₁⁰ portion of each decay chain is employed.) In this case, if only the isolated chain-decays are analyzed, the one invariant mass combination that can be computed from the two visible 4-momenta does not contain enough information to determine the three masses (m_χ̃₂⁰, m_ℓ̃ and m_χ̃₁⁰) involved in the decay chain. Thus, we pursue an alternative approach which employs both decay chains in the event at once. For the SPS1a point, our method allows a determination of the masses m_χ̃₂⁰, m_ℓ̃ and m_χ̃₁⁰ with an accuracy of a few GeV using both ẽ and µ̃ intermediate slepton states (again, taken to be degenerate in mass), assuming L = 300 fb⁻¹ and adopting the ATLAS expectations for the resolutions for lepton momentum and missing momentum measurements. (These resolutions affect the determination of the crucial transverse momentum of the 4ℓ + 2χ̃₁⁰ system. In particular, by looking at only the leptonic part of the decay chains we can avoid considering individual jet momenta, and therefore we are less sensitive to imprecise measurements for the individual jets.)
In short, using only the leptons in the final state, we obtain an accuracy that is very comparable to that of the analyses summarized above, in which additional requirements (b-tagging and the like) have been employed so that both decay chains in each event involve the same decaying resonances, all the way back to the g̃. In our approach it is unnecessary to know exactly what resonances appear prior to the χ̃₂⁰'s in the two decay chains. Thus, some of the χ̃₂⁰ pair events could come from direct q̃ production and some indirectly from g̃ production followed by g̃ → qq̃ decay. We also do not need to tag the b quarks. We only need to determine the transverse momentum of the χ̃₂⁰χ̃₂⁰ pair using the measured lepton momenta and the measured missing momentum. Nonetheless, we do need to isolate a sample of events dominated by two final χ̃₂⁰ → ℓℓ̃ → ℓℓχ̃₁⁰ decays. (Of course, it is interesting to go beyond this assumption, but we will not do so in this paper.) The key to mass determination using the more limited information we employ is to consider the whole event at once and look not for edges in masses reconstructed from visible momenta but for sharp transitions in the number of events consistent with the assumed topology after an appropriate reconstruction procedure. Further, as noted later, if the events we isolate do not correspond to a χ̃₂⁰χ̃₂⁰ pair decaying in the manner assumed, then our procedure will yield smooth distributions in the number of reconstructed events, as opposed to the sharp transitions predicted if we have indeed isolated an enriched χ̃₂⁰χ̃₂⁰-pair sample with decays as presumed.

Beginning with the general topology illustrated in Fig. 2, we employ the information coming from correlations between the two decay chains in the same event, and the missing momentum measurement. This is evident from some simple constraint counting. Each event satisfying the topology of Fig. 2 has the two invisible 4-momenta of the N and N′.
The sum of the transverse momenta of N and N′ is, however, constrained to equal the negative of the sum of the transverse momenta of the visible particles, leaving us with 6 unknowns for each event, subject to 6 on-shell mass constraints. Under the assumption that m_Y = m_Y′, m_X = m_X′ and m_N = m_N′, we are left with the three unknown masses, m_Y, m_X and m_N. Every event will be compatible with a certain region in the 3-dimensional {m_Y, m_X, m_N} space. Combining several events will shrink this region. We will show that before the inclusion of combinatorics and resolution effects the actual values of the masses lie at the end point of such a region. Mass determination after including combinatoric and resolution effects requires an examination of how the number of events consistent with given mass choices changes as the masses are shifted.

In our approach, we find that it is important to not focus on the individual invariant mass distributions, as this would not utilize all the information contained in the data. Instead, we examine the events from a more global point of view and try to use all the kinematic information contained in the data to determine the masses of the particles involved. (We note that the fact that each event defines a region in mass space was also the case for the Z → Y → X → N one-sided chain situation outlined earlier, except for the mass space being 4-dimensional. An interesting question is whether our more general approach would determine the absolute mass scale in the one-sided case, as opposed to just mass differences. A detailed study is required.) In the case where we have of order 2000 events available after cuts, and after including combinatorics and resolutions for missing momentum and lepton momentum measurements according to ATLAS detector expectations, we achieve rms accuracies on m_Y, m_X and m_N of order 4 GeV, with a small systematic shift that can be easily corrected for. This assumes a case with significant separation between the three masses. This result is fairly stable when backgrounds are included so long as S/B ≳ 2. This number of events and resulting error apply in particular to the SPS1a point assuming integrated luminosity of L = 300 fb⁻¹ and use of all ℓ̃ = ẽ or µ̃ channels.

Figure 2: The event topology we consider.

The organization of the paper is as follows. In Sec. 2, we give a detailed exposition regarding solving the topology of Fig. 2. In Sec. 3, we demonstrate how the masses of the Y, X and N particles in Fig. 2 can be very precisely determined after a reasonable number of events, using only the kinematic information contained in the available events. In Sec. 4.2, we discuss the effects of having background events mixed with the signal events. In Sec. 5, we discuss two alternative scenarios: one with very different m_Y − m_X compared to the first point analyzed, and one with m_N ∼ 0. In Sec. 5.3, we consider in detail the SPS1a mSUGRA point. We summarize and present additional discussion in Sec. 6.
2. Topology of events with missing energy
We study the collider events with topology shown in Fig. 2. A hard hadronic collision produces two identical or mirrored chains. Each decay chain gives two visible particles and one missing particle. It will be convenient to label the 6 final outgoing particles from 1–6, with N = 1, N′ = 2, visible particles 3 and 5 emitted from the Y chain and visible particles 4 and 6 emitted from the Y′ chain. There are many processes which have this topology. For example, tt̄ production, with t decaying to bW and W decaying leptonically to ℓν, is exactly described by this topology, so it can be studied with our method, except that we already know that neutrinos are (to a good approximation) massless. There are also many SUSY or other beyond the SM processes which can be described by this topology, e.g., second neutralino pair production χ̃₂⁰χ̃₂⁰ (through t-channel squark exchange) with χ̃₂⁰ → ℓℓ̃ and then ℓ̃ → ℓχ̃₁⁰, producing 4 visible charged leptons and 2 missing particles. As already noted, we require that the masses of the corresponding particles in the two chains be the same. They can be the same particle, or one can be the anti-particle of the other. Or, they can even be different particles whose masses are believed to be approximately equal (e.g., squarks or sleptons of the first two generations). The visible particles do not need to be stable as long as we can see all their decay products and reconstruct their 4-momenta. The event can involve more particles (such as hadronic ISR/FSR or parent particles such as squarks and gluinos decaying within the gray blob in Fig. 2) as long as none of the additional particles lead to missing momentum. For example, the 4 leptons plus missing energy event from the decays of a pair of second neutralinos can be part of the longer decay chains from squark pair production, as occurs for the SPS1a chain decay.

It is instructive to analyze the unknowns in this topology in a more detailed manner than given in the introduction.
In particular, we can make a distinction between kinematic unknowns — those in which phase space is differential — and parametric unknowns — Lagrangian parameters or otherwise non-kinematic unknowns on which the cross section has some functional dependence. For instance, in the Breit-Wigner propagator [(q² − M²)² + M²Γ²]⁻¹, q is kinematic while M and Γ are parametric. Masses, including those of missing particles, are parametric unknowns (phase space d³p/E is not differential in them). Any function of an event's missing 3-momenta and already-known parameters is a kinematic unknown.

Each event with the topology of Fig. 2 has eight kinematic unknowns: the 3-momenta of N and N′ and the initial state E and p_z, where we are assuming the parameters m_N and m_N′ are fixed. Total 4-momentum conservation reduces this to four kinematic unknowns. In the narrow width approximation, the mass combinations constructed from a combination of visible momenta and invisible momenta (which we place in the class of kinematic unknowns), such as m₁₃² ≡ (p₁ + p₃)², are equal to the corresponding parametric unknowns, such as m_X², and we can trade them for their corresponding parameters. Therefore, in the narrow width approximation, a single event is described by a volume in the six dimensional parameter space {m_Y, m_Y′, m_X, m_X′, m_N, m_N′}.

If the two chains are identical or mirrors of one another and if we use the narrow width approximation, we can impose 3 more relations, m_Y = m_Y′, m_X = m_X′ and m_N = m_N′, which reduces the independent unknown parameters to three. As a result, if we know the three masses m_Y, m_X, and m_N then (up to discrete ambiguities associated with multiple solutions to a quartic equation) we can solve for all the unknown momenta, using the measured visible momenta, and vice versa. The procedure is described in more detail in Appendix A.

If the masses are not known, we must assume values for the three masses m_Y, m_X, and m_N.
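For reference, the per-event constraint system can be sketched explicitly in the labeling of Fig. 2 (N = 1, N′ = 2; here we adopt the convention that particles 3 and 4 come from the X and X′ decays, consistent with the counting above; the explicit solution procedure is given in Appendix A):

```latex
\begin{aligned}
p_1^2 &= m_N^2, & p_2^2 &= m_N^2, \\
(p_1+p_3)^2 &= m_X^2, & (p_2+p_4)^2 &= m_X^2, \\
(p_1+p_3+p_5)^2 &= m_Y^2, & (p_2+p_4+p_6)^2 &= m_Y^2, \\
p_1^x+p_2^x &= p_x^{\mathrm{miss}}, & p_1^y+p_2^y &= p_y^{\mathrm{miss}}.
\end{aligned}
```

The eight unknown components of p₁ and p₂, reduced by the two missing-momentum relations, leave six unknowns for the six mass-shell conditions, so for an assumed {m_Y, m_X, m_N} the solutions are discrete.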
Given a fixed M = {m_Y, m_X, m_N} choice, for each event we obtain a quartic equation (for the energy of one of the invisible particles) with coefficients depending on the assumed masses, M, and visible momenta. It can have 0 to 4 real solutions for the invisible energy, depending on the coefficients, and each solution fully determines associated 4-momenta for both invisible particles.

Any solution with real and physically acceptable invisible 4-momenta corresponds to a choice for m_Y, m_X, and m_N that is consistent with that particular event. The points in M = {m_Y, m_X, m_N} parameter space that yield real solutions are not discrete; instead, each event defines a region in the three-dimensional mass space corresponding to a volume of real solutions. The region in the mass space consistent with all events, the 'allowed' region, will shrink as we consider more and more events. However, even for many events the allowed region remains three-dimensional and does not shrink to a point. We need to find techniques that allow us to identify the correct mass point given a volume in mass space consistent with a set of events.

3. Idealized case: Perfect resolution, no combinatorics and no background

In order to understand how the mass information is contained in the kinematics, we start with the ideal case in which all visible momenta are assumed to be measured exactly and we associate each lepton with the correct chain and position in the chain (i.e., we neglect resolution effects and combinatorics). For illustration, we have generated a sample of 500 events of q̃_L q̃_L production, with each q̃_L decaying according to Fig. 1, with Y = Y′ = χ̃₂⁰, X = X′ = µ̃_R, N = N′ = χ̃₁⁰, and 3, 4, 5, 6 all being µ's of various signs. We generated our events using SHERPA [20] versions 1.0.8 and 1.0.9 and PYTHIA [21]. We generated the SUSY spectrum for the mass points considered using SPheno 2.2.3 [22].
Details regarding the spectrum, cross sections and branching ratios for this point (Point I) are given in Appendix B. For the moment, we only need to note the resulting masses:

m_Y ≈ 246 GeV, m_X ≈ 128 GeV, m_N ≈ 85 GeV. (3.1)

We stress however that our techniques are not specific to SUSY; we have just used the available tools for supersymmetric models. Thus, event rates employed are not necessarily those predicted in the context of some particular SUSY model. Here, we simply use a 500 event sample for illustration of the basic ideas.

For simplicity, we have required all four leptons to be muons. We assume that the momenta of the 4 muons and the sum of the transverse momenta of the two neutralinos are known exactly. The only cuts we have applied on this sample are acceptance cuts for the muons: a minimum p_T requirement and |η| < 2.5. We do not consider the mass of the squark, therefore information from the quarks is irrelevant except that the presence of the quark jets typically boosts the system (Fig. 2) away from the z axis, an effect automatically included in our analysis. In the following, we denote a set of masses as M = {m_Y, m_X, m_N} and the correct set as M_A.

Each event defines a mass region in M space that yields real solutions for p_N and p_N′ (for which we often employ the shorter phrase 'real solutions' or simply 'solutions'). This region can be determined by scanning through the mass space. We then examine the intersection of the mass regions from multiple events. This region must contain the correct masses, M_A. The allowed mass region keeps shrinking when more and more events are included. One might hope to reach a small region near M_A as long as enough events are included. However, this is not the case, as exemplified in Fig. 3. There, the 3-dimensional allowed region in M-space is shown together with its projections on 2-dimensional planes. When producing Fig. 3, we discretize the mass space to 1 GeV grids in all three directions. As already noted, we have used the correct assignments for the locations of the muons in the decay chains.

Figure 3: Mass region (in GeV) that can solve all events for the input masses {m_Y, m_X, m_N} ≈ {246, 128, 85} GeV, using 500 events.

Wrong assignments will add more complication; this will be discussed in Sec. 4. With correct assignments, and because of our narrow-width and no-smearing assumptions, the correct masses M_A will result in at least one real solution for p_N and p_N′ in all events and M_A is included in the allowed region. In all three 2-dimensional projections, the entire allowed region is a strip with m_Y and m_X close to the correct values, but m_N left undetermined except for an upper bound. A lower bound is sometimes present and can be caused by the presence of events in which the system (Fig. 2) has a large amount of transverse momentum. The upper bound for m_N generally materializes using fewer events than does the lower bound. By examining the figures one can see that the upper bound for m_N is actually close to the correct m_N; more generally, M_A is located near the tip of the cigar shape of acceptable choices in M-space.

An intuitive understanding of why it is that the correct mass set M_A is located at an end point can be garnered from Fig. 4. Any point in the mass space on the left-hand side of the figure is mapped into a region of the kinematic space on the right-hand side.

Figure 4: Map between mass space and kinematic space. The nominal masses, point A, produce a kinematic region that coincides with the experimental region: K_A = K_exp. A point B inside the allowed mass region produces a larger kinematic region: K_B ⊃ K_exp.

By 'kinematic space' we mean the set of observed 3-momenta of the visible particles, 3, 4, 5, and 6. Thus, the kinematic space has much higher dimensionality than the mass space — the on-shell Y, X, N masses can be held fixed while changing the angles, magnitudes and so forth of the visible particles. Consequently, each point in mass space defines a volume in kinematic space.
In analyzing data, the inverse mapping is to be envisioned. Each point in the kinematic space corresponds to a specific momentum configuration of the visible particles, i.e. an event. A collection of many events will define a region in the kinematic space. In particular, the correct set of masses, point A in Fig. 4, produces a kinematic region K_A that coincides with the experimental one, K_A = K_exp, as long as the number of experimental events is large enough so that all the allowed region is populated. Any shift away from A will generally not allow one or more kinematical observables associated with the visible particles to occupy a region close to the boundary of K_exp; i.e. such a shift will generally exclude a region of the actually observed kinematical space.

A mass point other than M_A produces a region different from K_exp. If it does not cover the entire K_exp, this means that some events will not have yielded real p_N and p_N′ solutions and, therefore, the mass point does not appear in the final allowed mass region. On the other hand, there can be mass points which produce larger kinematic regions encompassing the entire K_exp region. These mass points yield real solutions for all events and hence belong to the final allowed region. This kind of point is exemplified by point B in Fig. 4. If we shift such a point in the mass space by a small amount, M_B → M′ = M_B + δM, the resulting kinematic region still covers K_exp. In this case, M′ still yields real solutions for all events. Thus, point B, which produces a region larger than K_exp, has the freedom to move in many directions because it lives inside the allowed region rather than on its boundary. On the other hand, the correct mass point A, which produces exactly K_exp, has the least freedom to move.
In short, locating the correct mass point M_A can be viewed as a kind of generalization of the 'edge' method, which employs sharp edges in certain invariant mass combinations constructed from the visible momenta. Our method matches the whole boundary of the allowed region in the high-dimensional kinematic space of the visible momenta.

Of course, using the "tip" of the allowed mass region is not applicable in the realistic case where experimental resolutions and combinatorics are included, not to mention the possible presence of background events. In particular, some of the events generated after including these effects will be inconsistent (i.e. not yield real solutions for p_N and p_N') with the correct mass set M_A, and so this point will not be contained in the M volume obtained if all events are considered. We must find more sophisticated methods to identify the correct mass point. Nevertheless, understanding the idealized case provides useful guidance for understanding how to deal with the more complicated realistic situations.
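The logic of the idealized construction can be caricatured in one dimension with a toy sketch of our own (not code from the analysis): suppose each event admits real invisible-momentum solutions only for trial masses inside some interval; the allowed region is then the intersection of these intervals, and its boundary is pinned by the most constraining events, mirroring how M_A sits at the tip of the allowed volume.

```python
def allowed_region(event_intervals):
    """Toy one-dimensional analogue of the allowed-region construction:
    each event admits real invisible-momentum solutions only for trial
    masses inside its interval; the allowed region is the intersection
    over all events (None if the intersection is empty)."""
    lo = max(a for a, b in event_intervals)
    hi = min(b for a, b in event_intervals)
    return (lo, hi) if lo <= hi else None
```

With more events the intersection shrinks toward the correct value from one side, which is the one-dimensional analogue of the 'tip' behaviour described above.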
4. Inclusion of combinatorics, finite resolutions and backgrounds
In this section we discuss the more realistic case with finite resolutions, combinatorics and backgrounds. We first discuss the effects of finite resolutions and combinatorics, and later we will include the backgrounds. For the moment, we continue to employ the spectrum associated with SUSY Point I, as specified in Appendix B, with {m_Y, m_X, m_N} = { . , . , . } GeV.
Experimental effects related to smearing and combinatorics will deform or even kill the allowed mass region. In particular, since the correct mass point is located at the endpoint, it is most vulnerable to any mismeasurement. This can be seen in Fig. 5, which corresponds to 500 events for the same mass point as Fig. 3. The difference is that we have: i) added smearing; ii) considered all possible combinatoric assignments for the locations of the muons in the two decay chains; and iii) included the finite widths of the Y and X intermediate resonances. We smear muon momenta and missing p_T using the low-luminosity options of the ATLAS fast simulation package ATLFAST, as described in Secs. 2.4 and 2.7 of [23]. Very roughly, this corresponds to approximately Gaussian smearing of the muon momentum with width ∼ /p_T and of each component of the missing momentum p_T^miss with width ∼ . p_T^miss, as we shall shortly review. Our approach is only sensitive to p_T^miss uncertainties because we do not look at the jets associated with the chain decays prior to arriving at the χ̃χ̃ pair. We only need the net transverse momentum of the χ̃χ̃ pair as a whole, and we determine this in our procedure as Σ_ℓ ~p_T^ℓ + ~p_T^miss. Thus, in our analysis the errors from smearing derive entirely from the uncertainties in the lepton and missing momentum measurements. The fact that we don't need to look at individual jets is, we believe, an important advantage of our approach to determining the χ̃, ℓ̃ and χ̃ masses. Of course, once these masses have been determined, the edge techniques, which fix mass differences very accurately, can be used to extract the g̃ and q̃ masses.

We summarize the missing energy procedure described in Sec. 2.7 of [23] in a bit more detail.
The missing transverse energy E_T^miss is calculated by summing the transverse momenta of identified isolated photons, electrons and muons, of jets, b-jets and c-jets, of clusters not accepted as jets, and of non-isolated muons not added to any jet cluster. Finally, the transverse energies deposited in cells not used for cluster reconstruction are also included in the total sum. Transverse energies deposited in unused cells are smeared with the same energy resolution function as for jets. From the calculation of the total sum E_T^obs the missing transverse energy is obtained, E_T^miss = E_T^obs, as well as the missing transverse momentum components, p_x^miss = −p_x^obs and p_y^miss = −p_y^obs.

For combinatorics, we assume no charge misidentification. Then, there are 8 independent possible combinatoric assignments for one event, which can be reduced if one muon pair is replaced by an electron pair. If any one of these 8 possibilities yields a real solution (after including smearing/resolution as described above), we include the M point in our accepted mass region.

As regards the resonance widths, these have been computed within the context of the models we have considered, as detailed in Appendix B. In our Monte Carlo, the mass of a given χ̃ or ℓ̃ resonance is generated according to a Breit-Wigner form using the computed width. Although there will be some model dependence of the widths, in that they might differ between the SUSY models employed as compared to a little-Higgs model, the widths for these weakly interacting particles are all much smaller than detector resolutions in both models (e.g. of order a few hundred MeV in the SUSY models). This is again an advantage of our approach, since we never need to know where on the Breit-Wigner mass distribution of the g̃ and q̃ resonances a given event occurs. We only need the net transverse momentum of the χ̃χ̃ system as determined from Σ_ℓ ~p_T^ℓ + ~p_T^miss.
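The combinatoric count of 8 quoted above can be checked by brute force. The sketch below (our own illustration; the labels are arbitrary) enumerates the distinct placements of two µ+ and two µ− into the near/far slots of two opposite-sign decay chains, counting configurations related by swapping the two identical chains only once:

```python
from itertools import permutations

def chain_assignments(mu_plus, mu_minus):
    """Enumerate distinct ways to place two mu+ and two mu- into two decay
    chains, each holding one (near, far) opposite-sign pair; assignments
    related by swapping the two identical chains are counted once."""
    distinct = set()
    for p1, p2 in permutations(mu_plus, 2):          # mu+ for chain 1 / chain 2
        for m1, m2 in permutations(mu_minus, 2):     # mu- for chain 1 / chain 2
            for chain1 in ((p1, m1), (m1, p1)):      # near/far order in chain 1
                for chain2 in ((p2, m2), (m2, p2)):  # near/far order in chain 2
                    distinct.add(frozenset({chain1, chain2}))
    return distinct
```

Each unordered pair of chains appears twice in the raw 16-fold enumeration, so deduplication leaves the 8 independent assignments stated in the text.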
Also, for the moment we will focus on events with four µ's in the final state, so that both sleptons in the two decay chains are µ̃'s. When we come to the SPS1a mSUGRA point, we will discuss combining results for the µ̃µ̃, ẽẽ and µ̃ẽ decay channels. Even in this case, we analyze final states with definite lepton composition (4µ, 4e or 2e2µ) separately and do not need to worry about whether the µ̃ is closely degenerate with the ẽ (although it in fact is). If there is significant non-degeneracy, that would emerge from our analysis. However, to get final errors on the ℓ̃ mass as low as ∼ , degeneracy of the µ̃ and ẽ must be assumed (and of course is predicted in the model). If in some other model the µ̃ and ẽ are not degenerate, then errors on these individual masses will be of order ∼ −12 GeV, but errors on m_χ̃ and m_χ̃ will only be slightly larger than the ∼ .

... and therefore broaden the allowed region for low m_N. On the other hand, the allowed region has shrunk in the m_N direction, with the new upper bound corresponding to a much smaller value. This can also be understood by using Fig. 4: some events near the boundary of K_A can be resolution-smeared to a location outside of K_A, which renders K_exp larger than K_A. Thus the correct mass point A is removed from the allowed mass region. Point B, which corresponds to a larger kinematic region, does not disappear if the fluctuation is small enough, since K_exp then remains within K_B. Of course, if the smearing is large, the entire allowed region can be eliminated. The effect from background events, as considered in the next subsection, will be similar. Since background events are produced by completely different processes, there is no reason to expect that multiple background events can be solved by the assumed topology with a given choice of M. Thus, background events tend to reduce the allowed region.

From the above observations, one concludes that the allowed mass region in general does not exist and, even if it exists, we cannot read the correct masses directly from it. Some other strategy must be employed. An obvious choice is to examine the number of solvable events for various given masses. We cannot simply maximize the number of solvable events and take the corresponding masses as our estimate; such a procedure would still favor low m_N values. Instead, we choose to look for the mass location where the number of solvable events changes drastically. This kind of location is most easily illustrated in one dimension. For example, in Fig. 6a, we fix m_Y and m_X to the correct (input) values and count the number of solvable events as a function of m_N. (In this figure and the following discussion, we use a bin size of 0.1 GeV.) A sudden drop around the correct m_N is obvious. Similarly, in Figs.
6b and 6c we have fixed m_Y and m_N (m_X and m_N) and also see clear "turning points" near the correct m_X (m_Y) mass. To pin down where the turning points are located, we fit Figs. 6a and 6c to two straight-line segments and take the intersection point as the turning point.

[We define a 'solved' event to be an event such that the given {m_Y, m_X, m_N} choices yield at least one solution to the final quartic equation that leads to physically allowed values for ~p_N and ~p_N'.]

Figure 5: The allowed mass region (in GeV) with smearing and wrong combinatorics.

Figure 6: One-dimensional fits by fixing the other two masses at the correct values.

We cannot fix a priori two of the masses to the correct values since they are unknown. On the other hand, searching for the sharpest turning point directly in the 3-dimensional space is numerically non-trivial. This observation motivates us to obtain the masses from a series of one-dimensional fits. We start from some random set of masses and carry out a recursive series of one-dimensional fits to the number of solved events as a function of m_N, m_X or m_Y, holding {m_Y, m_X}, {m_Y, m_N}, or {m_X, m_N} fixed, respectively. Each such one-dimensional fit gives us a sharp turning point that is used to set an updated value for m_N, m_X or m_Y, respectively. We use this new value in performing a fit for the next mass in the sequence in the next step. One might hope that this procedure will converge to the correct mass values but, in practice, even though the procedure passes through the correct mass point, the fitted masses keep increasing and the recursion does not stabilize at the correct mass point. However, as we will see, there is a simple way to get the masses out of the fits.

Having discussed the main ingredients of the method, we present a specific procedure for obtaining the masses. The procedure is applied to a data sample corresponding to 90 fb⁻¹ at the LHC, using the event rates and branching ratios obtained for SUSY Point I as detailed in Appendix B, which, in particular, gives the same masses as those employed in Sec. 3: {m_Y, m_X, m_N} = { . , . , . } GeV. Taking into consideration the decay branching ratios, the number of events is roughly 2900.
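A minimal version of the two-straight-line fit used to locate a turning point (our own sketch, not the authors' code) scans candidate break points, fits each side by least squares, and returns the intersection of the best-fitting pair of lines:

```python
import numpy as np

def turning_point(mass, n_solved):
    """Fit (mass, n_solved) with two straight segments and return the
    abscissa of their intersection (the 'turning point')."""
    mass = np.asarray(mass, dtype=float)
    n_solved = np.asarray(n_solved, dtype=float)
    best = None
    for k in range(2, len(mass) - 2):                  # candidate split index
        a1, b1 = np.polyfit(mass[:k], n_solved[:k], 1)
        a2, b2 = np.polyfit(mass[k:], n_solved[k:], 1)
        sse = (np.sum((a1 * mass[:k] + b1 - n_solved[:k]) ** 2)
               + np.sum((a2 * mass[k:] + b2 - n_solved[k:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, a1, b1, a2, b2)
    _, a1, b1, a2, b2 = best
    return (b2 - b1) / (a1 - a2)                       # the two lines cross here
```

Because the intersection is fit to many binned points, its statistical error is far smaller than the per-event momentum resolution, which is the averaging effect invoked later in the text.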
In order to mimic reality as much as possible, experimental resolutions and wrong combinatorics are included. To reduce the SM background, we require that all muons are isolated and pass the kinematic cuts:

|η_µ| < . , p_Tµ > 10 GeV, p_T^miss > 50 GeV. (4.1)

With these cuts, the four-muon SM background is negligible. The number of signal events is reduced from 2900 to about 1900.

The procedure comprises the following steps:

1. Randomly select masses m_Y > m_X > m_N that are below the correct masses (for example, the current experimental limits).

2. Plot the number of solved events, N_evt, as a function of one of the 3 masses in the recursive order m_N, m_X, m_Y, with the other two masses fixed. In the case of m_Y and m_N, we fit N_evt with two straight lines and adopt the mass value at the intersection point as the updated mass. In the case of m_X, the updated mass is taken to be the mass at the peak of the N_evt plot. A few intermediate one-dimensional fits are shown in Fig. 7.

3. Each time after a fit to m_N, record the number of events at the intersection (sometimes called the turning point) of the two straight lines, as exemplified in Fig. 6a. This event number at the turning point will in general be non-integer.

4. Repeat steps 2 and 3. The number of events recorded in step 3 will in general increase at the beginning and then decrease after some steps, as seen in Fig. 8. Halt the recursive procedure when the number of (fitted) events has sufficiently passed the maximum position.

5. Fit Fig. 8 to a (quartic) polynomial and take the position where the polynomial is maximal as the estimated m_N.

6. Keep m_N fixed at the value from step 5 and do a few one-dimensional fits for m_Y and m_X until they are stabilized. Take the final values as the estimates for m_Y and m_X.

Figure 7:
A few steps showing the migration of the one-dimensional fits. The middle curve in each plot corresponds to masses close to the correct values.
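Step 5 of the procedure above reduces to a small piece of curve fitting. A sketch of it (ours, with illustrative numbers) might look like:

```python
import numpy as np

def estimate_mN(mN_values, n_at_turning_point):
    """Step 5: fit the recorded turning-point event counts with a quartic
    polynomial and return the mass at which the polynomial peaks."""
    coeffs = np.polyfit(mN_values, n_at_turning_point, deg=4)
    grid = np.linspace(min(mN_values), max(mN_values), 2001)
    return grid[np.argmax(np.polyval(coeffs, grid))]
```

The peak position, rather than any single recorded count, supplies the m_N estimate, which keeps the result insensitive to fluctuations at individual points.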
A deeper understanding of our procedure can be gained by examining the graphical representation of the steps taken in the (m_Y, m_N) plane shown in Fig. 9. There, we display contours of the number of (fitted) events after maximizing over possible m_X choices. The contours are plotted at intervals of 75 events, beginning with a maximum value of 1975 events. As we go from 1975 to 1900 and then to 1825 events, we see that the separation between the contours decreases sharply and that there is a 'cliff' of falloff in the number of solved events beyond about 1825 events. It is the location where this cliff is steepest that is close to the input masses, which are indicated by the (red) star. The mass obtained by our recursive fitting procedure is indicated by the (blue) cross. It is quite close to the actual steepest-descent location.

Figure 8: The final plot for determining m_N. The position of the maximum of the fitted polynomial is taken to be the estimate of m_N.

It is possible that use of the contour plot, by visually picking the point of steepest descent, might also yield an accurate mass determination, comparable to or possibly even superior to that obtained (and specified in detail below) using the recursive fitting procedure. Roughly, the steepest-descent point corresponds to the point where the magnitude of ~∇ in mass space is maximized. Unfortunately, even after some smoothing, the second derivative is quite 'noisy' and therefore not particularly useful in a local sense. The one-dimensional fits give us a quick and intuitive way to find this maximum, and the associated recursive procedure has the advantage of being insensitive to statistical fluctuations in the number of events at a single point. Of course, if one has the computer power, probably the most accurate procedure would be to directly fit the 3-d N_evt vs. {m_Y, m_X, m_N} histogram. Fig. 9 is constructed from a 1-d projection of the 3-d space, and has therefore lost some information.

Following the recursive fitting procedure, the final values for the masses are determined to be { } GeV, which are all within a few GeV of the actual input values, { } GeV. The procedure is empirical in the sense that many of the steps could be modified and improved. In particular, above we adopted the criterion that the correct masses maximize the number of events at the turning points in the m_N fits, which is justified by Fig. 7a. Instead, we might opt to maximize the number of events in the m_X fits shown in Fig. 7b. One could also change the order of fits in step 2, change the fit function from straight lines to more complicated functions, etc. We have tried several different strategies and they yield similar results. Finally, one could simulate the signal for a mass point and directly generate Fig. 7, changing the masses until we get the best possible fit to the data; but this is very computationally intensive.

Figure 9: Contours for the number of solved events in the m_N vs. m_Y plane with 2000 events. The number of events is the maximum value obtained after varying m_X. Contours are plotted at intervals of 75 events, beginning with a maximum value of 1975. The red star is the position of the correct masses and the blue cross is the position of the fitted masses. The green dots correspond to a set of one-dimensional fits.

The recursive procedure does not provide an easy way to evaluate the errors in the mass determination. For this purpose, we generate 10 different data samples and apply the procedure to each sample. As above, each sample corresponds to 1900 experimental data points after cuts. Then, we estimate the errors of our method by examining the statistical variations of the 10 samples. This yields central masses and rms errors of

m_Y = 252. ± . , m_X = 130. ± . , m_N = 86. ± . . (4.2)

The statistical variations for the mass differences are much smaller:

m_Y − m_X = 119. ± . , m_X − m_N = 46. ± . . (4.3)

Compared with the correct values, M_A = { . , . , . } GeV, we observe small biases in the mass determination, especially for the mass differences, which means that our method has some "systematic errors". (The biases will, of course, depend upon the particular functions employed for the one-dimensional fits; our choice of straight lines is just the simplest.) One technique for determining the biases is to perform our analysis using Monte Carlo data. In particular, one could examine the plots of the number of 'solved' events vs. test mass as obtained from the data vs. those obtained from a Monte Carlo in which definite input masses (which are distinct from the test masses employed during our recursive procedure) are kept fixed. One would then search for those input masses for which the distributions of the solved event numbers from the Monte Carlo match those from the data.
Knowing the underlying Monte Carlo masses as compared to the extracted masses would allow us to subtract the differences, thereby removing the biases. This procedure would not appreciably change the errors quoted above. We believe that the biases are mainly a function of the underlying masses and broad kinematic event features. However, there may be some weak dependence of the biases on the actual model being employed. Within the context of a given, e.g. SUSY, model, the bias can be quite accurately determined.

In the above error estimation, we have neglected the uncertainties coming from varying the choice of the starting point in mass space used to initiate the recursive sequence of fits. This may introduce an error in the absolute mass scale of order the step size around the correct masses. For the masses chosen, it is about 1 GeV, much smaller than the uncertainties from varying data samples.

The reader may be surprised at the small size of the errors quoted above, given that the error in the measurement of the missing momentum of any one event is typically of order 5 GeV or larger. The explanation is similar to that associated with understanding the small errors for the edge locations in the edge approach. In the edge approach, the location of the edge for some mass variable m_vis is obtained by fitting data obtained at several m_vis values. Each such data point has many contributing events, and the average value will obviously have a much smaller error than the value for any one contributing event. The fit to the edge will further reduce sensitivity to individual events. In our approach, the edge in the distribution of N_evt as a function of one of the trial masses (m_N, m_X or m_Y) will similarly be an average over many events, and the uncertainty in the location of this edge will be much smaller than the uncertainties in the measurements of the lepton momenta and missing momentum of any one event.

Figure 10:
Fits with 1900 signal events (after cuts) and an equal number of background events. Separate numbers of signal (blue) and background (red) events are also shown.
For the point we have chosen, with a 4-muon + missing energy final state, the background is negligible. We examined backgrounds arising from
ZZZ, ZWW, tt̄, tt̄Z, tt̄bb̄, and bb̄bb̄. Muons from bottom and charm decays are never very hard or isolated, and can easily be separated from the signal with basic isolation criteria. Tri-boson production simply has tiny cross sections, especially after requiring all-leptonic decays.

Thus, we must 'artificially' introduce background in order to see what its effect might be on our procedures. For this purpose, we generate tt̄ events where the W's decay to muons. We require that the b quarks decay to muons, but do not require them to be isolated. In many ways, this is a near-worst-case background since it has a similar topology aside from the final b → µ + ... decays. However, the missing neutrinos imply that the missing momentum may be significantly different. As noted, this is not a realistic background, as it could be removed by simple isolation cuts on the muons.

Figure 11: m_N determination with different background-to-signal ratios. The dashed horizontal line corresponds to the correct m_N.

Adding a number of background events equal to the number of signal events, i.e. ... the same 10 sets of signal events as in the previous subsection, but varied the number of background events according to the ratio B(ackground)/S(ignal) = 0, . , . , . For B/S ≥ 1, the maximum in the m_N determination is obscured or even lost and we start to get random results. For B/S ≲ 0.2, we are close to the B = 0 results.

It is important to emphasize that the above analysis is pessimistic, in that it assumes that we do not understand the nature/source of the background events. One procedure that could significantly improve on the uncertainties associated with the background would be to Monte Carlo the background, or use extrapolations of measured backgrounds (e.g. those as measured before the cuts that make the signal a dominant, as compared to a small, component of the observed events), and then apply our recursive procedure to the known background, at each stage subtracting off the background contribution to a given plot of the number of events vs. m_N, m_Y or m_X. After such subtraction, the recursive procedure will yield essentially identical results to those obtained in the absence of background, unless the background itself is not smooth in the vicinity of the 'turning' points.

The importance of finding cuts that both select a given topology and minimize background is clear. If it should happen that we assume the wrong topology for the events retained, then our analysis itself is likely to make this clear. Indeed, events with the "wrong" topology would almost certainly yield a smooth distribution in plots of retained event number vs. any one of the masses of the resonances envisioned as part of the wrong topology. It is only when the correct topology is employed that sharp steps will be apparent in all the event number vs. resonance mass plots.

Another important situation to consider is that in which it is impossible to find a set of cuts that isolates just one type of decay topology, so that there are several signal processes contributing after a given set of cuts. It is quite easy to find situations where there are different signal processes yielding very similar final decay topologies, all of which would pass through our analysis. One must then look for additional tricks in order to isolate the events of interest.
In some cases, this is possible on a statistical, but not event-by-event, basis. The SPS1a SUSY point provides an interesting example that we will consider shortly.
5. Other processes and mass points
Our method is generic for the topology in Fig. 2 and, in particular, is not restricted to the SUSY process we have considered so far. The statistical variations and biases probably do depend to some extent on the process. For example, if the visible particles 5 (6) and 3 (4) are of different species, the number of wrong combinatorics will be reduced and we would expect a better determination of the masses. On the other hand, if one or more of the visible particles are jets, the experimental resolution, and therefore the statistical error, will be worse than in the 4-lepton case.
The errors in the mass determination also depend on the mass point, especially on the two mass differences, ∆m_YX = m_Y − m_X and ∆m_XN = m_X − m_N. In Fig. 12, a set of one-dimensional fits is shown for the mass point M = { . , . , . } GeV (which we label as Point II). We will assume 2000 events after cuts, very similar to the 1900 events remaining after cuts for Point I. Point II differs from Point I in that for Point II ∆m_YX < ∆m_XN, while for Point I ∆m_YX > ∆m_XN. The double-peak structure in the Point II m_X fit (Fig. 12b) is evident. The curve to the right of the turning point in Fig. 12c is also "bumpy" compared with Fig. 6c. These features are induced by wrong combinatorics. In the process we consider, all visible particles are muons, so they can be misidentified as one another and still yield solutions. Roughly speaking, ∆m_YX and ∆m_XN determine the momenta of the particles 5 (6) and 3 (4) in Fig. 2, respectively. Therefore, the chance that a wrong combinatoric yields solutions is enhanced when, for example, ∆m_YX is close to the correct value of ∆m_XN. When the two mass differences are close to each other, the turning point is smeared. Nonetheless, with 2000 events after cuts, the errors obtained for the masses are similar to those obtained for Point I.

Figure 12:
One-dimensional fits for mass point { } GeV.
Another interesting case is that of m_N being zero or very small. As for the previous case, we have arbitrarily used a sample of 2000 events after cuts. Because the one-dimensional fits proceed in the direction of increasing masses, we will miss the correct masses even when we start from m_N = 0. Since we always fit the m_N plot to two line segments, it will never yield m_N = 0. However, we can distinguish this case by looking at the peak number of events in the m_X fits. For example, considering the mass point { } GeV (which we call Point III), we start from m_X = 80. , m_N = 0. , and fit in the order m_Y → m_X → m_N. The first few fits yield { . , . , } → { . , . , } → { . , . , . } → ··· After only two steps, the Y and X masses are adjusted close to the correct values. Examining the peak number of events in the m_X fits (Fig. 13), we find that the number is maximized in the first m_X fit. This is clearly different from the previous cases, where the number of events always increases for the first few m_X fits (see Fig. 7b), and indicates that m_N is near zero.

It is desirable to compare directly to the results obtained by others for the SPS1a SUSY parameter point. We perform the analysis using the same 4µ χ̃χ̃ final state that we have been considering. For the usual SPS1a mSUGRA inputs (see Appendix B), the masses for Y = χ̃, X = µ̃_R and N = χ̃ (from ISAJET 7.75) are 180. , ... . The dominant decay of the χ̃ is χ̃ → τ τ̃.

Figure 13: Peak number of events in m_X fits for mass point { } GeV.

The branching ratio for χ̃ → µ µ̃_R is such as to leave only about 1200 events in the 4µ χ̃χ̃ final state after L = 300 fb⁻¹ of accumulated luminosity. Cuts reduce the number of events further, to about 425. This is too few for our technique to be as successful as in the earlier considered cases. After including combinatorics and resolution we obtain:

m_Y = 188 ± 12 GeV, m_X = 151 ± 14 GeV, m_N = 100 ± 13 GeV. (5.1)

In Fig. 14, we give an SPS1a plot analogous to Fig. 8. Errors are determined by generating many such plots for different samples of 425 events. Note the vertical scale: the change in the number of events as one varies m_N is quite small for small event samples, and this is what leads to the larger errors in this case.

In principle, we must also take into account the fact that the χ̃ → τ τ̃ decays provide a background to the purely muonic final state. The dominant decay χ̃ → τ τ̃ has a branching ratio that is a factor of ∼ 14 times larger than that for χ̃ → ℓ ℓ̃_R. (This is, of course, due to the fact that the χ̃ prefers to couple to left-handed slepton components, which are significant for the τ̃.) The τ̃ will then decay to τ χ̃. If both τ's then decay to µνν, then χ̃ → τ τ̃ events will be likely to contaminate the χ̃ → µ µ̃_R sample. Fortunately, this contamination is not huge. The relevant effective branching ratios for χ̃χ̃ → τ τ̃ τ τ̃ → µ's χ̃χ̃ and χ̃χ̃ → τ τ̃ µ µ̃_R → µ's χ̃χ̃ are

[ BR(χ̃ → τ τ̃ → ττ χ̃ → µµ ν's χ̃) / BR(χ̃ → µ µ̃_R → µµ χ̃) ] ∼ [ × (0. ) ] ∼ .18 (5.2)

and

2 [ BR(χ̃ → τ τ̃ → ττ χ̃ → µµ ν's χ̃) / BR(χ̃ → µ µ̃_R → µµ χ̃) ] ∼ . , (5.3)
Figure 14:
Fitted number of events at the turning point as a function of m N for the fitsfor the SPS1a case. respectively. The contamination levels from these backgrounds are further reduced byfactors of ∼ e χ e χ → τ e τ τ e τ final state and by ∼ e χ e χ → τ e τ µ e µ R finalstate after imposing the simple cuts of Eq. (4.1) (due to the softer nature of the µ ’scoming from the τ decays), implying contamination at about the 3 .
6% and 40% levels, respectively. Clearly, it is important to reduce this level of contamination given that m_τ̃_1 is smaller than m_ℓ̃_R by about 15 GeV, so that, to the extent that events containing χ̃_2^0 → τ τ̃_1 decays remain in our sample, they might contribute additional structures to our plots of number of events vs. mass. This reduction can be accomplished on a statistical basis using a further trick analogous to that discussed (but not, we believe, actually employed) in Ref. [10]. They note that the decay sequences χ̃_2^0 → µ− e+ χ̃_1^0 and χ̃_2^0 → µ+ e− χ̃_1^0 are unique to χ̃_2^0 → τ τ̃_1. Thus, when considering just the one-sided decay chain situation, one can subtract off (on a statistical basis, i.e. after many events) the χ̃_2^0 → τ τ̃_1 background by

  N(χ̃_2^0 → µ µ̃_R → µµ χ̃_1^0) = N(χ̃_2^0 → µµ χ̃_1^0) − N(χ̃_2^0 → µe χ̃_1^0),   (5.4)

where N is the number of 'solved' events as a function of one of the unknown on-shell masses. In our case, where both chain decays are considered simultaneously, we have 4µ χ̃_1^0 χ̃_1^0 states arising from χ̃_2^0 χ̃_2^0 → τ± τ̃_1∓ τ± τ̃_1∓ decays and χ̃_2^0 χ̃_2^0 → τ± τ̃_1∓ µ± µ̃_R∓ decays in addition to those from our χ̃_2^0 χ̃_2^0 → µ± µ̃_R∓ µ± µ̃_R∓ signal. To subtract off the background SUSY events from the former two decay chains, we can employ the following subtraction (where the initial χ̃_2^0 χ̃_2^0 and final χ̃_1^0 χ̃_1^0 are implicit):

  N(µ± µ̃_R∓ µ± µ̃_R∓ → µ+µ−µ+µ−)
    = N(µ+µ−µ+µ−) − N(e+µ−µ+µ−) + N(e+e+µ−µ−)
    = N(µ+µ−µ+µ−) − (1/4)[N(e+µ−µ+µ−) + N(e−µ+µ−µ+) + N(µ+e−e+e−) + N(µ−e+e−e+)]
      + (1/2)[N(e+e+µ−µ−) + N(e−e−µ+µ+)],   (5.5)

where the latter form is likely to have the smaller statistical error. An experimental indicator of the sensitivity to statistics could be gained by examining the different possible equivalent subtractions, of which only two are indicated above. If one were happy to ignore the 3.6% contamination from χ̃_2^0 χ̃_2^0 → τ± τ̃_1∓ τ± τ̃_1∓ decays, one could then use a simpler form to subtract off the dominant contamination from χ̃_2^0 χ̃_2^0 → τ± τ̃_1∓ µ± µ̃_R∓ decays, namely

  N(µ± µ̃_R∓ µ± µ̃_R∓ → µ+µ−µ+µ−) ∼ N(µ+µ−µ+µ−) − N(e+µ−µ+µ−)
    ∼ N(µ+µ−µ+µ−) − (1/2)[N(e+µ−µ+µ−) + N(e−µ+µ−µ+)].   (5.6)

We have not actually performed this kind of analysis using any of the possible subtractions to see how well we do, but we expect that the net background contamination will be equivalent to B/S ≲ 0.1, a level for which our techniques work very well and for which the errors quoted earlier for the SPS1a point using the 4µ final state will not be increased by very much.

Of course, the same analysis as performed for the 4µ final state can also be used for the 2µ2e and 4e final states. Combinatorics are less of an issue for the 2µ2e final state than for the 4µ and 4e final states. The 4e-channel event number is essentially the same as the 4µ-channel event number, and the 2µ2e-channel event number is roughly twice as large. Combining all channels (as appropriate if the ẽ_R has mass very close to the µ̃_R, as predicted by the model), one obtains a total of about 1700 events and correspondingly smaller errors on the m_χ̃_2^0, m_ℓ̃_R and m_χ̃_1^0 determinations.

Of course, as the observant reader may have noticed, to get 1700 events requires running at high luminosity, whereas the simulations referenced so far have employed the p_T^miss resolution expected at low-luminosity running. The p_T^miss low-luminosity resolution is about 5.[...] the 4µ channel for the same cuts (and L = 300 fb^-1). The resulting mass determinations obtained using the 10 independent Monte Carlo experiments are

  m_Y = 187 ± 10 GeV,  m_X = 151 ± 10 GeV,  m_N = 98 ± [...] GeV,   (5.7)

where the errors are, as always, rms errors. In short, we get even smaller errors than for low-luminosity running. After combining the 4µ, 4e and 2µ2e channels, assuming ẽ_R and µ̃_R degeneracy, our mass determination errors are slightly above 4 GeV.
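As an illustration, the statistical subtraction of Eq. (5.6) is just arithmetic on per-channel counts of solved events. The sketch below uses a hypothetical dictionary of channel counts and our own labeling convention; it is not code or data from this analysis, and the normalization of the background estimator follows the charge-symmetrized form discussed above.

```python
# Minimal sketch of the flavor subtraction of Eq. (5.6): estimate the
# number of genuine 4-mu signal events by subtracting the e-mu channels
# that can only arise from chains containing stau decays.  The channel
# labels and counts are hypothetical illustrations.

def subtracted_signal(counts):
    """counts: dict mapping a 4-lepton channel label to the number of
    'solved' events observed in that channel at a given test mass."""
    n_4mu = counts["mu+mu-mu+mu-"]
    # Charge-symmetrized background estimator.
    bkg = 0.5 * (counts["e+mu-mu+mu-"] + counts["e-mu+mu-mu+"])
    return n_4mu - bkg

counts = {"mu+mu-mu+mu-": 480, "e+mu-mu+mu-": 22, "e-mu+mu-mu+": 18}
print(subtracted_signal(counts))  # 480 - 0.5*(22 + 18) = 460.0
```

In practice one would repeat this subtraction in every test-mass bin, so that the subtracted curve of solved events vs. mass retains the edge structure of the signal.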
6. Summary and Discussion
For any theory that simultaneously provides a solution of the hierarchy problem and a dark matter particle whose stability is guaranteed by a symmetry, the relevant LHC events will be ones in which the heavier partners of the SM particles are pair produced, with each chain decaying down to largely visible SM particles plus the dark matter particle, which we denote by N. In many interesting cases, towards the end of each such chain two visible SM particles emerge along with the invisible dark matter particle, e.g. Y → µX → µµN, with the preceding parts of the decay chains giving rise to jets. In other cases, two Y particles are directly produced and initiate two such chain decays. In this paper, we have developed a highly effective technique that uses the kinematic information in a typical event containing two Y → µX → µµN decay chains to determine not just the mass differences in the chain decay, but also the absolute mass scale, using only the measured µ momenta and the overall visible and missing transverse momenta. Since we use purely kinematic information, our mass determination does not require any assumptions regarding particle spins, shapes of distributions, cross sections and so forth. Further, our procedure works whether or not we know the topology of each of the chains that precedes the Y → µX → µµN stage. This can be a big advantage. For example, in the supersymmetry context it allows us to combine g̃- and q̃-initiated chains.

In our study, we have included resolution smearing for muon momenta and missing momentum as incorporated in the ATLFAST simulation program. We have also included the full combinatorics appropriate to the jets + 4µ + NN final state. Assuming of order 2000 events after cuts and ATLFAST resolutions appropriate to low-luminosity running, we have found statistical errors of order 4 GeV for the individual Y, X and N masses, assuming a reasonable background-to-signal ratio, B/S ≲ 0.5. There is also a small systematic bias in the extracted masses. However, this bias can be removed using Monte Carlo simulations once the masses are fairly well known. The appropriate procedure is described in Sec. 4.1. We have not yet performed the associated, highly computer-intensive procedure, but believe that the systematic biases can be reduced below 1 GeV (a residual that we think might arise from possible model dependence of the kinematic distributions).

As a particular point of comparison with the many earlier studies that use the mass-edge technique, we have examined the standard SPS1a point. Following our procedure, we are left with about 1920 events (averaging over 10 Monte Carlo "experiments") in the jets + 4µ, jets + 2e2µ and jets + 4e channels after cuts, assuming an integrated luminosity of 300 fb^-1 and employing resolutions appropriate to high-luminosity running. The errors on m_χ̃_2^0, m_ℓ̃_R and m_χ̃_1^0 are all between 4 GeV and 5 GeV if µ̃_R and ẽ_R mass degeneracy is assumed. The previous mass-edge studies make this same assumption and employ all the final SM particles of the full g̃ → b b̃ → bb χ̃_2^0 → bbℓ ℓ̃ → bbℓℓ χ̃_1^0 decay chain, but examine only one chain at a time. Only one of these mass-edge studies claims an accuracy (∼ ±[...] for m_χ̃_2^0, m_ℓ̃ and m_χ̃_1^0) for the same channels and integrated luminosity that is competitive with the small error we obtain.

By comparing the SPS1a results obtained for high-luminosity resolutions to those for this same point using low-luminosity resolutions (as summarized in the previous section), we found the important result that the accuracy of our mass determinations was very little influenced by whether we employed low- or high-luminosity resolution for p_T^miss, the latter being essentially twice the former.
That our ability to locate the "edge" in a plot of the number of reconstructed events, N_evt, as a function of the test value of, say, m_χ̃_1^0, is not noticeably affected by a factor-of-two deterioration in the p_T^miss resolution is a sign of the robustness of our approach.

Accuracies of order 4-5 GeV would, for example, allow the measured masses to be extrapolated to a high scale (e.g. the coupling-constant unification scale in SUSY) where they might follow a meaningful pattern that would determine the more fundamental structure of the new-physics theory. Further, an accuracy of this order would allow experimentalists to determine the t and W masses simultaneously in the t t̄ di-lepton decay topology. Or, given that the W mass is already quite well known, they could impose this additional constraint in our context and get an excellent t mass determination.

The heart of our technique is the fact that, by considering both decay chains in a typical LHC event together, a choice for the chain decay masses M = {m_Y, m_X, m_N} (see Fig. 2), in combination with the measured momenta of the 4 visible and measurable SM particles emitted in the two chains, implies a discrete (sometimes even unique) set of 3-momenta for the two final-state N's. (One is solving a quartic equation.) Conversely, if we have already used our procedure to determine the masses M = {m_Y, m_X, m_N} to good precision, we can invert the process. For each event, we can input the known masses and obtain a discrete set of choices for the momenta, p⃗_N and p⃗_N′, of the final invisible particles. For each discrete choice, the 4-momenta of all particles in the decay chains are then determined. These 4-momenta can then be input to a given model (with definite spins for the Y, X, N and definite decay correlations and so forth). One can then test the experimental distributions (e.g. of correlation angles, of masses constructed from the visible SM particles, and so forth) against predictions obtained for the model using a Monte Carlo.
Presumably, this will provide strong discrimination between different models that share the same already-determined chain decay masses. The only question is to what extent the possibility of more than one discrete solution per event will confuse the distributions obtained from the Monte Carlo.

Conversely, it is clear that determining the spins of all the particles in a chain of decays can be difficult without a relatively precise, purely kinematic determination of the masses. In particular, we expect that angular correlations and the like (obtained from Monte Carlos that assume a particular model including spins) will be strongly influenced by the masses. Confusion between two different models with differing spins and masses can be anticipated in the absence of an independent, purely kinematic determination of the masses.

Overall, we claim that our techniques provide some powerful new tools for doing precision physics at the LHC in an environment where new-physics events contain invisible particles of unknown mass. We hope the experimental community will pursue the approaches we have analyzed. We do not anticipate that fully realistic simulations will lead to significantly larger errors for new particle masses than those we have found, but it is clearly important to verify that this is the case.
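The edge-location strategy summarized above (scan a test mass, attempt the kinematic solution for every event, and record the number of solvable events) can be sketched schematically. The solver interface below is our own invention standing in for the full procedure of Appendix A:

```python
# Toy sketch of the mass scan: for each test value of one unknown mass,
# count the events for which the kinematic quartic has at least one real
# solution.  'solve_event' is an assumed interface, not code from the
# paper; it should return the list of real solutions for one event.

def scan_solvable(events, test_masses, solve_event):
    """Return {test_mass: number of events with >= 1 real solution}."""
    counts = {}
    for m in test_masses:
        counts[m] = sum(1 for ev in events if len(solve_event(ev, m)) > 0)
    return counts

# Dummy solver: pretend an event is solvable iff the test mass does not
# exceed the event's true kinematic limit.
events = [{"limit": 100.0}, {"limit": 120.0}, {"limit": 95.0}]
dummy = lambda ev, m: [m] if m <= ev["limit"] else []
print(scan_solvable(events, [90.0, 110.0, 130.0], dummy))
# {90.0: 3, 110.0: 1, 130.0: 0}
```

The "edge" in the number of solvable events as a function of the test mass is what carries the mass information in the real analysis.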
Acknowledgments
This work was supported in part by U.S. Department of Energy grant No. DE-FG03-91ER40674. JFG and HCC thank the Aspen Center for Physics where a portion ofthis work was performed.
Appendices

A. Solution Procedure
To determine whether a given event with the topology of Fig. 2 is consistent with a given mass hypothesis, we proceed as follows. We envision the process pp → (135) + (246), followed by (135) → (13) + 5 → 1 + 3 + 5 and (246) → (24) + 6 → 2 + 4 + 6, where the composites (135), (246), (13) and (24) are to be thought of as single on-shell particles: in the notation of Fig. 2, (135) = Y, (246) = Y′, (13) = X, (24) = X′, 1 = N and 2 = N′. We will be assuming input values for m_135, m_246, m_13, m_24, m_1 and m_2, with m_135 = m_246, m_13 = m_24 and m_1 = m_2. The cross section takes the form

  dσ = [1/(2s(2π)^14)] ∫ dx_1 dx_2 |M|^2 f(x_1) f(x_2) (d^3p_1/E_1) ··· (d^3p_6/E_6)
       × δ^4[x_1 p_A + x_2 p_B − (p_1 + p_2 + p_3 + p_4 + p_5 + p_6)].   (A.1)

We first convert

  dx_1 dx_2 = (2/s) dE_tot dp_ztot,   (A.2)

introduce on-shell masses for the intermediate particles, and introduce on-shell δ functions for the invisible particles 1 and 2 to yield

  dσ = [1/(4(2π)^14)] ∫ dE_tot dp_ztot dm_135^2 dm_246^2 dm_13^2 dm_24^2 |M|^2 f(x_1) f(x_2) (d^3p_3/E_3)(d^3p_4/E_4)(d^3p_5/E_5)(d^3p_6/E_6)
       × d^4p_1 δ(p_1^2 − m_1^2) d^4p_2 δ(p_2^2 − m_2^2)
       × δ^4[x_1 p_A + x_2 p_B − (p_1 + p_2 + p_3 + p_4 + p_5 + p_6)]
       × δ[(p_1 + p_3 + p_5)^2 − m_135^2] δ[(p_2 + p_4 + p_6)^2 − m_246^2]
       × δ[(p_1 + p_3)^2 − m_13^2] δ[(p_2 + p_4)^2 − m_24^2]
     = [1/(4(2π)^14)] ∫ dp_ztot dm_135^2 dm_246^2 dm_13^2 dm_24^2 |M|^2 f(x_1) f(x_2) (d^3p_3/E_3)(d^3p_4/E_4)(d^3p_5/E_5)(d^3p_6/E_6)
       × d^4p_1 δ(p_1^2 − m_1^2) dE_2 δ(p_2^2 − m_2^2)
       × δ[(p_1 + p_3 + p_5)^2 − m_135^2] δ[(p_2 + p_4 + p_6)^2 − m_246^2]
       × δ[(p_1 + p_3)^2 − m_13^2] δ[(p_2 + p_4)^2 − m_24^2],   (A.3)

where in the last step we eliminated d^3p_2 using the 3-momentum conservation part of the δ^4 function and eliminated E_tot using its energy part. For fixed values of the unknown masses, we end up with the 4 unknowns p_ztot and p⃗_1, to be solved for using the 4 remaining δ functions. We now define

  p_vis ≡ p_3 + p_4 + p_5 + p_6.   (A.4)

Assuming no transverse momentum for the Y + Y′ = 1 + 2 + 3 + 4 + 5 + 6 system, we then have

  p_1 · p_3 = E_1 E_3 − p_1z p_3z − p_1y p_3y − p_1x p_3x
  p_2 · p_4 = E_2 E_4 − (p_ztot − p_zvis − p_1z) p_4z − (−p_yvis − p_1y) p_4y − (−p_xvis − p_1x) p_4x
  p_1 · p_5 = E_1 E_5 − p_1z p_5z − p_1y p_5y − p_1x p_5x
  p_2 · p_6 = E_2 E_6 − (p_ztot − p_zvis − p_1z) p_6z − (−p_yvis − p_1y) p_6y − (−p_xvis − p_1x) p_6x.   (A.5)

(Transverse momentum for the Y + Y′ system can, and must, be included in the obvious way. We compute it as the negative of the sum of the observed momenta of particles 3, 4, 5 and 6 and the missing momentum.) We next combine the last two δ functions and consider the requirement (again, recall that we are assuming some input mass values for the intermediate on-shell particle masses)

  2 p_1 · p_3 − 2 p_2 · p_4 + Δb_1 ≡ G_1 = 0,   (A.6)

where

  Δb_1 ≡ m_1^2 + m_3^2 − m_13^2 + m_24^2 − m_2^2 − m_4^2.   (A.7)

Similarly we combine the (135) and (246) δ functions to obtain

  2 p_1 · p_5 − 2 p_2 · p_6 + Δb_2 ≡ G_2 = 0,   (A.8)

where

  Δb_2 ≡ m_5^2 + m_13^2 − m_135^2 + m_246^2 − m_6^2 − m_24^2 + 2 p_3 · p_5 − 2 p_4 · p_6.   (A.9)

Of course, m_5 and m_6 are measured experimentally (and are typically small unless one is a W or Z), and 2 p_3 · p_5 and 2 p_4 · p_6 are also computable from the experimental event. Further, we are assuming input values for m_135, m_246, m_13 and m_24. The above is a convenient organization, since Δb_1 = 0 and Δb_2 reduces to just momentum dot products when the two decay chains are identical.

We now implement directly the m_13 and m_135 δ functions:

  m_13^2 − m_1^2 − m_3^2 − 2 p_1 · p_3 ≡ Δb_3 − 2 p_1 · p_3 ≡ G_3 = 0,   (A.10)

and

  m_135^2 − m_13^2 − m_5^2 − 2 p_3 · p_5 − 2 p_1 · p_5 ≡ Δb_4 − 2 p_1 · p_5 ≡ G_4 = 0,   (A.11)

where again 2 p_3 · p_5 is determined experimentally and the masses are being input. We now solve these 4 equations for the 4 unknowns p_ztot, p_1z, p_1y and p_1x. We write the solutions in the form

  p_1x = c_1x E_1 + c_2x E_2 + c_3x
  p_1y = c_1y E_1 + c_2y E_2 + c_3y
  p_1z = c_1z E_1 + c_2z E_2 + c_3z
  p_ztot = c_1zt E_1 + c_2zt E_2 + c_3zt,   (A.12)

where the c's are somewhat complicated functions of the masses, energies and momenta of the visible particles 3, 4, 5 and 6.
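Numerically, the step from G_1 = G_2 = G_3 = G_4 = 0 to Eq. (A.12) amounts to three solves of the same 4x4 linear system, since the four conditions are linear in (p_1x, p_1y, p_1z, p_ztot) with a right-hand side linear in E_1 and E_2. A sketch with hypothetical numerical stand-ins for the matrix and vectors (which in practice are built from the visible momenta and input masses):

```python
import numpy as np

# Sketch of the linear step behind Eq. (A.12): with E1 and E2 held fixed,
# the conditions G1..G4 can be written M @ x = u*E1 + v*E2 + w for
# x = (p1x, p1y, p1z, pztot).  M, u, v, w below are hypothetical numbers
# standing in for combinations of visible momenta and input masses.
M = np.array([[ 2.0, -1.0,  0.5,  0.3],
              [ 0.4,  1.5, -0.7,  1.1],
              [-0.9,  0.2,  1.8, -0.5],
              [ 1.2,  0.6, -0.3,  2.0]])
u = np.array([ 1.0, 0.5, -0.2,  0.8])  # coefficient of E1 on the RHS
v = np.array([-0.3, 1.1,  0.7, -0.6])  # coefficient of E2 on the RHS
w = np.array([ 0.2, -0.4, 0.9,  0.1])  # E-independent piece

# The 'c' coefficients of Eq. (A.12) are three solves with the same M:
c1 = np.linalg.solve(M, u)  # multiplies E1
c2 = np.linalg.solve(M, v)  # multiplies E2
c0 = np.linalg.solve(M, w)  # constant piece

# For any (E1, E2), x = c1*E1 + c2*E2 + c0 satisfies all four conditions.
E1, E2 = 130.0, 155.0
x = c1 * E1 + c2 * E2 + c0
print(np.allclose(M @ x, u * E1 + v * E2 + w))  # True
```

Because M does not depend on E_1 or E_2, the linear solve is done once per event and mass hypothesis, and the E dependence of Eq. (A.12) is exact.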
The Jacobian for the variable change p_1x, p_1y, p_1z, p_ztot → G_1, G_2, G_3, G_4 is easily computed as

  J = 16 [ p_4z p_6x p_3y p_5z − p_4z p_6x p_3z p_5y − p_4z p_6y p_3x p_5z + p_4z p_6y p_3z p_5x
         − p_6z p_4x p_3y p_5z + p_6z p_4x p_3z p_5y + p_6z p_4y p_3x p_5z − p_6z p_4y p_3z p_5x ]
    = 16 [ p_4z p⃗_6 − p_6z p⃗_4 ] · ( p⃗_3 × p⃗_5 ).   (A.13)

It is a function only of observed momenta. For the next stage, we combine the expressions of Eq. (A.12) with

  p_2x = −p_xvis − p_1x,  p_2y = −p_yvis − p_1y,  p_2z = p_ztot − p_zvis − p_1z,   (A.14)

and solve the on-shell δ-function equations for particles 1 and 2 (the invisible particles),

  0 = E_1^2 − (p_1x)^2 − (p_1y)^2 − (p_1z)^2 − m_1^2,   (A.15)
  0 = E_2^2 − (p_2x)^2 − (p_2y)^2 − (p_2z)^2 − m_2^2,   (A.16)

for E_1 and E_2. For convenience, we rewrite Eqs. (A.15) and (A.16) in the respective forms

  a_11 E_1^2 + a_12 E_1 E_2 + a_22 E_2^2 + a_1 E_1 + a_2 E_2 + a_0 ≡ F_A = 0,   (A.17)
  b_11 E_1^2 + b_12 E_1 E_2 + b_22 E_2^2 + b_1 E_1 + b_2 E_2 + b_0 ≡ F_B = 0,   (A.18)

where the a_ij, b_ij, a_i, b_i as well as a_0 and b_0 are functions of the c's, m_1, m_2 and the components of p⃗_vis. We now take

  F_A − (a_11/b_11) F_B = 0   (A.19)

and solve the resulting equation, which is linear in E_1, to obtain

  E_1 = [ (a_11 b_22 − a_22 b_11) E_2^2 + (a_11 b_2 − a_2 b_11) E_2 + (a_11 b_0 − a_0 b_11) ]
        / [ (a_12 b_11 − a_11 b_12) E_2 + (a_1 b_11 − a_11 b_1) ].   (A.20)

We now substitute this result into Eq. (A.17) to obtain the final quartic equation for E_2 of the form

  A E_2^4 + B E_2^3 + C E_2^2 + D E_2 + E = 0,   (A.21)

where A, B, C, D and E are functions of the a_ij, b_ij, a_i, b_i, a_0 and b_0. We then employ a standard computer subroutine for obtaining the 4 roots of this quartic equation. For typical input visible momenta, some of the roots will be acceptable real solutions and some will be complex. We retain all real solutions. (The Jacobian for the F_A, F_B → E_1, E_2 transformation is easily computed.) Once real values for E_1 and E_2 are obtained, these can be substituted into Eq. (A.12) to determine the 3-vector components of p_1 and the z component of p_tot. The components of p_2 are then obtained by momentum conservation. At this point, the invisible 4-momenta are fully determined and could potentially be employed in a model matrix element.
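The "standard computer subroutine" step is straightforward with a numerical polynomial root finder. A sketch, with hypothetical quartic coefficients in place of the A ... E built from the a's and b's:

```python
import numpy as np

# Sketch of the final step of Appendix A: solve the quartic
# A*E2^4 + B*E2^3 + C*E2^2 + D*E2 + E = 0 numerically and keep only the
# real, positive-energy roots.  The coefficients below are hypothetical;
# in practice they come from Eqs. (A.12), (A.17) and (A.18).

def real_positive_roots(coeffs, tol=1e-9):
    """coeffs = [A, B, C, D, E]; return sorted real roots with E2 > 0."""
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < tol].real
    return sorted(float(r) for r in real if r > 0)

# Example: a quartic with known roots 1, 2, 3, -4:
# (x-1)(x-2)(x-3)(x+4) = x^4 - 2x^3 - 13x^2 + 38x - 24
roots = real_positive_roots([1, -2, -13, 38, -24])
print([round(r, 6) for r in roots])  # [1.0, 2.0, 3.0]
```

Each retained root, together with the corresponding E_1 from Eq. (A.20), yields one discrete candidate for the invisible 4-momenta of the event.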
B. SUSY points
In this appendix, we give details regarding the SUSY points simulated.

B.1 Point I
We input the low-scale parameters

  µ = +300 GeV,  tan β = 10,  (M̃_1, M̃_2, M̃_3) = (90, [...], [...]) GeV,
  m̃^(1,2,3)_L = m̃^(3)_E = 1000 GeV,  m̃^(1,2)_E = 120 GeV,
  m̃^(1,2)_Q = 400 GeV,  m̃^(1,2)_U,D = 300 GeV,  m̃^(3)_Q = m̃^(3)_U,D = 1000 GeV,   (B.1)

where the m̃'s are the soft slepton and squark masses, and the M̃'s are the gaugino masses. L and Q refer to the slepton and squark SU(2)_W doublets and E, U and D refer to the slepton and squark singlets. Superscripts give the generations. The decay chain of interest is

  q̃_L → q χ̃_2^0,  χ̃_2^0 → µ µ̃_R,  µ̃_R → µ χ̃_1^0.   (B.2)

Using the input soft parameters as specified above and SPheno 2.2.3 [22], the sparticle masses of relevance for our discussion are (all in GeV)

  m_g̃ ∼ [...],  m_d̃L,s̃L ∼ [...],  m_ũL,c̃L ∼ [...],  m_χ̃_2^0 ∼ [...],  m_µ̃R ∼ [...],  m_χ̃_1^0 ∼ [...].   (B.3)

The relevant squark pair production cross section is

  σ( pp → Σ_{q,q′=u,d,c,s} q̃_L q̃′_L + Σ_{q,q′=u,d,c,s} q̃_L q̃′_L + Σ_{q,q′=u,d,c,s} q̃_L q̃′_L ) ∼ [...] fb,   (B.4)

coming from all sources including gg fusion, uu fusion, etc. The branching ratios relevant to the particular decay chain we examine are

  BR(q̃_L → q χ̃_2^0) ∼ 0.27 (q = u, d, c, s),  BR(χ̃_2^0 → µ̃_R^± µ^∓) ∼ [...],  BR(µ̃_R^± → µ^± χ̃_1^0) = 1.   (B.5)

The net effective branching ratio for the double decay chain is

  BR(q̃_L q̃_L → 4µ χ̃_1^0 χ̃_1^0) ∼ (0.[...] × 0.[...])^2 ∼ [...]   (B.6)

for any one q̃_L choice. The effective cross section for the 4µ χ̃_1^0 χ̃_1^0 final state is then

  σ(4µ χ̃_1^0 χ̃_1^0) ∼ [...] fb × [...] ∼ [...] fb.   (B.7)

For an integrated luminosity of L = 90 fb^-1, this gives us 2900 4µ χ̃_1^0 χ̃_1^0 events before any cuts are applied. After cuts, we are left with about 1900 events.

B.2 Point II

Point II is defined by the following input low-scale SUSY parameters:

  µ = +300 GeV,  tan β = 10,  (M̃_1, M̃_2, M̃_3) = (90, [...], [...]) GeV,
  m̃^(1,2,3)_L = m̃^(3)_E = 1000 GeV,  m̃^(1,2)_E = 140 GeV,
  m̃^(1,2)_Q = 400 GeV,  m̃^(1,2)_U,D = 300 GeV,  m̃^(3)_Q = m̃^(3)_U,D = 1000 GeV.   (B.8)

Using SPheno 2.2.3 [22], the relevant chain-decay masses are

  { m_Y = m_χ̃_2^0,  m_X = m_µ̃R,  m_N = m_χ̃_1^0 } = { [...], [...], [...] } GeV.   (B.9)

However, we do not employ the cross sections and branching ratios predicted by these parameters. Instead, we assume 2000 available experimental points after cuts, close to the 1900 left after cuts in the case of Point I. This allows us to see how errors change in the case where m_Y − m_X is much smaller than in the case of Point I.

B.3 Point III
The masses used in this case are obtained from PYTHIA [21] using the low-scale parameters

  µ = +3000 GeV,  tan β = 10,  (M̃_1, M̃_2, M̃_3) = (0.[...], [...], [...]),
  m̃^(1,2,3)_L = m̃^(3)_E = 1000 GeV,  m̃^(1,2)_E = 100 GeV,
  m̃^(1,2)_Q = 400 GeV,  m̃^(1,2)_U,D = 300 GeV,  m̃^(3)_Q = m̃^(3)_U,D = 1000 GeV,   (B.10)

yielding

  { m_Y = m_χ̃_2^0,  m_X = m_µ̃R,  m_N = m_χ̃_1^0 } = { [...], [...], [...] } GeV.   (B.11)

Again, we assume 2000 available experimental points after cuts, close to the 1900 events after cuts obtained for Point I.

B.4 Point IV: SPS1a
For the SPS1a point, we use the GUT-scale mSUGRA inputs

  m_1/2 = 250 GeV,  m_0 = 100 GeV,  A_0 = −100 GeV,  tan β = 10,  µ > 0.   (B.12)

From ISAJET 7.75, the spectrum is calculated as

  m_g̃ ∼ 608 GeV,  m_d̃L,s̃L ∼ 571 GeV,  m_ũL,c̃L ∼ 565 GeV,
  m_χ̃_2^0 ∼ [...],  m_µ̃R ∼ [...],  m_χ̃_1^0 ∼ [...],  m_τ̃_1 ∼ [...] (all in GeV).   (B.13)

χ̃_2^0 χ̃_2^0 production is about 1 pb. The relevant branching ratios are:

  BR(χ̃_2^0 → µ̃_R^± µ^∓) ∼ [...],  BR(µ̃_R^± → µ^± χ̃_1^0) = 1.   (B.14)

For L = 300 fb^-1, this gives 1200 events before any cuts. After cuts, we are left with about 425 events. Errors for the masses given in the text are based upon the latter.

References

[1] T. Appelquist, H. C. Cheng and B. A. Dobrescu, "Bounds on universal extra dimensions," Phys. Rev. D, 035002 (2001) [arXiv:hep-ph/0012100].
[2] H. C. Cheng, D. E. Kaplan, M. Schmaltz and W. Skiba, "Deconstructing gaugino mediation," Phys. Lett. B, 395 (2001) [arXiv:hep-ph/0106098]; "Bosonic supersymmetry? Getting fooled at the LHC," Phys. Rev. D, 056006 (2002) [arXiv:hep-ph/0205314].
[3] H. C. Cheng and I. Low, "TeV symmetry and the little hierarchy problem," JHEP, 051 (2003) [arXiv:hep-ph/0308199]; "Little hierarchy, little Higgses, and a little symmetry," JHEP, 061 (2004) [arXiv:hep-ph/0405243].
[4] K. Agashe and G. Servant, "Warped unification, proton stability and dark matter," Phys. Rev. Lett., 231805 (2004) [arXiv:hep-ph/0403143].
[5] E. A. Baltz, M. Battaglia, M. E. Peskin and T. Wizansky, Phys. Rev. D, 103521 (2006) [arXiv:hep-ph/0602187].
[6] I. Hinchliffe, F. E. Paige, M. D. Shapiro, J. Soderqvist and W. Yao, "Precision SUSY measurements at LHC," Phys. Rev. D, 5520 (1997) [arXiv:hep-ph/9610544].
[7] C. G. Lester and D. J. Summers, "Measuring masses of semi-invisibly decaying particles pair produced at hadron colliders," Phys. Lett. B, 99 (1999) [arXiv:hep-ph/9906349]; A. Barr, C. Lester and P. Stephens, "m(T2): The truth behind the glamour," J. Phys. G, 2343 (2003) [arXiv:hep-ph/0304226].
[8] H. Bachacou, I. Hinchliffe and F. E. Paige, "Measurements of masses in SUGRA models at LHC," Phys. Rev. D, 015009 (2000) [arXiv:hep-ph/9907518].
[9] B. C. Allanach, C. G. Lester, M. A. Parker and B. R. Webber, "Measuring sparticle masses in non-universal string inspired models at the LHC," JHEP, 004 (2000) [arXiv:hep-ph/0007009]. See also C. G. Lester, http://cdsweb.cern.ch/search.py?sysno=002420651CER.
[10] B. K. Gjelsten, D. J. Miller and P. Osland, "Measurement of SUSY masses via cascade decays for SPS1a," JHEP, 003 (2004) [arXiv:hep-ph/0410303].
[11] G. Weiglein et al. [LHC/LC Study Group], Phys. Rept., 47 (2006) [arXiv:hep-ph/0410364], Sections 5.1 and 5.2.
[12] K. Kawagoe, M. M. Nojiri and G. Polesello, "A new SUSY mass reconstruction method at the CERN LHC," Phys. Rev. D, 035008 (2005) [arXiv:hep-ph/0410160].
[13] B. C. Allanach et al. [Beyond the Standard Model Working Group], arXiv:hep-ph/0402295; see the contribution by C. G. Lester, Section X.
[14] C. G. Lester, M. A. Parker and M. J. White, "Determining SUSY model parameters and masses at the LHC using cross-sections, kinematic edges and other observables," JHEP, 080 (2006) [arXiv:hep-ph/0508143].
[15] N. Arkani-Hamed, G. L. Kane, J. Thaler and L. T. Wang, "Supersymmetry and the LHC inverse problem," JHEP, 070 (2006) [arXiv:hep-ph/0512190].
[16] J. M. Butterworth, J. R. Ellis and A. R. Raklev, "Reconstructing sparticle mass spectra using hadronic decays," arXiv:hep-ph/0702150.
[17] B. C. Allanach et al., "The Snowmass points and slopes: Benchmarks for SUSY searches," in Proc. of the APS/DPF/DPB Summer Study on the Future of Particle Physics (Snowmass 2001), ed. N. Graf, Snowmass, Colorado, 30 Jun - 21 Jul 2001, pp. P125 [arXiv:hep-ph/0202233].
[18] D. J. Miller, P. Osland and A. R. Raklev, "Invariant mass distributions in cascade decays," JHEP, 034 (2006) [arXiv:hep-ph/0510356].
[19] We particularly thank Dirk Zerwas for several detailed discussions and emails.
[20] T. Gleisberg, S. Hoche, F. Krauss, A. Schalicke, S. Schumann and J. C. Winter, JHEP, 056 (2004) [arXiv:hep-ph/0311263].
[21] T. Sjostrand, S. Mrenna and P. Skands, JHEP, 026 (2006) [arXiv:hep-ph/0603175].
[22] W. Porod, Comput. Phys. Commun., 275 (2003) [arXiv:hep-ph/0301101].
[23] E. Richter-Was, D. Froidevaux and L. Poggioli, "ATLFAST 2.0: a fast simulation package for ATLAS," Tech. Rep. ATL-PHYS-98-131 (1998); see also H. T. Phillips, P. Clarke, E. Richter-Was, P. Sherwood, R. Steward [http://root.cern.ch/root/Atlfast.html].