Moriond 2009, QCD and High Energy Interactions: Theory Summary
GAVIN P. SALAM
LPTHE, UPMC Univ. Paris 6, CNRS UMR 7589, Paris, France
These proceedings provide a brief summary of the theoretical topics that were covered at Moriond QCD 2009, including non-perturbative QCD, perturbative QCD at colliders, a small component of physics beyond the standard model, and heavy-ion collisions.
Of the O(100) talks that were given at this year's "Moriond QCD", about one third were theoretical.[a] As usual with the Moriond conference, the range of topics covered was rather broad, and the logic that I will follow in discussing them will be to progress in the total energy that is involved — that will take us from non-perturbative QCD, through perturbative QCD and the data-theory interface at high-energy colliders, to topics beyond the standard model, and finally to heavy-ion collisions.
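The "roughly 95% confidence" aside in the footnote attached to that talk count is a quick one-sided binomial computation. A minimal sketch, with the counts assumed from the text (about 33 theory talks, two thirds of them in the afternoon):

```python
from math import comb

# Assumed counts from the text: O(100) talks, one third theory -> ~33,
# of which two thirds (22) were scheduled in the afternoon.
n_theory = 33
n_afternoon = 22

# One-sided p-value under the null hypothesis that each theory talk is
# equally likely to land in a morning or afternoon session.
p_value = sum(comb(n_theory, k)
              for k in range(n_afternoon, n_theory + 1)) / 2**n_theory
print(f"p = {p_value:.3f}")  # → p = 0.040, i.e. exclusion at ~96% confidence
```

So the footnote's claim holds up: the equal-assignment hypothesis is excluded at slightly better than 95% confidence (one-sided).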
There are many reasons for investigating non-perturbative QCD. One good one is that it's responsible for most of the nucleon mass and correspondingly for most of the visible mass in the universe. A more pragmatic reason is that flavour physics is usually done with hadrons, and our understanding of their non-perturbative dynamics is one of the limiting factors in the extraction of CKM matrix entries and new-physics constraints.

A powerful tool for handling non-perturbative QCD is to simulate it on the lattice. A recurrent issue for lattice QCD is the reliable handling of systematic errors, for example the dependence on the lattice spacing, the matching of lattice gauge theory to continuum QCD, finite-volume effects, and the treatment of light quarks; the discussion of these issues was a common theme of the lattice talks at Moriond '09.

Three light-quark treatments were discussed. Staggered fermions are the easiest to treat from a computational point of view, but this comes at a price: while the predictions agree with experimental results, it is not clear whether the staggered-fermion formulation is theoretically equivalent to QCD. Wilson fermions and domain-wall fermions are both fine from this point of view, but they are also more expensive computationally, especially the domain-wall fermions, which are those with the cleanest chiral (m_q → 0) limit.

[a] Two thirds of those were in the afternoon, which means that at roughly 95% confidence level we can rule out the hypothesis that the organisers were equally likely to assign theory talks to morning and afternoon sessions. In contrast with the Tevatron's 95% exclusion limit on the standard-model Higgs boson with 160 < m_H < 170 GeV/c²,1 it is, however, unclear quite what we learn from this!

Figure 1: Left: dependence of the Ω and nucleon masses on the pion mass and lattice spacing from the BMW collaboration;2 middle: final results for the hadron mass spectrum (vector mesons, octet and decuplet baryons) from the BMW collaboration;2 right: results for the spectrum from the PACS-CS collaboration based on a linear extrapolation from the region 156 MeV to 410 MeV.3

Figure 1 illustrates results from two talks about the hadron mass spectrum. The left-hand plot, presented by Fodor2 for the "BMW" collaboration,4 shows how the lattice calculation of the Ω and nucleon (N) masses depends on the squared pion mass (horizontal axis), i.e. the approach to the correct u- and d-quark masses, and on the lattice spacing a (differently coloured points), together with a fit that provides an extrapolation to the physical light-quark masses.4 The corresponding results for the hadron spectrum are shown in the middle plot (lines and bands are experimental masses and widths, the points are the lattice result), with remarkable agreement for all the hadrons. This was presented as the first lattice calculation of the baryon masses to have full control of uncertainties. Related results were presented by Kuramashi3 for the PACS-CS collaboration (right-hand figure).5 He, however, argued that for a fully controlled calculation one should carry out the simulation directly with the physical light-quark masses (currently in progress). This is to avoid the extrapolation that is required in the BMW results and whose validity was the subject of debate during the conference.
Though there does not yet seem to be a universal consensus within the lattice community as to whether the hadron spectrum is now reliably calculated, it is to be expected that clarification of the remaining issues will be forthcoming in the near future. Given the 35 years' work on the subject, that is a major accomplishment, both in fundamental terms, and because it helps provide confidence when using lattice results for observables for which we don't already know the answer.

Figure 2: B → πℓν form factor as a function of q², the squared invariant mass of the ℓν system, comparing experimental measurements6 and lattice calculations.7

The use of lattice calculation to obtain information that we don't know was illustrated in two other talks. Izubuchi,8 representing the RBC/UKQCD collaboration, showed numerous results, including hadronic matrix element computations and determinations of the up- and down-quark mass difference, and emphasised the value of the continuum chiral behaviour that is characteristic of the domain-wall fermions that were used.

Van de Water,9 for the Fermilab Lattice and MILC collaborations (staggered fermions), discussed results for B mesons and their relation with the determination of CKM matrix elements. Fig. 2 shows how data6 for the B → πℓν form factor from BABAR and from the lattice calculation7 have the same shape in the region of overlapping q² values, helping to provide confidence in the lattice calculation and the extraction of V_ub. The resulting value for V_ub, while it has errors (11%) that are slightly larger than inclusive determinations (7–8%),10 is in better agreement with unitarity-triangle analyses.11

Figure 3: Ratio of measured Z+jet cross sections17,18 (D0 Run II, L = 1.04 fb⁻¹) to the NLO prediction19 including its scale uncertainty, left, and compared to LO matrix-element plus parton-shower calculations20 (ALPGEN+PYTHIA, SHERPA), right (adapted from ref. 18).

With other observables it may currently be harder for lattice QCD to provide definitive predictions. One context where this was discussed, by Penin,12 concerned the mass of the recently discovered13 η_b, specifically the hyperfine mass splitting, measured to be E_hfs ≡ M(Υ(1S)) − M(η_b) ≈ 71 MeV. In the charmonium system the experimental value is well reproduced by a perturbative calculation,14 but this is not the case for the η_b, where the prediction was E_hfs ≈ 39 MeV, with an uncertainty of order 10 MeV dominated by δα_s. The lattice prediction15 is closer to the experimental result; however, Penin argued that the lattice's coarse spacing relative to the inverse b-quark mass implies substantial additional corrections (∼ −20 MeV), which would bring it into accord with the perturbative result. This leaves an interesting puzzle, perhaps to be resolved at a future Moriond!

Another context where the question of the lattice's predictive ability naturally arose was the talk by Swanson16 about exotic hadronic states. An example that was particularly interesting (though it is unclear if it truly exists) was the Z±(4430), which decays to π±ψ′ and so would call for either a tetraquark or a molecular interpretation.
As progress in lattice calculations continues, one can only look forward to the day when they will be able to shed light on the existence and structures of the numerous X, Y, Z resonances that are currently being seen by the experiments.

Perturbative QCD (pQCD) inevitably "happens" at HERA, the Tevatron and LHC. Backgrounds to possible new physics all involve a QCD component, and more often than not, possible signals either involve QCD directly (e.g. because a new particle decays to quarks) or are affected, e.g. by pQCD initial-state radiation.

A number of the pQCD results presented here related to next-to-leading order (NLO) calculations. The importance of NLO predictions was nicely illustrated by Nilsen (for the DØ collaboration17,18) in his comparisons of data for the Z+jet cross-section to LO (matrix-element + parton-shower20) and NLO predictions,19 fig. 3. It is clear that it is only at NLO that one has a reliable prediction. The usefulness of NLO predictions has led to the establishment of the so-called Les Houches wish-list of important processes to calculate at NLO, and this guides much of the current work on the subject.21

The NLO results reported here can be split into two categories: those that push traditional Feynman-diagram based methods to their limit (Jäger, Weinzierl), and those based on "unitarity" (Melnikov, Maître) for the one-loop part of the computation.

Jäger22 discussed pp → VVjj via vector-boson fusion (VBF), which is an important background to Higgs production via VBF, and of interest also for studying WW scattering. She showed that the NLO corrections23 are modest and lead to small scale-dependence in the final predictions, and illustrated how this might facilitate the identification of new-physics signals in the gauge sector. Weinzierl24 discussed p p̄ → t t̄ j production,25 one of the last uncalculated 2 → 3 processes involving the scale m_t.
One of the interests of t t̄ j production is that its LO contribution is the first order of t t̄ production that shows an asymmetry between the t and t̄ directions (jets are preferentially emitted when the t goes in the direction opposite to the incoming proton). Curiously this asymmetry is largely washed out by higher-order corrections, an effect that calls for a physical explanation.

The bottleneck in NLO calculations for a 2 → n process is the (2+n)-leg loop calculation, whose complexity scales factorially with the number of legs in Feynman-diagrammatic methods. Much recent work has been devoted to the use of "unitarity", first introduced for QCD loop calculations over 15 years ago,21 which, essentially, involves sewing together tree-level amplitudes with specific kinematics in order to obtain the coefficients of the loop integrals. Both Maître26 (for the Blackhat collaboration) and Melnikov27 (for the Rocket collaboration) reviewed the amazing progress that has taken place in recent years (see also ref. 21), significant innovations including, among many others, the use of complex momenta,28 recursive building up of the number of legs,29 the determination of the full analytic structure of loop integrands based just on their numerical evaluation at a finite set of kinematic points,30 and the extraction of results in 4 − 2ǫ dimensions from computations in integer D > 4.31 Results now exist for N-gluon scattering amplitudes,32 as well as for q q̄ + N gluons, W q q̄ + Ng, W q q̄ q q̄ + Ng, t t̄ + Ng and t t̄ q q̄ + Ng. In terms of phenomenological applications, it seems that 2 → 4 processes are now becoming feasible, in some cases in the large-N_c limit, and significant work is now being devoted to the combination of the one-loop results with the corresponding 2 → n and 2 → n+1 tree-level results (Blackhat uses Sherpa,33 Rocket uses MCFM19).
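The "coefficients of the loop integrals" mentioned above can be made explicit: any one-loop amplitude decomposes onto a basis of known scalar box, triangle and bubble master integrals plus a rational remainder, and unitarity cuts determine the coefficients from products of tree amplitudes. Schematically (a standard textbook form, not specific to Blackhat or Rocket):

```latex
% One-loop amplitude in the scalar-integral basis; unitarity methods
% determine d_i, c_i, b_i (and the rational part R) from tree amplitudes.
A_n^{\text{1-loop}}
  = \sum_i d_i\, I_4^{(i)}
  + \sum_i c_i\, I_3^{(i)}
  + \sum_i b_i\, I_2^{(i)}
  + R + \mathcal{O}(\epsilon)
```

Because each cut isolates a different subset of these coefficients, the factorial growth of the Feynman-diagram approach is traded for sums over tree amplitudes, which can be computed recursively.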
Both groups showed first results for p p̄ → W + 3 jets, one of the major entries on the NLO wish-list: Blackhat in the leading-N_c limit (which should be good to a few percent), and Rocket with just the W q q̄ ggg subprocess and without fermion loops (good to ∼20%).[b]

[b] A vexing issue here is that the data have been obtained with the JetClu jet algorithm, which is severely IR unsafe and causes even the LO perturbative prediction to be ill-defined.35 To obtain finite NLO predictions, the Blackhat group instead used the SISCone jet algorithm.36 It would probably be worth supplementing this with a calculation that uses an alternative such as anti-k_t,37 insofar as JetClu is probably intermediate in its behaviour between anti-k_t and SISCone, once one accounts for the all-orders perturbative and non-perturbative impact of the IR unsafety of JetClu.

These developments represent a major step forward and the start of a new era in practical NLO calculations for the LHC, and one can almost certainly expect significant progress on the remaining technical issues in the coming year or two.

Plain NLO calculations are not the only means available to us for obtaining predictions at colliders, and a number of varyingly related methods were presented at this year's Moriond. It can be useful to combine NLO predictions with a parton-shower Monte Carlo simulation. White40 discussed this in the context of MC@NLO41,42 for pp → Wt production. An issue that
arises at NLO is the appearance of the pp → W t b̄ process, which interferes with non-resonant t t̄ production. This is a non-trivial problem, and to have a solution that allows pp → Wt to be incorporated in MC@NLO is a very useful development.

Figure 4: Left: results38 from Blackhat for the W + 3 jet cross section as a function of the E_T of the softest of the 3 jets, compared to CDF data;34 right: predictions for the H_T variable at the LHC from the Rocket program (top) and the scale dependence of the LHC cross section for W + 3 jets with p_t > 50 GeV (below).39

Part of the interest of parton showers is that they resum logarithmically enhanced terms to all orders. The best resummation precision is, however, to be obtained with analytic calculations, which were discussed by Ferrera43 for the p_t distribution of a Z/γ∗ system. A context for this is that the p_t distribution for the Higgs boson (which is calculated in a similar way) is an important ingredient in Higgs searches, and it is valuable to be able to validate the calculational framework for predicting it, which is very similar in the Higgs and Z cases.

Resummations may also be relevant in predicting the structure of multi-jet events. Normally multi-jet predictions are based on tree-level calculations, but it was pointed out by Andersen44 that in the case of Higgs plus multi-jet production, it is technically difficult to obtain exact predictions.
He thus discussed an interesting approach45 based on the Fadin-Kuraev-Lipatov high-energy approximation,46 which compares well to exact tree-level calculations in the cases where they are known. This is an interesting complement to normal fixed-order methods, in part also because it provides a natural way of including virtual corrections. The relevance of the high-energy approximation was also emphasised by Hautmann,47 because of the expected relevance at LHC of configurations in which multiple emissions may have commensurate transverse momenta (by default not included in parton-shower Monte Carlos).

Rather than trying to calculate all orders in some logarithmic approximation, one can also try to obtain just one order further than NLO, i.e. NNLO. Work towards an efficient program for fully exclusive NNLO predictions of pp → Z was presented by Ferrera.43 Theoretical developments were discussed by Heslop48 on the calculation of two-loop diagrams (one of the ingredients of NNLO predictions) for a theory related to QCD, N = 4 supersymmetric (SUSY) Yang-Mills (YM) theory, specifically for maximally-helicity-violating (MHV) amplitudes. That large number of acronyms is indicative of how distant this is from a general full QCD calculation. Yet the progress made is impressive.
In particular, Heslop discussed a conjecture that relates gluon loop amplitudes to Wilson loops, and showed that if it holds, then one can calculate all planar two-loop MHV n-gluon scattering amplitudes in N = 4 SUSY-YM for any number of gluon legs n.49 This, for two-loop diagrams, is analogous to the type of progress that was being made 15 years ago for one-loop diagrams50 and that recently has been playing a big role in NLO calculations, as described in section 3.1.

Figure 5: Left: the Δχ² (with respect to the minimum) for the Gfitter52 fit53 to the latest electroweak precision data, including the LEP and Tevatron 95% CL direct-search exclusions and the theory uncertainty. Right: fits for the up-quark distribution from the MRST/MSTW and NNPDF groups with full data, and "benchmark" versions with a reduced dataset.54

In discussing perturbative predictions for high-energy colliders, it is important to remember that non-perturbative effects can often be as large as higher orders of perturbation theory. This is especially true when it comes to the underlying event and pileup at the LHC, and the simulation of these effects was discussed by Pierog,51 in the context of the EPOS Monte Carlo program for minimum-bias physics, including the question of how one can incorporate constraints from cosmic-ray air showers in the modelling of minimum-bias collisions.

Work at the interface between data and theory is crucial if we are to make the best possible use of both. The topics that fell under this heading were rather varied.

Stelzer53 discussed the Gfitter project52 for electroweak fits of the standard model (and beyond). It can be seen as an alternative to a tool like Zfitter,55 and has also been validated against it.
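The asymmetric uncertainties quoted by such fits are conventionally read off where the Δχ²(m_H) profile crosses 1. A minimal sketch of that procedure, using a made-up (purely illustrative) χ² profile rather than Gfitter's real one:

```python
import numpy as np

# Toy chi^2 profile in m_H; the log makes it asymmetric about the minimum
# (invented for illustration, not the real Gfitter profile).
m = np.linspace(50, 300, 10001)
chi2 = (np.log(m / 90.0) / 0.3) ** 2

dchi2 = chi2 - chi2.min()
best = m[np.argmin(dchi2)]
inside = m[dchi2 <= 1.0]          # the Delta chi^2 <= 1 interval
lo, hi = inside.min(), inside.max()
print(f"m_H = {best:.0f} +{hi - best:.0f} -{best - lo:.0f} GeV")
# → m_H = 90 +31 -23 GeV
```

The asymmetry of the quoted errors is simply the asymmetry of the profile around its minimum; no Gaussian assumption is needed.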
Stelzer quoted a central value for the Higgs mass of 83 GeV, with an upper error of +30 GeV, to be compared with that from the Tevatron's electroweak fit of m_H = 90 GeV (small details of the fit are responsible for the difference in results). Including the latest results from the direct Higgs searches gives m_H ≈ 116 GeV, with the Δχ² as a function of m_H shown in fig. 5 (left).

Still on the subject of using data to constrain theory, Williams56 discussed a program called HiggsBounds,57 which incorporates the results of all experimental Higgs-boson searches into a single package. The list of searches that are included in the program (too long to reproduce here) makes for an impressive and valuable collation of information. It can be useful for testing new models (and there is a convenient web interface), or even new standard-model cross sections, and Williams illustrated how it had been used to show that a previous 95% exclusion limit on the Higgs boson from the Tevatron disappeared once one used updated PDFs.

The question of PDFs is one that arises in many places, not surprisingly given how crucial an input they are for Tevatron and LHC studies. A major issue in standard PDF fits is the determination of the uncertainties. The two main groups, CTEQ and MSTW, both estimate them using a Δχ² of order 50. However reasonable the final results, one can't help feeling a little uncomfortable with this choice. A second issue is that standard fits use somewhat restricted parametrisations, which may bias the final results. An approach that attempts to work around these issues was presented by Del Debbio,58 for the NNPDF collaboration.59 One innovation is that they carry out individual fits to a large number of Monte Carlo replica experiments so as to obtain an ensemble of PDFs (i.e. a direct measure of uncertainties, without needing to choose a Δχ² value). Additionally, they use neural networks to provide bias-free parametrisations of the PDFs.

Fig. 5 (right) shows results of fits for the up-quark distribution compared to MRST results. There are two fits each, one using a full data set and the other a reduced "benchmark" data set.54 Ideally, the original fit should be within the error band for the benchmark fit, and the latter should have significantly larger errors in the region lacking data. This is the case for NNPDF, but less so for MRST, perhaps a consequence of the in-built parametrisation, which provides a constrained extrapolation into the region with limited data. At the moment the NNPDF fit lacks heavy-quark effects and p p̄ data, which limits its usefulness; however, work is in progress to resolve these issues. Once this is done, it seems likely that the NNPDF approach will become a serious competitor to the CTEQ and MSTW groups.

Figure 6: Distribution of the mass of tagged, high-p_t b b̄ jets with appropriate substructure, in events that also pass a leptonic and missing-E_T (W/Z) cut, for a Higgs boson mass of 115 GeV.60 The panels quote S/√B in the 112–128 GeV window: 2.1 (a), 3.1 (b), 2.9 (c) and 4.5 (d).

As well as using data to learn more about theory, one can also take the reverse approach and ask how theoretical insight can be exploited to better use data. This was illustrated in the talk by Rubin61 about a proposal for a new LHC search strategy for a light Higgs boson that decays to b b̄.60 In an ATLAS study62 of the pp → HW, H → b b̄ and W → ℓν channel, it was found that the signal-to-background ratio was very low, as was the significance, with the signal only a tiny perturbation close to a peak in the background distribution.
Rubin concentrated on the subset of WH events where the W and H both have high transverse momenta. Though only a small fraction of WH events are in this configuration, it turns out the fraction for the background events is smaller still. One challenge is then that the H → b b̄ decay is quite collimated and the b b̄ pair may end up in the same jet. However, using a QCD-motivated dedicated subjet ID strategy, this issue can be resolved and, based on a Monte Carlo study, one expects that with 30 fb⁻¹ one could obtain a significance of around 4σ for discovery of a 115 GeV Higgs boson at a 14 TeV LHC, cf. fig. 6.

The issue of very small signal-to-background ratios is one that is common to many of the experimental analyses discussed at this year's Moriond QCD. In nearly all cases the analyses used a neural network (NN), or some other multivariate technique, to obtain a measure of how much a given event is "signal-like" versus "background-like", and then showed the distribution of this measure as their main result.

In informal discussions during the conference, many people expressed discomfort at this trend (myself included). It is natural of course to seek to use the best tools at hand in order to maximise one's chances of seeing a signal. However, ultimately, the aim is not merely to get the largest possible value for some number such as S/√B, but, just as importantly, to convince the reader/audience that one has actually seen (or excluded) a signal. From this point of view, neural networks may actually be a hindrance, because they fail to communicate what it is about a certain set of events that leads one to believe that they correspond to a signal. One can perhaps mitigate this drawback, to some extent, by showing the correlation between the neural-network output and various physical distributions for background and signal.
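As a toy illustration of the S/√B measure discussed here (all event counts invented, not from any real analysis):

```python
from math import sqrt

def significance(s, b):
    """Naive S/sqrt(B) estimate of signal significance."""
    return s / sqrt(b)

# Invented numbers: an NN that keeps more signal at the same background
# level than a simple cut-based selection.
cut_based = significance(s=50, b=400)
nn_based = significance(s=60, b=400)

print(f"cut-based: {cut_based:.2f} sigma, NN: {nn_based:.2f} sigma "
      f"({100 * (nn_based / cut_based - 1):.0f}% gain)")
# → cut-based: 2.50 sigma, NN: 3.00 sigma (20% gain)
```

A gain of this modest size is exactly the regime where a cut-based cross-check remains feasible, which is the point of the rule of thumb that follows.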
However, a suggestion for a more general rule of thumb might be the following: if a NN improves the signal significance by (say) 20% compared to a cut-based analysis, then one should also show the latter, because it is likely to be just as convincing (if not more so). If, instead, the NN improves the significance by a factor of two, then this suggests that there is some underlying physical characteristic of the signal that could be used also to improve a traditional analysis, and one should figure that out.[c]

[c] Another way of saying this is that one ought not to excessively favour silicon-based neural networks over their carbon-based cousins.

Beyond the Standard Model

Two kinds of "new phenomena" were discussed at this year's Moriond: those that relate to theories that we know well (QCD), but that may have yet-to-be-discovered exotic behaviour, for example in heavy-ion collisions, as discussed below; and those that relate to extensions of the standard model.

One of the issues with the most popular extension of the standard model, supersymmetry (SUSY), is that of how this extra symmetry between fermions and bosons gets broken. Various schemes exist in the literature, such as gravity-mediated SUSY breaking or gauge-mediated SUSY breaking. Lalak63 pointed out that, contrary to standard assumptions, it is possible that real-world SUSY breaking could be a mixture of these.

SUSY is far from being the only viable extension of the standard model. Kanemura64 discussed a specific model65 in which the dynamics of an extended Higgs sector and TeV-scale right-handed neutrinos provide a framework for neutrino oscillation, dark matter, and the baryon asymmetry of the Universe.
In particular, tiny physical neutrino masses are generated at the three-loop level, a singlet scalar field is a candidate for dark matter, and a strong first-order phase transition is realised for successful electroweak baryogenesis.

One of the most economical ideas for explaining the electroweak scale involves the idea that the Higgs is composite (a bit like the pion), with its mass generated by the non-perturbative dynamics of a new QCD-like theory, technicolour, whose coupling grows strong near 1 TeV rather than near 1 GeV. Technicolour is often considered to be difficult to reconcile with precision electroweak measurements, but as was discussed by Brower,66 this is based on calculations that assume that technicolour is similar to QCD. If one instead supposes that technicolour is only marginally similar to QCD (e.g. it has many more active flavours, with a small β-function coefficient), then it might be a rather different theory. Would this too then be excluded? The only way of being sure would be through lattice calculations. In this respect, the fact (cf. section 2) that we are finally reaching an era of full control over the systematics of lattice calculations means that we might also be able to use them to reliably address questions like this about technicolour.

The key question in the study of heavy-ion collisions (HIC) is whether we can understand the "medium" that is produced in such a collision. Ways of addressing this question include direct modelling/calculation of the medium, and the use of probes that traverse it to measure its characteristics.

Direct calculation can be performed with lattice calculations at finite temperature (albeit only for equilibrium, static media, i.e. an idealisation of what is to be found in true HICs). Schmidt67 described a lattice calculation68 of the equation of state for the medium.
One interesting result was for the Taylor expansion of the pressure as a function of the ratios of the quark chemical potentials µ_u, µ_d, µ_s to the temperature T,

    p/T⁴ = 1/(V T³) · ln Z(V, T, µ_u, µ_d, µ_s) = Σ_{i,j,k} c^{u,d,s}_{i,j,k} (µ_u/T)^i (µ_d/T)^j (µ_s/T)^k .    (1)

The fourth-order Taylor coefficient, c₄^u, is shown as a function of temperature in fig. 7, and one sees a clear peak near 200 MeV. This, together with the behaviour of the other expansion coefficients, hints at the existence of a critical point at that temperature — something that the experiments may look for explicitly in their data.

Figure 7: Left: the fourth-order coefficient of the Taylor expansion of the equation of state in lattice simulations of finite-temperature QCD,67,68 with structure at T ≃ 200 MeV that is suggestive of a critical point. Right: the angular distribution of particles produced from the showering of a 100 GeV gluon in the vacuum, and in a medium with transport coefficient q̂ = 5, 50 GeV²/fm, as simulated with Q-Pythia.69

Greiner70 discussed a microscopic approach to the quark-gluon plasma, the "Boltzmann Approach of MultiParton Scatterings" (BAMPS), a transport algorithm that solves the Boltzmann equations for on-shell partons with perturbative QCD interactions, essentially including 2 ↔ 2 and 2 ↔ 3 processes. It gives a good description of the measured elliptic flow v₂ (a non-trivial achievement),71 but significantly oversuppresses high-p_t particle production, giving R_AA ≃ 0.05 as opposed to the experimental value of R_AA ≃ 0.2. So, while a microscopic description can provide much insight, it seems that it remains a challenge to describe the entire body of data.

A complementary approach to the calculation of medium properties is to investigate the influence of the medium on probes that traverse it, so as to obtain a measure of its properties (albeit a somewhat indirect one). One classic probe is the J/ψ, on the grounds that in a hot medium it would "melt" and so its production would be suppressed.
Ferreiro72 pointed out that to make sense of AA data, it is important73 to understand not only the high-temperature effects, but also phenomena such as nuclear shadowing that occur even in cold nuclear matter. Another "early-time" probe was discussed by Kerbikov,74 specifically πΞ correlations, of interest because models predict an early decoupling of multi-strange hadrons like the Ξ.

A probe that has seen very extensive study in recent years is a hard parton that traverses the medium. The main indicator that has been discussed so far is the amount of energy lost during this traversal, which has been modelled in terms of medium-enhanced radiative energy loss, as well as collisional energy loss. Zakharov75 discussed an additional source of energy loss. He argued that certain models of the quark-gluon plasma, such as "anomalous viscosity",76 imply the presence of chromomagnetic fields that are sufficient to induce substantial synchrotron radiation from a gluon that goes through them.77 He estimated that synchrotron energy loss should be of similar magnitude to collisional energy loss, each of them being about 25% of the radiative energy loss.

The fact that many mechanisms may contribute to parton energy loss motivates more exclusive studies, which look not just at leading-particle spectra, but at the properties of particle and energy flow in the vicinity of a leading particle. To help interpret such studies, it is essential to have more exclusive modelling tools, such as "medium-aware" Monte Carlo generators. Salgado78 discussed a modification of Pythia,79 Q-Pythia,69 that incorporates an additional medium-induced gluon emission term in the parton shower. Fig. 7 (right) shows the angular distribution of particles emitted from a 100 GeV gluon, both in the vacuum and in a medium, and illustrates the significant differences that are to be seen, both in multiplicity and in typical angle.
This kind of tool promises to be very useful, both in testing the underlying modelling, and in designing experimental analyses to further probe the mechanism of jet quenching.

The final talk of the conference, by Warringa,80 discussed the effects of topological charge change in heavy-ion collisions.81 He argued for the following chain of events: 1) there are topological charge fluctuations in the hot medium (like instantons); 2) topological charge fluctuations induce fluctuations in chirality, e.g. more right-handed quarks and antiquarks (i.e. with the spin aligned along the direction of motion) than left-handed ones; 3) if there is a (QED) magnetic field in the medium, this will orient the spins of the u_L, d̄_L, u_R, d̄_R quarks parallel (and the others anti-parallel) to the direction of the magnetic field; 4) together with a fluctuation in chirality (say more R), this will lead to u_R d̄_R (positive charge) moving in the direction of the field, and ū_R d_R (negative charge) moving in the opposite direction; 5) in a non-central AA collision there is a (QED) magnetic field, perpendicular to the reaction plane, generated as the two charged nuclei go past each other, and therefore this orients net charge flow along a direction perpendicular to the reaction plane; 6) this can be seen in AA collisions by plotting the variable ⟨cos(φ_i^± + φ_j^{±,∓} − 2Ψ_RP)⟩ (where Ψ_RP is the angle of the reaction plane) as a function of centrality. Remarkably, a preliminary STAR measurement82 shows that this quantity becomes significantly different from zero (in the direction expected) for the least central collisions, precisely those that are expected to have the strongest magnetic field. Given how long the theoretical community has been discussing topological charge in SU(N) theories, this is a very interesting development.
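Step 6 can be made concrete with a toy Monte Carlo (all numbers invented): if positive particles are emitted preferentially at Ψ_RP + π/2 and negatives at Ψ_RP − π/2, as in the naive charge-separation picture, the correlator comes out negative for same-charge pairs and positive for opposite-charge pairs:

```python
import math
import random

random.seed(1)

def correlator(pairs, psi_rp):
    """<cos(phi_i + phi_j - 2 Psi_RP)> averaged over particle pairs."""
    return sum(math.cos(p1 + p2 - 2 * psi_rp) for p1, p2 in pairs) / len(pairs)

# Toy event: charge separation along the magnetic field, i.e. positives
# emitted near Psi_RP + pi/2 and negatives near Psi_RP - pi/2 (illustrative).
psi = 0.8    # reaction-plane angle of this toy event
smear = 0.5  # angular smearing of the emission
pos = [psi + math.pi / 2 + random.gauss(0, smear) for _ in range(2000)]
neg = [psi - math.pi / 2 + random.gauss(0, smear) for _ in range(2000)]

same = [(pos[i], pos[i + 1]) for i in range(0, 2000, 2)]
oppo = list(zip(pos, neg))
print(correlator(same, psi), correlator(oppo, psi))  # same-charge < 0, opposite > 0
```

In real data the sign pattern is diluted by medium effects and backgrounds, which is why the centrality dependence is the key handle.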
One can only look forward to further cross-checks, and one interesting one would for example be the experimental verification of appropriate scaling with nuclear charge Z for constant A.

This summary would not have been possible without the many patient explanations that numerous Moriond speakers were kind enough to provide me with, both about their own talks and about the more general questions of their sub-fields. That said, the inevitable errors, misunderstandings or misappreciations that have made their way into these proceedings are entirely mine.

I am also grateful to Luigi Del Debbio and Carlos Salgado for helpful comments on the manuscript, as well as to Al Mueller for a useful remark.

Finally, it would not be possible to close without also thanking the Moriond organisers — for the invitation to give this summary (and financial support), for their excellent organisation of the conference, and for the "Spirit of Moriond" that has made this series of meetings such a success over the several decades of its existence.

References

1. S. Pagan Griso, these proceedings, arXiv:0905.2090 [hep-ex].
2. Z. Fodor, these proceedings.
3. Y. Kuramashi, these proceedings, arXiv:0906.0126 [hep-lat].
4. S. Durr et al., Science (2008) 1224.
5. K. I. Ishikawa et al. [PACS-CS Collaboration], arXiv:0905.0962 [hep-lat].
6. B. Aubert et al. [BABAR Collaboration], Phys. Rev. Lett. (2007) 091801.
7. J. A. Bailey et al., arXiv:0811.3640 [hep-lat].
8. T. Izubuchi, these proceedings.
9. R. Van de Water, these proceedings.
10. F. Di Lodovico (HFAG), update presented at ICHEP 2008.
11. J. Charles et al. (CKMfitter), preliminary results for Summer 2008, http://ckmfitter.in2p3.fr/plots_Summer2008/ckmEval_results.html; L. Silvestrini (UTFit), preliminary result presented at Lattice 2008.
12. A. A. Penin, these proceedings, arXiv:0905.4296 [hep-ph].
13. B. Aubert et al. [BABAR Collaboration], Phys. Rev. Lett. (2008) 071801 [Erratum-ibid. (2009) 029901].
14. B. A. Kniehl, A. A. Penin, A. Pineda, V. A. Smirnov and M. Steinhauser, Phys. Rev. Lett. (2004) 242001.
15. A. Gray et al., Phys. Rev. D (2005) 094507.
16. E. Swanson, these proceedings.
17. H. Nilsen, these proceedings, arXiv:0906.0229 [hep-ex].
18. V. M. Abazov et al. [D0 Collaboration], arXiv:0903.1748 [hep-ex].
19. J. M. Campbell and R. K. Ellis, Phys. Rev. D (2002) 113007.
20. S. Hoche, F. Krauss, N. Lavesson, L. Lonnblad, M. Mangano, A. Schalicke and S. Schumann, arXiv:hep-ph/0602031.
21. Z. Bern et al. [NLO Multileg Working Group], arXiv:0803.0494 [hep-ph].
22. C. Englert, B. Jager, M. Worek and D. Zeppenfeld, these proceedings, arXiv:0904.2119 [hep-ph].
23. B. Jager, C. Oleari and D. Zeppenfeld, JHEP (2006) 015; G. Bozzi, B. Jager, C. Oleari and D. Zeppenfeld, Phys. Rev. D (2007) 073004.
24. S. Dittmaier, P. Uwer and S. Weinzierl, these proceedings, arXiv:0905.2299 [hep-ph].
25. S. Dittmaier, P. Uwer and S. Weinzierl, Phys. Rev. Lett. (2007) 262002.
26. C. F. Berger et al., these proceedings, arXiv:0905.2735 [hep-ph].
27. K. Melnikov, these proceedings.
28. R. Britto, F. Cachazo and B. Feng, Nucl. Phys. B (2005) 275.
29. C. F. Berger, Z. Bern, L. J. Dixon, D. Forde and D. A. Kosower, Phys. Rev. D (2006) 036009.
30. G. Ossola, C. G. Papadopoulos and R. Pittau, Nucl. Phys. B (2007) 147.
31. W. T. Giele, Z. Kunszt and K. Melnikov, JHEP (2008) 049.
32. W. T. Giele and G. Zanderighi, JHEP, 038 (2008).
33. T. Gleisberg, S. Hoche, F. Krauss, M. Schonherr, S. Schumann, F. Siegert and J. Winter, JHEP, 007 (2009).
34. T. Aaltonen et al. [CDF Collaboration], Phys. Rev. D, 011108 (2008).
35. G. P. Salam, Acta Phys. Polon. Supp., 455 (2008).
36. G. P. Salam and G. Soyez, JHEP, 086 (2007).
37. M. Cacciari, G. P. Salam and G. Soyez, JHEP, 063 (2008).
38. C. F. Berger et al., arXiv:0902.2760 [hep-ph].
39. R. K. Ellis, K. Melnikov and G. Zanderighi, JHEP (2009) 077.
40. C. D. White, these proceedings, arXiv:0905.2066 [hep-ph].
41. S. Frixione and B. R. Webber, JHEP, 029 (2002).
42. S. Frixione, E. Laenen, P. Motylinski, B. R. Webber and C. D. White, JHEP (2008) 029.
43. G. Ferrera, these proceedings.
44. J. R. Andersen, these proceedings.
45. J. R. Andersen, V. Del Duca and C. D. White, JHEP (2009) 015.
46. E. A. Kuraev, L. N. Lipatov and V. S. Fadin, Sov. Phys. JETP (1976) 443 [Zh. Eksp. Teor. Fiz. (1976) 840].
47. F. Hautmann, these proceedings.
48. P. Heslop, these proceedings.
49. C. Anastasiou, A. Brandhuber, P. Heslop, V. V. Khoze, B. Spence and G. Travaglini, arXiv:0902.2245 [hep-th].
50. Z. Bern, L. J. Dixon, D. C. Dunbar and D. A. Kosower, Nucl. Phys. B (1994) 217; Nucl. Phys. B (1995) 59.
51. T. Pierog, these proceedings, arXiv:0906.1459 [hep-ph].
52. H. Flacher, M. Goebel, J. Haller, A. Hocker, K. Moenig and J. Stelzer, Eur. Phys. J. C (2009) 543.
53. J. Stelzer, these proceedings.
54. M. Dittmar et al., arXiv:0901.2504 [hep-ph].
55. A. B. Arbuzov et al., Comput. Phys. Commun. (2006) 728.
56. P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein and K. E. Williams, these proceedings, arXiv:0905.2190 [hep-ph].
57. P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein and K. E. Williams, arXiv:0811.4169 [hep-ph].
58. L. Del Debbio, these proceedings.
59. R. D. Ball et al. [NNPDF Collaboration], Nucl. Phys. B (2009) 1 [Erratum-ibid. B (2009) 293].
60. J. M. Butterworth, A. R. Davison, M. Rubin and G. P. Salam, Phys. Rev. Lett. (2008) 242001.
61. M. Rubin, these proceedings, arXiv:0905.2124 [hep-ph].
62. ATLAS, Detector physics performance technical design report, CERN/LHCC/99-14/15 (1999).
63. Z. Lalak, these proceedings.
64. M. Aoki, S. Kanemura and O. Seto, these proceedings, arXiv:0905.3958 [hep-ph].
65. M. Aoki, S. Kanemura and O. Seto, Phys. Rev. Lett. (2009) 051805; arXiv:0904.3829 [hep-ph].
66. R. Brower, these proceedings.
67. C. Schmidt, these proceedings.
68. M. Cheng et al., Phys. Rev. D (2009) 074505.
69. N. Armesto, L. Cunqueiro and C. A. Salgado, arXiv:0809.4433 [hep-ph].
70. C. Greiner, these proceedings.
71. Z. Xu and C. Greiner, Phys. Rev. C (2009) 014904.
72. E. Ferreiro, these proceedings.
73. L. V. Bravina, K. Tywoniuk, A. Capella, E. G. Ferreiro, A. B. Kaidalov and E. E. Zabrodin, arXiv:0902.4664 [hep-ph].
74. B. Kerbikov, these proceedings.
75. B. G. Zakharov, these proceedings.
76. M. Asakawa, S. A. Bass and B. Muller, Phys. Rev. Lett. (2006) 252301.
77. B. G. Zakharov, JETP Lett. (2008) 475.
78. N. Armesto, L. Cunqueiro and C. A. Salgado, these proceedings, arXiv:0906.0754 [hep-ph].
79. T. Sjostrand, L. Lonnblad, S. Mrenna and P. Skands, arXiv:hep-ph/0308153.
80. H. Warringa, these proceedings.
81. D. E. Kharzeev, L. D. McLerran and H. J. Warringa, Nucl. Phys. A803