Juan Rojo
University of Milan
Publication
Featured research published by Juan Rojo.
Nuclear Physics | 2011
Richard D. Ball; Valerio Bertone; Francesco Cerutti; Luigi Del Debbio; Stefano Forte; Alberto Guffanti; Jose I. Latorre; Juan Rojo; Maria Ubiali
We present a determination of the parton distributions of the nucleon from a global set of hard scattering data using the NNPDF methodology including heavy quark mass effects: NNPDF2.1. In comparison to the previous NNPDF2.0 parton determination, the dataset is enlarged to include deep-inelastic charm structure function data. We implement the FONLL-A general-mass scheme in the FastKernel framework and assess its accuracy by comparison to the Les Houches heavy quark benchmarks. We discuss the impact on parton distributions of the treatment of the heavy quark masses, and we provide a determination of the uncertainty in the parton distributions due to uncertainty in the masses. We assess the impact of these uncertainties on LHC observables by providing parton sets with different values of the charm and bottom quark masses. Finally, we construct and discuss parton sets with a fixed number of flavors.
Nuclear Physics | 2009
Richard D. Ball; Luigi Del Debbio; Stefano Forte; Alberto Guffanti; Jose I. Latorre; Andrea Piccione; Juan Rojo; Maria Ubiali
We use recent neutrino dimuon production data combined with a global deep-inelastic parton fit to construct a new parton set, NNPDF1.2, which includes a determination of the strange and antistrange distributions of the nucleon. The result is characterized by a faithful estimation of uncertainties thanks to the use of the NNPDF methodology, and is free of model or theoretical assumptions other than the use of NLO perturbative QCD and exact sum rules. Better control of the uncertainties of the strange and antistrange parton distributions allows us to reassess the determination of electroweak parameters from the NuTeV dimuon data. We perform a direct determination of the |V_cd| and |V_cs| CKM matrix elements, obtaining central values in agreement with the current global CKM fit: specifically we find |V_cd| = 0.244 ± 0.019 and |V_cs| = 0.96 ± 0.07. Our result for |V_cs| is more precise than any previous direct determination. We also reassess the uncertainty on the NuTeV determination of sin²θ_W through the Paschos–Wolfenstein relation: we find that the very large uncertainties in the strange valence momentum fraction are sufficient to bring the NuTeV result into complete agreement with the results from precision electroweak data.
Nuclear Physics | 2011
Richard D. Ball; Valerio Bertone; Francesco Cerutti; Luigi Del Debbio; Stefano Forte; Alberto Guffanti; Jose I. Latorre; Juan Rojo; Maria Ubiali
We present a method for incorporating the information contained in new datasets into an existing set of parton distribution functions without the need for refitting. The method involves reweighting the ensemble of parton densities through the computation of the chi-square to the new dataset. We explain how reweighting may be used to assess the impact of any new data or pseudodata on parton densities and thus on their predictions. We show that the method works by considering the addition of inclusive jet data to a DIS+DY fit, and comparing to the refitted distribution. We then use reweighting to determine the impact of recent high statistics lepton asymmetry data from the D0 experiment on the NNPDF2.0 parton set. We find that the D0 inclusive muon and electron data are perfectly compatible with the rest of the data included in the NNPDF2.0 analysis and impose additional constraints on the large-x d/u ratio. The more exclusive D0 electron datasets are however inconsistent both with the other datasets and among themselves, suggesting that here the experimental uncertainties have been underestimated.
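The reweighting described here can be sketched numerically, assuming the standard NNPDF Bayesian weight formula w_k ∝ (χ²_k)^((n−1)/2) exp(−χ²_k/2) for n new data points; the replica "predictions" below are toy numbers, not real PDFs:

```python
import numpy as np

def reweight(chi2, ndata):
    """Bayesian weights w_k ~ chi2_k^((n-1)/2) * exp(-chi2_k/2),
    normalized so the weights average to one over the replica ensemble."""
    logw = 0.5 * (ndata - 1) * np.log(chi2) - 0.5 * chi2
    logw -= logw.max()            # work in log space for numerical stability
    w = np.exp(logw)
    return w * len(w) / w.sum()

# Toy ensemble: 1000 replica predictions for a single observable.
rng = np.random.default_rng(42)
preds = rng.normal(0.50, 0.10, 1000)

# A new datum y +- sigma that prefers larger values of the observable.
y, sigma = 0.60, 0.05
chi2 = ((preds - y) / sigma) ** 2
w = reweight(chi2, ndata=1)

# Reweighted prediction and effective number of replicas.
mean_rw = np.mean(w * preds)
neff = np.exp(np.mean(w * np.log(len(w) / w)))
```

The reweighted mean moves from the prior 0.50 toward the new datum, and neff (well below the original 1000) quantifies how much information the new data carry, indicating when a full refit becomes necessary.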
Journal of High Energy Physics | 2010
Richard D. Ball; Luigi Del Debbio; Stefano Forte; Alberto Guffanti; Jose I. Latorre; Juan Rojo; Maria Ubiali
The extraction of robust parton distribution functions with faithful errors requires a careful treatment of the uncertainties in the experimental results. In particular, the data sets used in current analyses each have a different overall multiplicative normalization uncertainty that needs to be properly accounted for in the fitting procedure. Here we consider the generic problem of performing a global fit to many independent data sets each with a different overall multiplicative normalization uncertainty. We show that the methods in common use to treat multiplicative uncertainties lead to systematic biases. We develop a method which is unbiased, based on a self-consistent iterative procedure. We then apply our generic method to the determination of parton distribution functions with the NNPDF methodology, which uses a Monte Carlo method for uncertainty estimation.
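The systematic bias of the naive treatment can be seen in a well-known two-measurement example (in the spirit of d'Agostini's classic illustration; the numbers here are invented): building the covariance matrix from the *measured* values drags the weighted average below both measurements.

```python
import numpy as np

# Two measurements of the same quantity, 2% statistical error each,
# plus a common 10% multiplicative normalization uncertainty.
y = np.array([8.0, 8.5])
stat = 0.02 * y
f = 0.10  # relative normalization uncertainty

# Naive treatment: covariance built from the measured central values.
V = np.diag(stat**2) + f**2 * np.outer(y, y)
Vinv = np.linalg.inv(V)
u = np.ones(2)

# Best linear unbiased estimate under this (biased) covariance.
xhat = (u @ Vinv @ y) / (u @ Vinv @ u)
print(round(xhat, 2))  # ~7.87, below BOTH measurements
```

An unbiased iterative procedure of the kind developed in the paper instead recomputes the normalization contribution from the fitted theory values rather than the data, removing this downward pull.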
Physics Letters B | 2010
Fabrizio Caola; Stefano Forte; Juan Rojo
We search for deviations from next-to-leading order QCD evolution in HERA structure function data. We compare predictions for structure functions in the small-x region with the data; the predictions are obtained by evolving backwards to low Q² the results of a parton fit performed in the large-Q² region, where fixed-order perturbative QCD is certainly reliable. We find evidence for deviations which are qualitatively consistent with the behaviour predicted by small-x perturbative resummation, and possibly also by non-linear evolution effects, but incompatible with next-to-next-to-leading order corrections.
Nuclear Physics | 2011
Fabrizio Caola; Stefano Forte; Juan Rojo
We examine critically the evidence for deviations from next-to-leading order perturbative DGLAP evolution in HERA data. We briefly review the status of perturbative small-x resummation and of global determinations of parton distributions. We show that the geometric scaling properties of HERA data are consistent with DGLAP evolution, which is also strongly supported by the double asymptotic scaling properties of the data. However, backward-evolution of parton distributions into the low-x, low-Q² region does show evidence of deviations between the observed behaviour and the next-to-leading order predictions. These deviations cannot be explained by missing next-to-next-to-leading order perturbative terms, and are consistent with perturbative small-x resummation.
Physics Letters B | 2011
Simone Lionetti; Richard D. Ball; Valerio Bertone; Francesco Cerutti; Luigi Del Debbio; Stefano Forte; Alberto Guffanti; Jose I. Latorre; Juan Rojo; Maria Ubiali
We determine the strong coupling α_s from a next-to-leading order analysis of processes used for the NNPDF2.1 parton determination, which includes data from neutral and charged current deep-inelastic scattering, Drell–Yan and inclusive jet production. We find α_s(M_Z) = 0.1191 ± 0.0006 (exp), where the uncertainty includes all statistical and systematic experimental uncertainties, but not purely theoretical uncertainties, which are expected to be rather larger. We study the dependence of the results on the dataset, by providing further determinations based respectively on deep-inelastic data only, and on HERA data only. The deep-inelastic fit gives the consistent result α_s(M_Z) = 0.1177 ± 0.0009 (exp), but the result of the HERA-only fit is only marginally consistent. We provide evidence that individual data subsets can have runaway directions due to poorly determined PDFs, thus suggesting that a global dataset is necessary for a reliable determination.
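The generic mechanics of such an extraction can be sketched as a parabolic fit to a χ² profile with the Δχ² = 1 criterion for the experimental uncertainty; the scan points below are synthetic (built to have a minimum at 0.1191 with width 0.0006), not the paper's actual profile:

```python
import numpy as np

# Synthetic chi2 profile over a scan in alpha_s(M_Z).
alphas = np.linspace(0.114, 0.124, 21)
chi2 = 3000.0 + ((alphas - 0.1191) / 0.0006) ** 2

# Fit a parabola in a centered variable for good numerical conditioning:
# chi2(a) = c0*(a - shift)^2 + c1*(a - shift) + c2.
shift = alphas.mean()
c0, c1, c2 = np.polyfit(alphas - shift, chi2, 2)

best = shift - c1 / (2.0 * c0)   # location of the chi2 minimum
err = 1.0 / np.sqrt(c0)          # half-width of the delta-chi2 = 1 interval
```

Since χ² = χ²_min + (α − α̂)²/σ² near the minimum, the quadratic coefficient directly gives the 1-σ uncertainty as σ = 1/√c0.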
arXiv: High Energy Physics - Phenomenology | 2009
Juan Rojo; Richard D. Ball; Luigi Del Debbio; Stefano Forte; Alberto Guffanti; Jose I. Latorre; Andrea Piccione; Maria Ubiali
We present recent progress within the NNPDF parton analysis framework. After a brief review of the results from the DIS NNPDF analysis, NNPDF1.0, we discuss results from an updated analysis with independent parametrizations for the strange and anti-strange distributions, denoted by NNPDF1.1. We examine the phenomenological implications of this improved analysis for the strange PDFs.

Introduction

PDFs and their associated uncertainties will play a crucial role in the full exploitation of the LHC physics potential. However, it is known that the standard approach to PDF determination [1, 2] suffers from several drawbacks, mainly related to the lack of control over the bias introduced by the choice of specific PDF parametrizations and flavour assumptions, as well as to the difficulty of providing a consistent statistical interpretation of PDF uncertainties in the presence of incompatible data. Motivated by this situation, a novel method has been introduced which combines a Monte Carlo sampling of the experimental data with neural networks as unbiased interpolators for the PDF parametrization. This method, proposed by the NNPDF Collaboration, was first successfully applied to the parametrization of DIS structure functions [3, 4] and more recently to the determination of PDFs [5, 6]. In this contribution we present recent results within this NNPDF analysis framework.

The NNPDF1.0 analysis

NNPDF1.0 [6] was the first DIS PDF analysis from the NNPDF Collaboration. The experimental dataset used in the NNPDF1.0 analysis consists of all relevant fixed-target and collider deep-inelastic scattering data: structure functions from NMC, SLAC and BCDMS, CC and NC reduced cross-sections from HERA, direct F_L(x,Q²) measurements from H1, and neutrino CC reduced cross-sections from CHORUS.
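The Monte Carlo side of the method can be illustrated with a toy fit, where a straight line stands in for the neural network and Gaussian pseudodata replicas stand in for the data sampling; all numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "experimental" data: 20 points on a line with 0.1 uncorrelated errors.
x = np.linspace(0.1, 0.9, 20)
sigma = np.full_like(x, 0.1)
data = 2.0 * x + 1.0 + sigma * rng.normal(size=x.size)

# Monte Carlo sampling: generate pseudodata replicas and fit one model each.
nrep = 500
fits = np.empty(nrep)
for k in range(nrep):
    replica = data + sigma * rng.normal(size=x.size)
    coeffs = np.polyfit(x, replica, 1)     # stand-in for a neural-net fit
    fits[k] = np.polyval(coeffs, 0.5)      # the fitted "PDF" at x = 0.5

# Central value and uncertainty are the mean and spread of the ensemble.
central, uncert = fits.mean(), fits.std()
```

The point of the construction is that uncertainties on any derived quantity are obtained simply as the spread over the replica ensemble, with no Gaussian or linear approximation and no tolerance factor.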
In NNPDF1.0, five PDFs are parametrized with neural networks at the initial evolution scale, taken to be Q_0² = m_c² = 2 GeV²: Σ(x,Q_0²); V(x,Q_0²) ≡ (u_v + d_v + s_v)(x,Q_0²); T_3(x,Q_0²) ≡ (u + ū − d − d̄)(x,Q_0²); Δ_S(x,Q_0²) ≡ (d̄ − ū)(x,Q_0²); and g(x,Q_0²). The strange distributions are fixed by the additional assumption

    s(x,Q_0²) = s̄(x,Q_0²) = (C_S/2) [ū(x,Q_0²) + d̄(x,Q_0²)].   (1)

The fraction of (symmetric) strange over non-strange sea is taken to be C_S = 0.5, as suggested by dimuon data. While recent analyses (see [7] and references therein) suggest a somewhat smaller central value, Eq. (1) is a very crude approximation, and uncertainties in C_S are therefore expected to be rather large, as the new NNPDF1.1 analysis discussed below confirms. The overall normalizations of g(x) and V(x) are fixed by imposing the momentum and valence sum rules. The NNPDF NLO evolution program employs a hybrid N-space and x-space method [5], whose accuracy has been checked against the Les Houches benchmark tables [8], obtained from a comparison of the HOPPET [9] and PEGASUS [10] evolution programs. The NNPDF1.0 gluon and singlet PDFs are shown in Fig. 1, compared with the results of other sets. Our analysis produces results consistent with those obtained by other collaborations [1, 2], while our error bands tend to become larger in the regions where the data do not constrain the PDF behaviour. Interestingly, this happens without any error blow-up from the use of large tolerance factors [1, 2] in the PDF error definition.

The NNPDF1.1 analysis

NNPDF1.1 is an update of the NNPDF1.0 analysis described above, which introduces independent parametrizations in the strange PDF sector and a randomization of the preprocessing exponents. The motivations for this update are twofold.
First of all, the stability analysis of [6], where the preprocessing exponents were varied around their optimal values, indicated that uncertainties might have been slightly underestimated for some PDFs in some restricted x-regions, such as the valence PDF at large x. On the other hand, the restrictive assumptions on the strange distributions, Eq. (1), should also lead to an underestimate of the uncertainty for some PDFs and some observables, especially those directly sensitive to the strange sector. Instead of the simplified assumption of Eq. (1), in NNPDF1.1 both s+(x,Q_0²) and s−(x,Q_0²) are parametrized with independent neural networks. The architecture is the same as in [6], so that each PDF is described by 37 free parameters. The s−(x) distribution is forced to satisfy the strange valence sum rule following the method of [6]. These strange PDFs are mostly constrained in our analysis by the CHORUS data as well as by the HERA CC data. Another improvement in NNPDF1.1 with respect to NNPDF1.0 is a randomization of the preprocessing exponents, which were kept fixed at their optimal values in [6]. In the present analysis, for each replica the PDF preprocessing exponents are allowed to vary at random within a given range, given in Table 1. This range is determined as the range in which variations of the preprocessing exponents produce no deterioration of the fit quality (see Table 11 in [6]). In Fig. 1 we show the results of the NNPDF1.1 analysis for the Σ(x), g(x), s+(x) and s−(x) distributions, compared to other PDF sets, including NNPDF1.0. First of all, we observe that the central values for both PDFs are reasonably close between NNPDF1.0 and NNPDF1.1, thus supporting the validity of the flavour assumptions in the former.
Second, we see that the uncertainties in s+(x) are large, so that all other PDF sets are included within the NNPDF1.1 error band, which turns out to be much larger than for NNPDF1.0, where the strange sea was fixed by Eq. (1). The situation for the strange valence PDF s−(x) is similar: it turns out to be completely unconstrained by the present dataset (see Fig. 1), with a central value compatible with zero.

PDF            m            n
Σ(x,Q_0²)      [2.7, 3.3]   [1.1, 1.3]
g(x,Q_0²)      [3.7, 4.3]   [1.1, 1.3]
T_3(x,Q_0²)    [2.7, 3.3]   [0.1, 0.4]
V(x,Q_0²)      [2.7, 3.3]   [0.1, 0.4]
Δ_S(x,Q_0²)    [2.7, 3.3]   [0, 0.01]
s+(x,Q_0²)     [2.7, 3.3]   [1.1, 1.3]
s−(x,Q_0²)     [2.7, 3.3]   [0.1, 0.4]

Table 1: The range of variation of the preprocessing exponents used in NNPDF1.1.
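The randomization amounts to drawing, for each replica, one (m, n) pair per PDF uniformly within the Table 1 ranges (in the NNPDF1.x fits the neural-network output is multiplied by a preprocessing factor roughly of the form x^(−n)(1−x)^m; the dictionary keys below are simplified labels):

```python
import random

# Ranges of the preprocessing exponents (m, n) from Table 1 of the text.
RANGES = {
    "Sigma":  ((2.7, 3.3), (1.1, 1.3)),
    "g":      ((3.7, 4.3), (1.1, 1.3)),
    "T3":     ((2.7, 3.3), (0.1, 0.4)),
    "V":      ((2.7, 3.3), (0.1, 0.4)),
    "DeltaS": ((2.7, 3.3), (0.0, 0.01)),
    "s+":     ((2.7, 3.3), (1.1, 1.3)),
    "s-":     ((2.7, 3.3), (0.1, 0.4)),
}

def draw_exponents(rng):
    """One random (m, n) pair per PDF, uniform within its allowed range."""
    return {pdf: (rng.uniform(*m_rng), rng.uniform(*n_rng))
            for pdf, (m_rng, n_rng) in RANGES.items()}

rng = random.Random(0)
replicas = [draw_exponents(rng) for _ in range(100)]  # one draw per replica
```

Varying the exponents replica by replica propagates the preprocessing ambiguity into the final PDF uncertainty band, rather than freezing it at a single optimal choice.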
Physics Letters B | 2010
Giovanni Diana; Juan Rojo; Richard D. Ball
Direct photon production is an important process at hadron colliders, being relevant both for precision measurement of the gluon density, and as background to Higgs and other new physics searches. Here we explore the implications of recently derived results for high-energy resummation of direct photon production for the interpretation of measurements at the Tevatron and the LHC. The effects of resummation are compared to various sources of theoretical uncertainty such as PDFs and scale variations. We show how the high-energy resummation procedure stabilizes the logarithmic enhancement of the cross section at high energy which is present at any fixed order in the perturbative expansion starting at NNLO. The effects of high-energy resummation are found to be negligible at the Tevatron, while they enhance the cross section by a few percent for p_T ≲ 10 GeV at the LHC. Our results imply that the discrepancy at small p_T between fixed-order NLO and Tevatron data cannot be explained by unresummed high-energy contributions.
Nuclear Physics | 2009
Richard D. Ball; Luigi Del Debbio; Stefano Forte; Alberto Guffanti; Jose I. Latorre; Andrea Piccione; Juan Rojo; Maria Ubiali