Andrea Piccione
University of Milan
Publications
Featured research published by Andrea Piccione.
Nuclear Physics | 2009
Richard D. Ball; Luigi Del Debbio; Stefano Forte; Alberto Guffanti; Jose I. Latorre; Andrea Piccione; Juan Rojo; Maria Ubiali
We use recent neutrino dimuon production data combined with a global deep-inelastic parton fit to construct a new parton set, NNPDF1.2, which includes a determination of the strange and antistrange distributions of the nucleon. The result is characterized by a faithful estimation of uncertainties thanks to the use of the NNPDF methodology, and is free of model or theoretical assumptions other than the use of NLO perturbative QCD and exact sum rules. Better control of the uncertainties of the strange and antistrange parton distributions allows us to reassess the determination of electroweak parameters from the NuTeV dimuon data. We perform a direct determination of the |V_cd| and |V_cs| CKM matrix elements, obtaining central values in agreement with the current global CKM fit: specifically, we find |V_cd| = 0.244 ± 0.019 and |V_cs| = 0.96 ± 0.07. Our result for |V_cs| is more precise than any previous direct determination. We also reassess the uncertainty on the NuTeV determination of sin²θ_W through the Paschos–Wolfenstein relation: we find that the very large uncertainties in the strange valence momentum fraction are sufficient to bring the NuTeV result into complete agreement with the results from precision electroweak data.
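For reference, the Paschos–Wolfenstein relation invoked in this abstract reads, at leading order for an isoscalar target with a symmetric strange sea,

```latex
R^{-} \;\equiv\; \frac{\sigma_{NC}^{\nu} - \sigma_{NC}^{\bar\nu}}{\sigma_{CC}^{\nu} - \sigma_{CC}^{\bar\nu}}
\;=\; \frac{1}{2} - \sin^2\theta_W ,
```

and a nonzero strange valence momentum fraction, ∫ x[s(x) − s̄(x)] dx, induces corrections to this relation; the large uncertainty on that fraction is what drives the reassessment described above.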
Journal of High Energy Physics | 2002
Stefano Forte; Lluís Garrido; Jose I. Latorre; Andrea Piccione
We construct a parametrization of deep-inelastic structure functions which retains information on experimental errors and correlations, and which does not introduce any theoretical bias while interpolating between existing data points. We generate a Monte Carlo sample of pseudo-data configurations and we train an ensemble of neural networks on them. This effectively provides us with a probability measure in the space of structure functions, within the whole kinematic region where data are available. This measure can then be used to determine the value of the structure function, its error, point-to-point correlations, and generally the value and uncertainty of any function of the structure function itself. We apply this technique to the determination of the structure function F2 of the proton and deuteron, and to a precision determination of the isotriplet combination F2[p−d]. We discuss these results in detail, check their stability and accuracy, and make them available in various formats for applications.
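As an illustration of the Monte Carlo pseudo-data step described above, here is a minimal sketch (toy numbers and a hypothetical covariance matrix, not the authors' code) that draws replicas of correlated measurements and checks that the ensemble reproduces the data statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "measurements" of a structure function at four kinematic points,
# with a hypothetical covariance encoding errors and point-to-point correlations.
central = np.array([0.40, 0.35, 0.30, 0.25])
sigma = np.array([0.02, 0.02, 0.03, 0.03])
corr = 0.5  # assumed uniform off-diagonal correlation
cov = np.outer(sigma, sigma) * (corr + (1.0 - corr) * np.eye(4))

# Draw Monte Carlo replicas distributed according to the data covariance.
n_rep = 1000
replicas = rng.multivariate_normal(central, cov, size=n_rep)

# The replica ensemble reproduces central values and covariance
# up to fluctuations of order 1/sqrt(n_rep).
mean_est = replicas.mean(axis=0)
cov_est = np.cov(replicas, rowvar=False)
```

Any functional of the data (here simply the mean and covariance; in the paper, neural networks trained on each replica) can then be evaluated replica by replica to propagate errors and correlations.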
Journal of High Energy Physics | 2007
Luigi Del Debbio; Stefano Forte; Jose I. Latorre; Andrea Piccione; J. Rojo
We provide a determination of the isotriplet quark distribution from available deep-inelastic data using neural networks. We give a general introduction to the neural network approach to parton distributions, which provides a solution to the problem of constructing a faithful and unbiased probability distribution of parton densities based on available experimental information. We discuss in detail the techniques which are necessary in order to construct a Monte Carlo representation of the data, to construct and evolve neural parton distributions, and to train them in such a way that the correct statistical features of the data are reproduced. We present the results of the application of this method to the determination of the nonsinglet quark distribution up to next-to-next-to-leading order, and compare them with those obtained using other approaches.
Nuclear Physics | 1998
Andrea Piccione; Giovanni Ridolfi
We present a computation of nucleon mass corrections to nucleon structure functions for polarized deep-inelastic scattering. We perform a fit to existing data including mass corrections at first order in m²/Q², and we study the effect of these corrections on physically interesting quantities. We conclude that mass corrections are generally small, and compatible with current estimates of higher-twist uncertainties, when available.
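For context, target mass corrections of this kind are conventionally organized through the Nachtmann variable (standard definition, not specific to this paper's conventions):

```latex
\xi \;=\; \frac{2x}{1+\sqrt{1+4m^2x^2/Q^2}}
\;\simeq\; x\left(1-\frac{m^2x^2}{Q^2}\right) + O\!\left(\frac{m^4}{Q^4}\right),
```

so that at first order in m²/Q² the corrections are polynomial in m²x²/Q², consistent with the expansion used in the fit above.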
Journal of High Energy Physics | 2005
Luigi Del Debbio; Stefano Forte; Jose I. Latorre; Andrea Piccione; J. Rojo
We construct a parametrization of the deep-inelastic structure function of the proton F2(x,Q2) based on all available experimental information from charged lepton deep-inelastic scattering experiments. The parametrization effectively provides a bias-free determination of the probability measure in the space of structure functions, which retains information on experimental errors and correlations. The result is obtained in the form of a Monte Carlo sample of neural networks trained on an ensemble of replicas of the experimental data. We discuss in detail the techniques required for the construction of bias-free parameterizations of large amounts of structure function data, in view of future applications to the determination of parton distributions based on the same method.
arXiv: High Energy Physics - Phenomenology | 2009
Juan Rojo; Richard D. Ball; Luigi Del Debbio; Stefano Forte; Alberto Guffanti; Jose I. Latorre; Andrea Piccione; Maria Ubiali
We present recent progress within the NNPDF parton analysis framework. After a brief review of the results from the DIS NNPDF analysis, NNPDF1.0, we discuss results from an updated analysis with independent parametrizations for the strange and anti-strange distributions, denoted by NNPDF1.1. We examine the phenomenological implications of this improved analysis for the strange PDFs.

Introduction

PDFs and their associated uncertainties will play a crucial role in the full exploitation of the LHC physics potential. However, it is known that the standard approach to PDF determination [1, 2] suffers from several drawbacks, mainly related to the lack of control over the bias introduced by the choice of specific PDF parametrizations and flavour assumptions, as well as to the difficulty of providing a consistent statistical interpretation of PDF uncertainties in the presence of incompatible data. Motivated by this situation, a novel method has been introduced which combines a Monte Carlo sampling of experimental data with neural networks as unbiased interpolators for the PDF parametrization. This method, proposed by the NNPDF Collaboration, was first successfully applied to the parametrization of DIS structure functions [3, 4] and more recently to the determination of PDFs [5, 6]. In this contribution we present recent results within this NNPDF analysis framework.

The NNPDF1.0 analysis

NNPDF1.0 [6] was the first DIS PDF analysis from the NNPDF Collaboration. The experimental dataset used in the NNPDF1.0 analysis consists of all relevant fixed-target and collider deep-inelastic scattering data: structure functions from NMC, SLAC and BCDMS, CC and NC reduced cross-sections from HERA, direct FL(x,Q²) measurements from H1, and neutrino CC reduced cross-sections from CHORUS.
In NNPDF1.0, five PDFs are parametrized with neural networks at the initial evolution scale, which is taken to be Q₀² = m_c² = 2 GeV²: the singlet Σ(x,Q₀²), the total valence V(x,Q₀²) ≡ (u_v + d_v + s_v)(x,Q₀²), the triplet T₃(x,Q₀²) ≡ (u + ū − d − d̄)(x,Q₀²), the sea asymmetry ΔS(x,Q₀²) ≡ (d̄ − ū)(x,Q₀²), and the gluon g(x,Q₀²). The strange distributions are fixed by the additional assumption

s(x,Q₀²) = s̄(x,Q₀²) = (C_S/2) [ū(x,Q₀²) + d̄(x,Q₀²)].   (1)

The fraction of (symmetric) strange over non-strange sea is taken to be C_S = 0.5, as suggested by dimuon data. While recent analyses (see [7] and references therein) suggest a somewhat smaller central value, Eq. (1) is a very crude approximation, and uncertainties in C_S are therefore expected to be rather large, as the new NNPDF1.1 analysis confirms below. The overall normalizations of g(x), V(x) and ΔS(x) are fixed by imposing the momentum and valence sum rules. The NNPDF NLO evolution program employs a hybrid N-space and x-space method [5], whose accuracy has been checked against the Les Houches benchmark tables [8], obtained from a comparison of the HOPPET [9] and PEGASUS [10] evolution programs. The NNPDF1.0 gluon and singlet PDFs are shown in Fig. 1, compared with the results of other sets. We observe that our analysis produces results consistent with those obtained by other collaborations [1, 2], while our error bands tend to become larger in the region where data do not constrain the PDF behaviour. Interestingly, this happens without any error blow-up from the use of large tolerance factors [1, 2] in the PDF error definition.

The NNPDF1.1 analysis

NNPDF1.1 is an update of the NNPDF1.0 analysis described above, which introduces independent parametrizations in the strange PDF sector and a randomization of the preprocessing exponents. The motivations for this update are twofold.
First of all, the stability analysis of [6], where the preprocessing exponents were varied around their optimal values, indicated that uncertainties might have been slightly underestimated for some PDFs in restricted x-regions, for example the valence PDF at large x. On the other hand, the restrictive assumptions on the strange distributions, Eq. (1), should also lead to an uncertainty underestimation for some PDFs and some observables, especially those directly sensitive to the strange sector. Instead of the simplified assumptions of Eq. (1), in NNPDF1.1 both s⁺(x,Q₀²) ≡ (s + s̄)(x,Q₀²) and s⁻(x,Q₀²) ≡ (s − s̄)(x,Q₀²) are parametrized with independent neural networks. The architecture is the same as in [6], so that each PDF is described by 37 free parameters. The s⁻(x) distribution is forced to satisfy the strange valence sum rule following the method of [6]. These strange PDFs are mostly constrained in our analysis by the CHORUS data as well as by the HERA CC data. Another improvement in NNPDF1.1 with respect to NNPDF1.0 is a randomization of the preprocessing exponents, which were kept fixed at their optimal values in [6]. In the present analysis, for each replica the PDF preprocessing exponents are allowed to vary at random within a given range, reported in Table 1. This range is determined as the range in which variations of the preprocessing exponents produce no deterioration of the fit quality (see Table 11 of [6]). In Fig. 1 we show the results of the NNPDF1.1 analysis for the Σ(x), g(x), s⁺(x) and s⁻(x) distributions, compared to other PDF sets, including NNPDF1.0. First of all, we observe that the central values for both PDFs are reasonably close between NNPDF1.0 and NNPDF1.1, thus supporting the validity of the flavour assumptions in the former case.
Second, we see that the uncertainties in s⁺(x) are large, so that all other PDF sets are included within the NNPDF1.1 error band, which turns out to be much larger than for NNPDF1.0, since there the strange sea was fixed by Eq. (1). The situation for the strange valence PDF s⁻(x) is similar: it turns out to be completely unconstrained by the present data set (see Fig. 1), with a central value compatible with zero.

Table 1: The range of variation of the preprocessing exponents used in NNPDF1.1.

  PDF           m           n
  Σ(x,Q₀²)     [2.7, 3.3]  [1.1, 1.3]
  g(x,Q₀²)     [3.7, 4.3]  [1.1, 1.3]
  T₃(x,Q₀²)    [2.7, 3.3]  [0.1, 0.4]
  V(x,Q₀²)     [2.7, 3.3]  [0.1, 0.4]
  ΔS(x,Q₀²)    [2.7, 3.3]  [0, 0.01]
  s⁺(x,Q₀²)    [2.7, 3.3]  [1.1, 1.3]
  s⁻(x,Q₀²)    [2.7, 3.3]  [0.1, 0.4]
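In NNPDF fits the preprocessing enters as a factor x^(−n) (1−x)^m multiplying the neural network output; the randomization described above can be sketched as follows (a hypothetical stand-in for the trained network, using the singlet ranges from Table 1; not the collaboration's actual code):

```python
import numpy as np

rng = np.random.default_rng(42)

# Preprocessing exponent ranges for the singlet, as quoted in Table 1.
M_RANGE = (2.7, 3.3)   # large-x exponent m
N_RANGE = (1.1, 1.3)   # small-x exponent n

def sample_exponents():
    """Draw one (m, n) pair uniformly, anew for each replica."""
    return rng.uniform(*M_RANGE), rng.uniform(*N_RANGE)

def toy_nn(x):
    # Hypothetical stand-in for the trained neural network output.
    return 1.0 + 0.1 * x

def preprocessed_pdf(x, m, n):
    # Small-x and large-x behaviour factored out of the neural network:
    # f(x) = x^(-n) * (1 - x)^m * NN(x).
    return x ** (-n) * (1.0 - x) ** m * toy_nn(x)

m, n = sample_exponents()
x = np.linspace(0.01, 0.99, 5)
vals = preprocessed_pdf(x, m, n)
```

Because each replica gets its own (m, n) draw, the spread of the replica ensemble picks up the uncertainty associated with the choice of preprocessing.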
Nuclear Physics | 2001
Stefano Forte; Lorenzo Magnea; Andrea Piccione; Giovanni Ridolfi
We define truncated Mellin moments of parton distributions by restricting the integration range over the Bjorken variable to the experimentally accessible subset x₀ ≤ x ≤ 1 of the allowed kinematic range 0 ≤ x ≤ 1. We derive the evolution equations satisfied by truncated moments in the general (singlet) case in terms of an infinite triangular matrix of anomalous dimensions which couple each truncated moment to all higher moments with orders differing by integers. We show that the evolution of any moment can be determined to arbitrarily good accuracy by truncating the system of coupled moments to a sufficiently large but finite size, and show how the equations can be solved in a way suitable for numerical applications. We discuss in detail the accuracy of the method in view of applications to precision phenomenology.
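The truncated moments defined above are straightforward to evaluate numerically; a small sketch with a toy distribution (hypothetical, chosen so the full first moment is exactly 1):

```python
import numpy as np

def toy_pdf(x):
    # Toy parton distribution (hypothetical), normalized so that
    # the full first moment integrates to 1.
    return 6.0 * x * (1.0 - x)

def truncated_moment(pdf, N, x0, n_grid=20001):
    """Truncated Mellin moment: integral of x^(N-1) * pdf(x) over [x0, 1]."""
    x = np.linspace(x0, 1.0, n_grid)
    y = x ** (N - 1) * pdf(x)
    h = (1.0 - x0) / (n_grid - 1)
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])  # trapezoidal rule

m1_full = truncated_moment(toy_pdf, 1, 0.0)   # full moment, equals 1 analytically
m1_trunc = truncated_moment(toy_pdf, 1, 0.1)  # restricted to x0 = 0.1 <= x <= 1
```

For this toy distribution the truncated first moment has the closed form 1 − 3x₀² + 2x₀³, which makes the numerical routine easy to validate.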
Nuclear Instruments & Methods in Physics Research Section A-accelerators Spectrometers Detectors and Associated Equipment | 2006
Andrea Piccione; Luigi Del Debbio; Stefano Forte; Jose I. Latorre; J. Rojo
We show an application of neural networks to the extraction of information on the structure of hadrons. A Monte Carlo sampling of the experimental data is performed to correctly reproduce data errors and correlations. A neural network is then trained on each Monte Carlo replica via a genetic algorithm. Results for the proton and deuteron structure functions and for the nonsinglet parton distribution are shown.
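A toy version of the "neural network trained via a genetic algorithm" step might look as follows (tiny network, synthetic data, all sizes and rates hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "replica" data: a structure-function-like curve plus noise.
x_data = np.linspace(0.05, 0.95, 20)
y_data = (1.0 - x_data) ** 2 + rng.normal(0.0, 0.01, x_data.size)

def nn(params, x):
    """Tiny 1-3-1 feed-forward network; params packs weights and biases."""
    w1, b1, w2, b2 = params[:3], params[3:6], params[6:9], params[9]
    hidden = np.tanh(np.outer(x, w1) + b1)  # shape (len(x), 3)
    return hidden @ w2 + b2

def chi2(params):
    return np.sum((nn(params, x_data) - y_data) ** 2)

# Genetic algorithm: keep the fittest parameter vectors, mutate them.
pop = rng.normal(0.0, 1.0, size=(40, 10))
for generation in range(300):
    scores = np.array([chi2(p) for p in pop])
    best = pop[np.argsort(scores)[:10]]                          # selection
    mutants = best[rng.integers(0, 10, 30)] + rng.normal(0.0, 0.1, (30, 10))
    pop = np.vstack([best, mutants])                             # next generation

final_chi2 = min(chi2(p) for p in pop)
```

Elitist selection makes the best score non-increasing across generations; in the actual analyses the same idea is applied per replica to the full experimental χ² with correlated errors.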
arXiv: High Energy Physics - Phenomenology | 2005
J. Rojo; Luigi Del Debbio; Stefano Forte; Jose I. Latorre; Andrea Piccione
We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits.
Journal of High Energy Physics | 2009
L. Del Debbio; Alberto Guffanti; Andrea Piccione
Determinations of structure functions and parton distribution functions have recently been obtained using Monte Carlo methods and neural networks as universal, unbiased interpolants for the unknown functional dependence. In this work the same methods are applied to obtain a parametrization of polarized Deep Inelastic Scattering (DIS) structure functions. The Monte Carlo approach provides a bias-free determination of the probability measure in the space of structure functions, while retaining all the information on experimental errors and correlations. In particular, the error on the data is propagated into an error on the structure functions that has a clear statistical meaning. We present the application of this method to the parametrization of the photon asymmetries A1p and A1d from polarized DIS data, from which we determine the structure functions g1p(x,Q2) and g1d(x,Q2), and we discuss the possibility of extracting physical parameters from these parametrizations. This work can be used as a starting point for the determination of polarized parton distributions.
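As a reminder of the standard relations behind such extractions (at leading order, neglecting the subleading g2 contribution; not specific to this paper's conventions), the measured asymmetry fixes g1 through

```latex
g_1(x,Q^2) \;\simeq\; A_1(x,Q^2)\, F_1(x,Q^2), \qquad
F_1(x,Q^2) \;=\; \frac{F_2(x,Q^2)\,\bigl(1+\gamma^2\bigr)}{2x\,\bigl(1+R(x,Q^2)\bigr)}, \qquad
\gamma^2 = \frac{4M^2x^2}{Q^2},
```

with R the longitudinal-to-transverse cross-section ratio and M the nucleon mass.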