R. Tripiccione
University of Ferrara
Publications
Featured research published by R. Tripiccione.
International Symposium on Physical Design | 1996
Roberto Benzi; Luca Biferale; Sergio Ciliberto; M. V. Struglia; R. Tripiccione
In this paper we report numerical and experimental results on the scaling properties of turbulent velocity fields in several flows. The limits of a new form of scaling, named Extended Self-Similarity (ESS), are discussed. We show that, when a mean shear is absent, the self-scaling exponents are universal and do not depend on the specific flow (3D homogeneous turbulence, thermal convection, MHD). In contrast, ESS is not observed when a strong shear is present. We propose a generalized version of self-scaling which extends down to the smallest resolvable scales even in cases where ESS is not present. This new scaling is checked in several laboratory and numerical experiments, and a possible theoretical interpretation is proposed. We also generate a synthetic turbulent signal that has most of the properties of a real one.
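The ESS idea of plotting one structure function against another, rather than against the separation r, can be sketched in a few lines. The example below uses a synthetic Brownian signal (an assumption purely for illustration; the paper analyzes real turbulent data) and estimates the relative exponent ζ_6/ζ_3:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a 1-D velocity signal (assumption: a Brownian
# walk; a real analysis would use measured or simulated turbulence data).
N = 2**16
u = np.cumsum(rng.standard_normal(N))

def structure_function(u, p, rs):
    """S_p(r) = <|u(x+r) - u(x)|^p> for each separation r."""
    return np.array([np.mean(np.abs(u[r:] - u[:-r])**p) for r in rs])

rs = np.unique(np.logspace(0, 3, 20).astype(int))
S3 = structure_function(u, 3, rs)
S6 = structure_function(u, 6, rs)

# ESS: fit log S_6 against log S_3 instead of against log r; the slope
# is the relative exponent zeta_6 / zeta_3.
slope = np.polyfit(np.log(S3), np.log(S6), 1)[0]
print(f"relative exponent zeta_6/zeta_3 ~ {slope:.2f}")
```

For a Brownian signal S_p(r) ∼ r^{p/2}, so the fitted relative exponent should come out close to 2; in real turbulence, intermittency corrections make it smaller.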
International Journal of Modern Physics C | 1993
A. Bartoloni; C. Battista; S. Cabasino; P.S. Paolucci; J. Pech; R. Sarno; G.M. Todesco; M. Torelli; W. Tross; P. Vicini; R. Benzi; N. Cabibbo; F. Massaioli; R. Tripiccione
In this paper we describe an implementation of the Lattice Boltzmann Equation method for fluid-dynamics simulations on the APE100 parallel computer. We have performed a simulation of a two-dimensional Rayleigh–Bénard convection cell. We have tested the theory proposed by Shraiman and Siggia for the scaling of the Nusselt number vs. the Rayleigh number.
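As a rough illustration of the method class, here is a minimal D2Q9 lattice Boltzmann BGK step on a periodic grid. This is a toy sketch, not the APE100 code; a Rayleigh–Bénard simulation would add a second population for the temperature field plus buoyancy forcing and wall boundaries:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights.
NX, NY = 32, 32
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8  # BGK relaxation time (assumed value, sets the viscosity)

def equilibrium(rho, ux, uy):
    """Standard second-order Maxwellian equilibrium, lattice units."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau        # collide
    for i in range(9):                                   # stream (periodic)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

# Start from a small shear-wave perturbation and check mass conservation.
rho0 = np.ones((NX, NY))
ux0 = 0.01 * np.sin(2*np.pi*np.arange(NX)/NX)[:, None] * np.ones(NY)
f = equilibrium(rho0, ux0, np.zeros((NX, NY)))
mass0 = f.sum()
for _ in range(100):
    f = step(f)
print("relative mass drift:", abs(f.sum() - mass0) / mass0)
```

Collision and streaming both conserve mass exactly, so the drift should be at the level of floating-point rounding.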
IEEE Nuclear Science Symposium | 2005
A. Annovi; A. Bardi; M. Bitossi; S. Chiozzi; C. Damiani; Mauro Dell'Orso; P. Giannetti; P. Giovacchini; G. Marchiori; I. Pedron; M. Piendibene; L. Sartori; F. Schifano; F. Spinella; S. Torre; R. Tripiccione
We describe a VLSI processor for pattern recognition based on a content addressable memory (CAM) architecture, optimized for on-line track finding in high-energy physics experiments. A large CAM bank stores all trajectories of interest and extracts the ones compatible with a given event. This task is naturally parallelized by a CAM architecture able to output identified trajectories, recognized among 2^96 possible combinations, in just a few 40 MHz clock cycles. We have developed this device (called the AMchip03 processor) for the silicon vertex trigger (SVT) upgrade at CDF using a standard-cell VLSI design methodology. This approach provides excellent pattern density, while sparing many of the complexities and risks associated with a full-custom design. At the same time, the cost-performance ratio is more than an order of magnitude better than that of an FPGA-based design. This processor has a flexible and easily configurable structure that makes it suitable for applications in other experimental environments. We look forward to sharing this technology.
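The matching principle of a CAM bank can be modeled in software: every stored pattern is compared against the incoming event in one parallel pass. The encoding below (one 8-bit word per detector layer, majority logic with one allowed miss) is a hypothetical simplification, not the AMchip03 format:

```python
import numpy as np

# Software model of CAM-style track finding: all stored patterns are
# compared against the event in a single vectorized pass, mimicking the
# fully parallel comparison a CAM bank performs in hardware.
N_LAYERS = 6
rng = np.random.default_rng(1)

# Bank of stored patterns: one 8-bit "road" word per detector layer
# (hypothetical encoding, for illustration only).
bank = rng.integers(0, 256, size=(10_000, N_LAYERS), dtype=np.uint8)

def match(bank, event, max_misses=1):
    """Return indices of patterns agreeing with the event on all layers,
    allowing up to `max_misses` layers to disagree (majority logic)."""
    hits = (bank == event).sum(axis=1)      # per-pattern layer-hit count
    return np.nonzero(hits >= N_LAYERS - max_misses)[0]

event = bank[42].copy()   # an event that exactly matches stored pattern 42
event[3] ^= 0xFF          # corrupt one layer: still within majority logic
roads = match(bank, event)
print("matched roads:", roads)
```

Pattern 42 still matches on 5 of 6 layers, so majority logic recovers it despite the corrupted layer.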
Computing in Science and Engineering | 2009
F. Belletti; M. Cotallo; A. Cruz; L. A. Fernandez; A. Gordillo-Guerrero; M. Guidetti; A. Maiorano; F. Mantovani; Enzo Marinari; V. Martin-Mayor; A. Muoz-Sudupe; D. Navarro; Giorgio Parisi; S. Perez-Gaviro; Mauro Rossi; J. J. Ruiz-Lorenzo; Sebastiano Fabio Schifano; D. Sciretti; A. Tarancón; R. Tripiccione; J.L. Velasco; D. Yllanes; Gianpaolo Zanier
Janus is a modular, massively parallel, and reconfigurable FPGA-based computing system. Each Janus module has one computational core and one host. Janus is tailored to, but not limited to, the needs of a class of hard scientific applications characterized by regular code structure, unconventional data-manipulation requirements, and databases of only a few megabits. The authors discuss this configurable system's architecture and focus on its use for Monte Carlo simulations of statistical mechanics, as Janus performs impressively on this class of application.
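The kind of workload Janus accelerates can be sketched with a single-spin-flip Metropolis simulation. The toy below uses a small 2D Ising model with uniform couplings (Janus itself targets 3D spin-glass models with quenched random couplings; this is the same algorithm family, simplified for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
L, beta = 16, 1.0            # lattice size and inverse temperature
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins, beta):
    """One Metropolis sweep: propose a flip at every site in turn."""
    for i in range(L):
        for j in range(L):
            nb = (spins[(i+1) % L, j] + spins[(i-1) % L, j]
                  + spins[i, (j+1) % L] + spins[i, (j-1) % L])
            dE = 2 * spins[i, j] * nb          # energy cost of flipping
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
    return spins

for _ in range(200):
    spins = sweep(spins, beta)

# Energy per spin (periodic bonds counted once); the ground state is -2.
E = -(spins * (np.roll(spins, 1, axis=0)
               + np.roll(spins, 1, axis=1))).sum() / L**2
print(f"energy per spin ~ {E:.2f}")
```

At beta = 1.0, well below the 2D critical temperature, the lattice orders and the energy per spin approaches -2; hardware like Janus wins by updating many such sites and many independent replicas concurrently.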
Physics of Fluids | 2005
Enrico Calzavarini; Detlef Lohse; Federico Toschi; R. Tripiccione
The Ra and Pr number scaling of the Nusselt number Nu, the Reynolds number Re, the temperature fluctuations, and the kinetic and thermal dissipation rates is studied for (numerical) homogeneous Rayleigh–Bénard turbulence, i.e., Rayleigh–Bénard turbulence with periodic boundary conditions in all directions and a volume forcing of the temperature field by a mean gradient. This system serves as a model for the bulk of Rayleigh–Bénard flow and therefore as a model for the so-called "ultimate regime of thermal convection." With respect to the Ra dependence of Nu and Re we confirm our earlier results [D. Lohse and F. Toschi, "The ultimate state of thermal convection," Phys. Rev. Lett. 90, 034502 (2003)], which are consistent with the Kraichnan theory [R. H. Kraichnan, "Turbulent thermal convection at arbitrary Prandtl number," Phys. Fluids 5, 1374 (1962)] and the Grossmann–Lohse (GL) theory [S. Grossmann and D. Lohse, "Scaling in thermal convection: A unifying view," J. Fluid Mech. 407, 27 (2000); "Thermal convection for large Prandtl number," Phys. Rev. Lett. 86, 3316 (2001); "Prandtl and Rayleigh number dependence of the Reynolds number in turbulent thermal convection," Phys. Rev. E 66, 016305 (2002); "Fluctuations in turbulent Rayleigh–Bénard convection: The role of plumes," Phys. Fluids 16, 4462 (2004)], both of which predict Nu ∼ Ra^{1/2} and Re ∼ Ra^{1/2}. However, the Pr dependence within these two theories is different. Here we show that the numerical data are consistent with the GL theory: Nu ∼ Pr^{1/2}, Re ∼ Pr^{−1/2}. For the thermal and kinetic dissipation rates we find ϵ_θ/(κΔ²L⁻²) ∼ (Re Pr)^{0.87} and ϵ_u/(ν³L⁻⁴) ∼ Re^{2.77}, both near (but not fully consistent with) the bulk-dominated behavior, whereas the temperature fluctuations do not depend on Ra and Pr. Finally, the dynamics of the heat transport is studied and put into the context of a recent theoretical finding by Doering et al. ["Comment on ultimate state of thermal convection" (private communication)].
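Scaling exponents such as Nu ∼ Ra^{1/2} are typically extracted from simulation output by a least-squares fit in log-log coordinates. The sketch below applies the procedure to synthetic Nu(Ra) data with a known exponent of 0.5 (the data is invented purely to illustrate the fit, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic Nu(Ra) samples over four decades: a pure power law with
# exponent 0.5, prefactor 0.1 (both assumed), plus 2% lognormal noise.
Ra = np.logspace(5, 9, 20)
Nu = 0.1 * Ra**0.5 * np.exp(0.02 * rng.standard_normal(Ra.size))

# A power law Nu = A * Ra^gamma is a straight line in log-log space,
# so an ordinary linear fit recovers the exponent gamma as the slope.
slope, intercept = np.polyfit(np.log(Ra), np.log(Nu), 1)
print(f"fitted exponent: {slope:.3f}")
```

With only 2% scatter over four decades of Ra, the fitted slope lands very close to the input value 0.5; distinguishing nearby candidate exponents in real data is harder precisely because the accessible Ra range is narrower and the scatter larger.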
Nuclear Physics | 1989
P. Bacilieri; L. Fonti; E. Remiddi; G.M. Todesco; M Bernaschi; S. Cabasino; N. Cabibbo; L.A. Fernández; Enzo Marinari; P. Paolucci; Giorgio Parisi; G. Salina; A. Tarancón; F. Coppola; Maria Paola Lombardo; E. Simeone; R. Tripiccione; G. Fiorentini; A. Lai; P. A. Marchesini; F. Marzano; F. Rapuano; W. Tross; R. W. Rusack
We compute the QCD hadronic mass spectrum in quenched lattice QCD at β=5.7 with small statistical and systematic errors by using the APE computer.
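Hadron masses are extracted from the exponential decay of Euclidean two-point correlators; a standard diagnostic is the effective mass m_eff(t) = ln(C(t)/C(t+1)), which plateaus at the ground-state mass once excited states have died away. The correlator below is synthetic (a ground state plus one excited state, with masses chosen only for illustration):

```python
import numpy as np

# Toy two-point correlator: C(t) = A0*exp(-m0*t) + A1*exp(-m1*t),
# with assumed masses m0, m1 in lattice units.
t = np.arange(0, 20)
m0, m1 = 0.5, 1.2
C = np.exp(-m0 * t) + 0.5 * np.exp(-m1 * t)

# Effective mass: m_eff(t) = ln(C(t) / C(t+1)).
m_eff = np.log(C[:-1] / C[1:])
print("effective mass at large t:", m_eff[-1])
```

At small t the excited state biases m_eff upward; by t ≈ 18 the contamination is exponentially suppressed and m_eff sits on the m0 = 0.5 plateau, which is the fit window a spectrum calculation would use.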
International Journal of Modern Physics C | 1993
A. Bartoloni; G. Bastianello; C. Battista; S. Cabasino; F. Marzano; P.S. Paolucci; J. Pech; F. Rapuano; Emanuele Panizzi; R. Sarno; G.M. Todesco; M. Torelli; W. Tross; P. Vicini; N. Cabibbo; A. Fucci; R. Tripiccione
APE100 processors are based on a simple Single Instruction Multiple Data architecture optimized for the simulation of Lattice Field Theories or other complex physical systems. This paper describes the hardware implementation of the first APE100 machine.
Physical Review E | 2002
Enrico Calzavarini; Federico Toschi; R. Tripiccione
We present results from high-resolution, high-statistics direct numerical simulations of a three-dimensional convective cell. We test the fundamental physical picture of the presence of both a Bolgiano–Obukhov-like and a Kolmogorov-like regime. We find that the dimensional predictions for these two distinct regimes (characterized, respectively, by an active and a passive role of the temperature field) are consistent with our analysis.
Computing in Science and Engineering | 2006
F. Belletti; Sebastiano Fabio Schifano; R. Tripiccione; François Bodin; Ph. Boucaud; J. Micheli; O. Pene; N. Cabibbo; S. de Luca; A. Lonardo; D Rossetti; P. Vicini; M. Lukyanov; L. Morin; N. Paschedag; H. Simma; V. Morenas; Dirk Pleiter; F. Rapuano
apeNEXT is the latest in the APE collaboration's series of parallel computers for computationally intensive calculations such as lattice quantum chromodynamics. The authors describe the computer's architectural choices, which have been shaped by almost two decades of collaboration activity.
International Journal of High Speed Computing | 1993
C. Battista; S. Cabasino; F. Marzano; Pier Stanislao Paolucci; J. Pech; Federico Rapuano; R. Sarno; Gian Marco Todesco; Mario Torelli; W. Tross; P. Vicini; N. Cabibbo; Enzo Marinari; Giorgio Parisi; G. Salina; Filippo del Prete; Adriano Lai; Maria Paola Lombardo; R. Tripiccione; Adolfo Fucci
We describe APE-100, a SIMD, modular parallel processor architecture for large-scale scientific computation. The largest configuration that will be implemented in the present design will deliver a peak speed of 100 Gflops. This performance is required, for instance, for high-precision computations in Quantum Chromodynamics, for which APE-100 is very well suited.