Publication


Featured research published by Marco Comparato.


Publications of the Astronomical Society of the Pacific | 2007

Visualization, exploration, and data analysis of complex astrophysical data

Marco Comparato; Ugo Becciani; Alessandro Costa; Bjorn Larsson; Bianca Garilli; Claudio Gheller; John Taylor

In this paper, we show how advanced visualization tools can help researchers investigate and extract information from data. The focus is on VisIVO, a novel open-source graphics application that blends high-performance multidimensional visualization techniques with up-to-date technologies for cooperating with other applications and accessing remote, distributed data archives. VisIVO supports the standards defined by the International Virtual Observatory Alliance (IVOA), making it interoperable with VO data repositories. The paper describes the basic technical details and features of the software and devotes a large section to showing how VisIVO can be used in several scientific cases.


Computer Physics Communications | 2007

FLY: MPI-2 high resolution code for LSS cosmological simulations

Ugo Becciani; Vincenzo Antonuccio-Delogu; Marco Comparato

Cosmological simulations of structure and galaxy formation have played a fundamental role in the study of the origin, formation, and evolution of the Universe. These studies have improved enormously with the use of supercomputers and parallel systems and, more recently, grid-based systems and Linux clusters. Here we present the new version of the parallel tree N-body code FLY, which runs on a PC Linux cluster using the one-sided communication paradigm of MPI-2, and we show the performance obtained. FLY is included in the Computer Physics Communications Program Library. This new version was developed using the Linux Cluster of CINECA, an IBM cluster with 1024 Intel Xeon Pentium IV 3.0 GHz processors. The results show that it is possible to run a 64 million particle simulation in less than 15 minutes per time step, and that the code scales with the number of processors. This leads us to propose FLY as a code for very large N-body simulations with more than 10^9 particles at the high resolution of a pure tree code. The new version of FLY is available from the CPC Program Library, http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0.html [U. Becciani, M. Comparato, V. Antonuccio-Delogu, Comput. Phys. Comm. 174 (2006) 605].
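
Since the abstract centres on MPI-2 one-sided communication, a minimal C sketch of that pattern may help: one process reads another's particle data via MPI_Win_create, MPI_Win_fence, and MPI_Get, with no matching receive on the owner's side. The array names are illustrative, not FLY's actual data structures.

    #include <mpi.h>
    #include <stdio.h>

    #define N_LOCAL 4                 /* particles per rank (illustrative) */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        double pos[N_LOCAL];          /* locally owned particle positions */
        double remote[N_LOCAL];       /* buffer for a neighbour's positions */
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        for (int i = 0; i < N_LOCAL; i++)
            pos[i] = rank + 0.1 * i;  /* dummy data */

        /* Expose the local array in a window, as FLY does for positions,
           velocities, accelerations and tree cells. */
        MPI_Win_create(pos, N_LOCAL * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* Read the next rank's positions without its active participation. */
        int target = (rank + 1) % nprocs;
        MPI_Win_fence(0, win);
        MPI_Get(remote, N_LOCAL, MPI_DOUBLE, target, 0,
                N_LOCAL, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);

        printf("rank %d read %.1f from rank %d\n", rank, remote[0], target);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Fence synchronization is the simplest MPI-2 access mode; FLY's actual scheme, with dynamic load balancing and global counters, is more elaborate.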


Publications of the Astronomical Society of the Pacific | 2008

The TVO Archive for Cosmological Simulations: Web Services and Architecture

Alessandro Costa; P. Manzato; Ugo Becciani; Marco Comparato; V. Costa; F. Gasparo; Claudio Gheller; A. Grillo; M. Molinaro; F. Pasian; G. Taffoni

In order to offer intuitive but effective access to a growing number of cosmological simulations, we have developed the Italian Theoretical Virtual Observatory (ITVO) project, as described by Pasian and colleagues in 2006. In this work we describe two Web portals as two ways to access and share complex data coming from numerical astrophysical simulations. We present a set of Web services offering, among others, the Simple Numeric Access Protocol (SNAP), as described by Gheller and colleagues in 2006, and randomizers dealing with different data formats. Web services technology allows us to run a particular task (a SNAP job, for instance) close to its data, avoiding expensive data transfers.
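
To make the "move the task to the data" idea concrete, here is a small C sketch using libcurl to submit a job request to a hypothetical SNAP-style endpoint; the URL and its query parameters are invented for illustration, since the paper does not spell out the service interface.

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        /* Hypothetical endpoint and parameters: the real SNAP services
           described in the paper define their own interface. */
        const char *url =
            "http://example.org/snap/run?simulation=box64&snapshot=42";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        curl_easy_setopt(curl, CURLOPT_URL, url);
        /* The job runs server-side, next to the archive; only the
           (small) result travels back to the client. */
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }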


Proceedings of the International Astronomical Union | 2006

VisIVO: an interoperable visualisation tool for Virtual Observatory data

Ugo Becciani; Marco Comparato; Alessandro Costa; Claudio Gheller; Bjorn Larsson; F. Pasian; Riccardo Smareglia

We present VisIVO, a software tool for the visualisation and analysis of astrophysical data retrieved from the Virtual Observatory framework and of cosmological simulations. VisIVO is compliant with VO standards and supports the most important astronomical data formats, such as FITS, HDF5, and VOTable. Data can be retrieved by connecting directly to an available VO service (e.g., the VizieR Web Service) and loaded into local memory, where they can be further selected, visualised, and manipulated.
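
As an illustration of the kind of ingestion such a tool performs, the following C sketch reads one column of a FITS binary table with the standard CFITSIO library; the file name and column layout are placeholders, and this is not VisIVO's own code.

    #include <stdio.h>
    #include <stdlib.h>
    #include <fitsio.h>

    int main(void)
    {
        fitsfile *fptr;
        int status = 0;               /* CFITSIO convention: 0 means OK */
        int hdutype, anynul = 0;
        long nrows = 0;

        /* "catalog.fits" and its column layout are placeholders. */
        fits_open_file(&fptr, "catalog.fits", READONLY, &status);
        fits_movabs_hdu(fptr, 2, &hdutype, &status);  /* first extension */
        fits_get_num_rows(fptr, &nrows, &status);

        if (status == 0 && nrows > 0) {
            double *x = malloc(nrows * sizeof(double));
            /* Read column 1 (say, an X coordinate) into memory, where a
               visualiser can select, filter and render it. */
            fits_read_col(fptr, TDOUBLE, 1, 1, 1, nrows, NULL,
                          x, &anynul, &status);
            if (status == 0)
                printf("read %ld rows, first value %g\n", nrows, x[0]);
            free(x);
        }

        fits_close_file(fptr, &status);
        fits_report_error(stderr, status);  /* silent when status == 0 */
        return status;
    }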


Computer Physics Communications | 2006

FLY MPI-2: a parallel tree code for LSS

Ugo Becciani; Marco Comparato; Vincenzo Antonuccio-Delogu

New version program summary

Program title: FLY 3.1
Catalogue identifier: ADSC_v2_0
Licensing provisions: yes
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
No. of lines in distributed program, including test data, etc.: 158 172
No. of bytes in distributed program, including test data, etc.: 4 719 953
Distribution format: tar.gz
Programming language: Fortran 90, C
Computer: Beowulf cluster, PC, MPP systems
Operating system: Linux, AIX
RAM: 100M words
Catalogue identifier of previous version: ADSC_v1_0
Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159
Does the new version supersede the previous version?: yes

Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force.

Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986).

Reasons for the new version: The new version of FLY is implemented using the MPI-2 standard; the distributed version 3.1 was developed using the MPICH2 library on a PC Linux cluster. Today, FLY's performance places it among the most powerful parallel codes for tree N-body simulations. Another important new feature is an interface to hydrodynamical PARAMESH-based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales, so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. Building an interface between two codes with different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for dark matter (DM) evolution, coupled to a code specialized for hydrodynamical components that uses a PARAMESH block structure.

Summary of revisions: The parallel communication scheme was completely changed. The new version adopts the MPICH2 library, so FLY can now be executed on any Unix system with an MPI-2 standard library. The main data structures are declared in a module of FLY (the fly_h.F90 routine). FLY creates an MPI window object for one-sided communication for each shared array, with a call like the following:

    CALL MPI_WIN_CREATE(pos, size, real8, MPI_INFO_NULL, MPI_COMM_WORLD, win_pos, ierr)

The following main window objects are created:
• win_pos, win_vel, win_acc: particle positions, velocities, and accelerations;
• win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping: cell positions, masses, quadrupole momenta, tree structure, and grouping cells.
Other windows are created for dynamic load balancing and global counters.

Restrictions: The program uses the leapfrog integration scheme, but this can be changed by the user.

Unusual features: FLY uses the MPI-2 standard; the MPICH2 library on Linux systems was adopted. To run this version of FLY, the working directory must be shared among all the processors that execute FLY.

Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide, and a Reference manuscript.

Running time: Performance tests were run on the IBM Linux Cluster 1350 at CINECA: 512 nodes with 2 processors per node and 2 GB RAM per processor. Processor type: Intel Xeon Pentium IV 3.0 GHz with 512 KB cache (128 nodes have Nocona processors). Internal network: Myricom LAN card, "C" and "D" versions. Operating system: Linux SuSE SLES 8. The code was compiled with the mpif90 compiler version 8.1 and basic optimization options, so that the figures can be usefully compared with other generic clusters. The table below shows the elapsed time in seconds per time step for a simulation with 64 million particles on this system.

Processors   Elapsed time (s)
16           2630.98
24           1790.89
32           1427.42
48           1015.41
64            822.64

From 16 to 64 processors the elapsed time drops from 2630.98 s to 822.64 s, a relative speedup of about 3.2 against an ideal 4, i.e. roughly 80% parallel efficiency.
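
The Restrictions field above notes that FLY advances particles with a leapfrog integrator. A minimal serial C sketch of one kick-drift-kick step follows; the acceleration function is a stand-in for the Barnes-Hut tree force that FLY actually evaluates.

    #include <stdio.h>

    /* Placeholder acceleration: FLY computes this from the Barnes-Hut
       oct-tree; here a simple harmonic force stands in for it. */
    static double accel(double x) { return -x; }

    int main(void)
    {
        double x = 1.0, v = 0.0;      /* initial position and velocity */
        double dt = 0.01;             /* time step */

        for (int step = 0; step < 5; step++) {
            /* Kick-drift-kick leapfrog: second-order, time-reversible. */
            v += 0.5 * dt * accel(x); /* half kick */
            x += dt * v;              /* drift */
            v += 0.5 * dt * accel(x); /* half kick */
            printf("step %d: x=%f v=%f\n", step + 1, x, v);
        }
        return 0;
    }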


Archive | 2007

Binding Applications Together with PLASTIC

John D. Taylor; Thomas Boch; Marco Comparato; Mark Taylor; Noel Winstanley; Robert G. Mann


Archive | 2006

VisIVO: a VO-enabled tool for Scientific Visualization and Data Analysis

Ugo Becciani; Marco Comparato; Claudio Gheller


Archive | 2008

An Archive and Tools for Cosmological Simulations inside the Virtual Observatory

R. W. Argyle; P. S. Bunclark; Patrizia Manzato; M. Molinaro; F. Gasparo; Riccardo Smareglia; Giuliano Taffoni; F. Pasian; Ugo Becciani; Alessandro Costa; V. Costa; A. F. Grillo; Marco Comparato


Archive | 2010

VO Compliant Visualization of Theoretical Data

M. Molinaro; Patrizia Manzato; F. Pasian; Ugo Becciani; Anna Helena Reali Costa; Paolo Massimino; A. F. Grillo; Marco Comparato; Santi Cassisi; A. Pietrinferni; C. Gheller; Riccardo Brunino


Archive | 2009

Visual Data Exploration

Marco Comparato; Ugo Becciani

