Lorenzo Mossucca
Istituto Superiore Mario Boella
Publications
Featured research published by Lorenzo Mossucca.
IEEE International Conference on Cloud Computing Technology and Science | 2016
Ivana Zinno; Lorenzo Mossucca; S. Elefante; C. De Luca; Valentina Casola; Francesco Casu; R. Lanari
We present a case study on the migration to a Cloud Computing environment of the advanced differential synthetic aperture radar interferometry (DInSAR) technique referred to as Small BAseline Subset (SBAS), which is widely used for the investigation of Earth surface deformation phenomena. In particular, we focus on the parallel SBAS algorithmic solution, namely P-SBAS, which produces mean deformation velocity maps and the corresponding displacement time series from a temporal sequence of radar images by exploiting distributed computing architectures. The Cloud migration is carried out by encapsulating the overall P-SBAS application in virtual machines running on the Cloud; moreover, the Cloud resource provisioning and configuration phases are fully automated. This approach allows us to preserve the P-SBAS parallelization strategy and to straightforwardly evaluate its performance within a Cloud environment by comparing it with that achieved on an in-house HPC cluster. The presented results were obtained by using the Amazon Elastic Compute Cloud (EC2) of Amazon Web Services (AWS) to process SAR datasets collected by the ENVISAT satellite; they show that, thanks to the availability and flexibility of Cloud resources, large DInSAR data volumes can be processed with the P-SBAS algorithm in short time frames and at reduced cost. As a case study, the mean deformation velocity map of the southern California area was generated by processing 172 ENVISAT images. Using 32 EC2 instances, this processing took less than 17 hours to complete and cost about USD 850. Considering the available PB-scale archives of SAR data and the huge SAR data flow expected from the recently launched (April 2014) Sentinel-1A and the forthcoming Sentinel-1B satellites, the exploitation of Cloud Computing solutions is particularly relevant because it makes it possible to provide Cloud-based multi-user services that allow scientists worldwide to quickly process SAR data and to manage and access the resulting DInSAR products.
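As a rough illustration of the automated provisioning phase described above, the following minimal sketch launches a pool of identical worker VMs with the AWS SDK for Python. It is not the authors' actual tooling, and the AMI ID, instance type, region, and key name are placeholder assumptions.

```python
# Hypothetical sketch of automated Cloud provisioning for a P-SBAS-like run:
# launch a fixed number of identical worker VMs from a pre-built image and
# wait until they are running. All identifiers below are placeholders.
import boto3

N_WORKERS = 32                  # number of EC2 instances, as in the case study
AMI_ID = "ami-XXXXXXXX"         # placeholder: image with the processing codes installed
INSTANCE_TYPE = "c3.8xlarge"    # placeholder compute-optimized instance type

ec2 = boto3.resource("ec2", region_name="us-west-2")

instances = ec2.create_instances(
    ImageId=AMI_ID,
    InstanceType=INSTANCE_TYPE,
    MinCount=N_WORKERS,
    MaxCount=N_WORKERS,
    KeyName="psbas-key",        # placeholder key pair
)

# Block until every worker is up, then collect addresses for the job dispatcher.
for inst in instances:
    inst.wait_until_running()
    inst.reload()
print([inst.private_ip_address for inst in instances])
```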
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2015
Ivana Zinno; Stefano Elefante; Lorenzo Mossucca; Claudio De Luca; Michele Manunta; Riccardo Lanari; Francesco Casu
In this work we present a first performance assessment of the Parallel Small BAseline Subset (P-SBAS) algorithm, for the generation of Differential Synthetic Aperture Radar (SAR) Interferometry (DInSAR) deformation maps and time series, which has been migrated to a Cloud Computing (CC) environment. In particular, we investigate the scalability of the P-SBAS algorithm by processing a selected ENVISAT ASAR image time series, which we use as a benchmark, and by exploiting the Amazon Web Services (AWS) CC platform. The presented analysis shows a very good match between the theoretical and experimental P-SBAS performances achieved within the CC environment. Moreover, the obtained results demonstrate that the implemented P-SBAS Cloud migration is able to process ENVISAT SAR image time series in a short time (less than 7 h) and at low cost (about USD 200). The P-SBAS Cloud scalability is also compared to that achieved on an in-house High Performance Computing (HPC) cluster, showing that nearly no overhead is introduced by the presented Cloud solution. As a further outcome, the performed analysis allows us to identify the major bottlenecks that can hamper P-SBAS performance within a CC environment, with a view to processing the very large SAR data flows coming from the existing COSMO-SkyMed or the upcoming SENTINEL-1 constellation. This work represents a relevant step toward the challenging Earth Observation scenario focused on the joint exploitation of advanced DInSAR techniques and CC environments for the massive processing of Big SAR Data.
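To make the scalability bookkeeping behind such an assessment concrete, here is a minimal sketch that derives speedup and parallel efficiency from wall-clock timings; the timing values are invented for illustration and are not the paper's measurements.

```python
# Speedup S(n) = T(1)/T(n) and efficiency E(n) = S(n)/n from wall-clock times.
# The timings below are made-up example numbers, not the paper's data.
timings = {1: 96.0, 4: 25.5, 8: 13.4, 16: 7.3, 32: 4.2}  # workers -> hours

t1 = timings[1]
for n, tn in sorted(timings.items()):
    speedup = t1 / tn
    efficiency = speedup / n
    print(f"{n:>2} workers: speedup {speedup:5.2f}, efficiency {efficiency:4.2f}")
```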
Archive | 2012
Pietro Ruiu; Lorenzo Mossucca; Matteo Alessandro Francavilla; Francesca Vipiana
The accurate and efficient solution of Maxwell's equations is the problem addressed by the scientific discipline called Computational ElectroMagnetics (CEM). Many macroscopic phenomena in a great number of fields are governed by this set of differential equations: electronics, geophysics, medical and biomedical technologies, and virtual EM prototyping, besides the traditional antenna and propagation applications. Therefore, many efforts are focused on the development of new and more efficient approaches to solving Maxwell's equations, and interest in CEM applications keeps growing. Several problems that were hard to tackle a few years ago can now be easily addressed thanks to the reliability and flexibility of new technologies, together with the increased computational power. This technological evolution opens the possibility of addressing large and complex tasks. Many of these applications aim to simulate electromagnetic behavior, for example in terms of input impedance and radiation pattern in antenna problems, or Radar Cross Section in scattering applications. Problems whose solution requires high accuracy, instead, call for full-wave analysis techniques, e.g., in the virtual prototyping context, where the objective is to obtain reliable simulations in order to minimize the number of measurements and, as a consequence, their cost. Other tasks require the analysis of complete structures (including a high number of details) by directly simulating a CAD model. This approach relieves the researcher of the burden of removing useless details, while maintaining the original complexity and taking all details into account. Unfortunately, it also implies: (a) high computational effort, due to the increased number of degrees of freedom, and (b) a worsening of the spectral properties of the linear system during complex analyses. The above considerations underline the need to identify appropriate information technologies that ease the achievement of solutions and speed up the required computations. The authors' analysis and experience suggest that Grid Computing techniques can be very useful for these purposes. Grids appear mainly in high performance computing environments, where hundreds of off-the-shelf nodes are linked together and work in parallel to solve problems that previously could only be addressed sequentially or by using supercomputers. Grid Computing is a technique developed to elaborate enormous amounts of data; it enables large-scale resource sharing to solve problems by exploiting distributed scenarios. The main advantage of Grids lies in parallel computing: if a problem can be split into smaller tasks that can be executed independently, its solution can be computed considerably faster. To exploit this advantage, it is necessary to identify a technique able to split the original electromagnetic task into a set of smaller subproblems. The Domain Decomposition (DD) technique, based on the block generation algorithm introduced in Matekovits et al. (2007) and Francavilla et al. (2011), perfectly addresses these requirements (see Section 3.4 for details). In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to nodes that belong to the Grid. The set of nodes is composed of physical machines and virtualized ones, a feature that enables great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient Grid usage; indeed, the presented architecture can be used by different users running different applications.
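As a toy illustration of the block-distribution idea (not the chapter's actual middleware), the independent DD blocks can be farmed out to free nodes with a worker pool; solve_block is a hypothetical stand-in for the electromagnetic solver.

```python
# Toy sketch: the blocks produced by Domain Decomposition are independent,
# so they can be distributed to whichever Grid nodes (physical or virtual)
# are free. solve_block() is a hypothetical stand-in for the actual solver.
from concurrent.futures import ProcessPoolExecutor

def solve_block(block_id: int) -> tuple[int, float]:
    # Placeholder for solving one DD sub-problem; returns a dummy result.
    return block_id, float(block_id) ** 0.5

if __name__ == "__main__":
    blocks = range(64)  # hypothetical number of DD blocks
    with ProcessPoolExecutor(max_workers=8) as pool:  # workers stand in for Grid nodes
        results = dict(pool.map(solve_block, blocks))
    print(len(results), "blocks solved")
```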
IEEE International Forum on Research and Technologies for Society and Industry Leveraging a Better Tomorrow | 2015
Simone Ciccia; Giorgio Giordanengo; Sergio Arianos; Flavio Renga; Pietro Ruiu; Alberto Scionti; Lorenzo Mossucca; Giuseppe Vecchi
Nowadays wireless applications are widespread, and the demand for smart antenna technology is growing rapidly. Although a large variety of effective algorithms to control antennas exists, they fall short of the requirements of next-generation smart devices for industrial and societal applications, which demand integration into compact, low-cost, and low-power architectures. In this work we present a Wi-Fi device coupled with an antenna, where the receiver is able to adapt to a changing signal environment, providing constant and reliable connectivity. The design follows a strict low-power approach: a single USB-powered front end connected to an embedded, low-energy-consumption platform. In addition, we arrive at a low-cost solution that does not require any external hardware. Decoding and the antenna control algorithm are software-defined, while antenna beam steering is obtained by means of voltage-biased varactor diodes, suitably employed through hybrid couplers in a phase-shift configuration.
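A hypothetical sketch of the software-defined steering loop may help: sweep the varactor bias, which sets the phase shift and hence the beam direction, and keep the setting with the strongest received signal. The hardware hooks set_bias_voltage and read_rssi are assumptions, not a real driver API.

```python
# Hypothetical control loop for software-defined beam steering: sweep the
# varactor bias voltage and lock onto the setting with the best RSSI.
# set_bias_voltage() and read_rssi() are assumed hardware hooks.
import time

def set_bias_voltage(volts: float) -> None:
    pass  # assumed DAC hook; replace with real hardware access

def read_rssi() -> float:
    return -60.0  # assumed RSSI readout; replace with real measurement

def steer(bias_grid=(0.0, 1.0, 2.0, 3.0, 4.0), settle_s=0.05) -> float:
    best_v, best_rssi = bias_grid[0], float("-inf")
    for v in bias_grid:
        set_bias_voltage(v)
        time.sleep(settle_s)  # let the varactors and the RSSI estimate settle
        rssi = read_rssi()
        if rssi > best_rssi:
            best_v, best_rssi = v, rssi
    set_bias_voltage(best_v)  # lock onto the best beam setting
    return best_v
```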
International Journal of Applied Mathematics and Computer Science | 2011
Lorenzo Mossucca; Manuela Cucca; Riccardo Notarpietro
Data intensive scientific analysis with grid computing
At the end of September 2009, a new Italian GPS receiver for radio occultation was launched from the Satish Dhawan Space Center (Sriharikota, India) on the Indian Remote Sensing OCEANSAT-2 satellite. The Italian Space Agency has engaged a set of Italian universities and research centers to implement the overall radio occultation processing chain. After a brief description of the adopted algorithms, which can be used to characterize temperature, pressure, and humidity, the contribution focuses on a method for automatically processing these data, based on the use of a distributed architecture. This paper thus presents a possible application of grid computing to scientific research.
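The retrieval step mentioned above rests on the standard Smith-Weintraub relation linking refractivity to pressure, temperature, and water vapor; the small helper below just evaluates it, as a reminder of what connects the retrieved refractivity to the thermodynamic profiles (the actual processing chain is of course far more involved).

```python
# Smith-Weintraub refractivity: N = 77.6 * P/T + 3.73e5 * e/T**2,
# with pressure P and water vapor partial pressure e in hPa, temperature T in K.
def refractivity(P_hpa: float, T_k: float, e_hpa: float) -> float:
    return 77.6 * P_hpa / T_k + 3.73e5 * e_hpa / T_k**2

print(refractivity(1013.25, 288.15, 10.0))  # near-surface example values
```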
International Conference on High Performance Computing and Simulation | 2010
Lorenzo Mossucca; M. Molinaro; Giovanni Emilio Perona; Manuela Cucca; Riccardo Notarpietro
In September 2009, the Indian Remote Sensing OCEANSAT-2 satellite was launched from Sriharikota (India). OCEANSAT-2 carries on board a third payload, called ROSA (Radio Occultation Sounder of the Atmosphere) and developed by the Italian Space Agency, which gives very accurate information about the vertical structure of the atmosphere through the exploitation of the GPS Radio Occultation technique. This is a remote sensing technique that aims to characterize the Earth's atmosphere: the GPS signal observed as it emerges from the Earth's limb is refracted by the atmosphere and, through inversion techniques, it is possible to retrieve atmospheric refractivity profiles, which in turn may be used to obtain temperature, pressure, and humidity profiles. Considering the great deal of data to process, this paper presents an architectural solution based on Grid Computing. We focus on how we parallelized the Radio Occultation processing chain, in order to evaluate the gains in performance and time obtained with the distributed architecture and to support a collaborative engineering approach.
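Since each occultation event can be inverted independently, the parallelization pattern reduces to distributing event files over workers; in this sketch invert_event is a hypothetical stand-in for the ROSA inversion chain, and the directory layout is invented.

```python
# Sketch of the parallelization pattern: each radio occultation event is
# inverted independently, so event files are simply distributed over
# worker processes. invert_event() stands in for the real processing chain.
from multiprocessing import Pool
from pathlib import Path

def invert_event(event_file: Path) -> str:
    # Placeholder: invert one occultation event, return the profile path.
    return f"{event_file.stem}.profile"

if __name__ == "__main__":
    event_files = sorted(Path("incoming").glob("*.occ"))  # hypothetical layout
    with Pool() as pool:
        profiles = pool.map(invert_event, event_files)
    print(f"{len(profiles)} profiles produced")
```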
Complex, Intelligent and Software Intensive Systems | 2015
Lorenzo Mossucca; Ivana Zinno; S. Elefante; C. De Luca; Klodiana Goga; Francesco Casu; R. Lanari
Scientific applications are often characterized by complex workflows and large datasets to manage. Usually, these applications run in dedicated high performance computing centers with low-latency interconnections, which entail a considerable initial cost. Public and private cloud computing environments, thanks to features such as customized computing environments, flexibility, and elasticity, represent a valid alternative to HPC clusters for minimizing costs and optimizing processing. In this paper, the migration of an advanced Differential Synthetic Aperture Radar Interferometry (DInSAR) methodology for the investigation of Earth surface deformation phenomena to the Amazon Web Services (AWS) cloud computing environment is presented. This technique, referred to as the Parallel Small BAseline Subset (P-SBAS) algorithm, produces mean deformation velocity maps and the corresponding displacement time series from a temporal sequence of radar images. Moreover, we present an experimental analysis that evaluates the parallel performance achieved by the P-SBAS algorithm within the AWS cloud, exploiting two different instance families and taking into account different I/O and network bandwidth configurations.
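The kind of comparison performed here can be caricatured with a tiny benchmarking harness: run the same workload under each configuration and tabulate wall-clock times. run_workload and the configuration labels are placeholders, not the actual instance families tested.

```python
# Illustrative benchmark harness: time the same workload across
# configurations. run_workload() is a placeholder for one processing run.
import time

def run_workload(config: str) -> None:
    time.sleep(0.1)  # stand-in for one full processing run

for config in ["compute-optimized", "storage-optimized"]:  # example labels
    start = time.perf_counter()
    run_workload(config)
    print(f"{config}: {time.perf_counter() - start:.2f} s")
```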
Complex, Intelligent and Software Intensive Systems | 2013
Lorenzo Mossucca; Luca Spogli; Giuseppe Caragnano; Vincenzo Romano; G. De Franceschi; Lucilla Alfonsi; Eleftherios Plakidis
The ionosphere is the single largest contributor to the GNSS (Global Navigation Satellite System) error budget, and ionospheric scintillation (IS) in particular is one of its most harmful effects. The Ground Based Scintillation Climatology (GBSC) has recently been developed by INGV as a software tool to identify the main areas of the ionosphere in which IS is more likely to occur. Due to the high computational load required, GBSC is currently used only for scientific, offline studies and not as a real-time service. Recently, a collaboration was initiated between ISMB and INGV in order to identify which cloud service model (IaaS, PaaS, or SaaS) is most suitable for implementing the GBSC technique within a cloud computing environment. The aims of this joint effort are twofold: i) to optimize the computational resource allocation strategy for the GBSC service, and ii) to fine-tune the algorithm for dynamic, real-time application, towards a service contributing to high-precision professional applications for GNSS-reliant business sectors. Preliminary results of the implementation of GBSC within the cloud environment are shown.
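As a rough sketch of the climatology idea behind GBSC, scintillation-index samples (e.g., S4) can be binned over a coordinate grid and mapped as the percentage of samples exceeding a threshold in each bin; the data, grid, and threshold below are invented for illustration and do not reproduce the actual GBSC algorithm.

```python
# Toy climatology: percentage occurrence of S4 above a threshold per
# latitude bin. Data, bin edges, and threshold are invented.
import numpy as np

lat = np.random.uniform(50, 80, 10_000)  # fake receiver-track latitudes
s4 = np.random.rayleigh(0.15, 10_000)    # fake S4 index samples

bins = np.arange(50, 81, 2.5)
idx = np.digitize(lat, bins)
occurrence = [
    100.0 * np.mean(s4[idx == i] > 0.25) if np.any(idx == i) else np.nan
    for i in range(1, len(bins))
]
print(occurrence)
```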
Cluster Computing and the Grid | 2012
Lorenzo Mossucca; Andrea Acquaviva; Francesco Abate; Rosalba Provenzano
Massive Parallel Sequencing is a term used to describe several revolutionary approaches to DNA sequencing, the so-called Next Generation Sequencing technologies. These technologies generate millions of short sequence fragments in a single run and can be used to measure levels of gene expression and to identify novel splice variants of genes, allowing more accurate analysis. The proposed solution is novel in two respects: first, an optimization of the read mapping algorithm has been designed in order to parallelize processing; second, an architecture has been implemented that consists of a Grid platform composed of physical nodes, a Virtual platform composed of virtual nodes set up on demand, and a scheduler that integrates the two platforms.
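One minimal way to picture the split step is chunking a FASTQ file of short reads so that each chunk becomes an independent mapping job for a physical or virtual node; this is a sketch of the pattern, not the authors' optimized implementation.

```python
# Sketch of the split-and-map pattern: cut a FASTQ file into chunks that
# independent nodes can align in parallel; the downstream mapper is not shown.
from pathlib import Path

def split_fastq(path: Path, reads_per_chunk: int = 250_000) -> list[list[str]]:
    chunks, current, record = [], [], []
    with open(path) as fh:
        for line in fh:
            record.append(line)
            if len(record) == 4:          # a FASTQ record spans 4 lines
                current.append("".join(record))
                record = []
                if len(current) == reads_per_chunk:
                    chunks.append(current)
                    current = []
    if current:
        chunks.append(current)
    return chunks                          # each chunk becomes one Grid job

# Example (hypothetical input file): chunks = split_fastq(Path("sample.fastq"))
```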
Complex, Intelligent and Software Intensive Systems | 2015
Giuseppe Caragnano; Lorenzo Mossucca
Services based on cloud computing technologies have become increasingly widespread and pervasive. In recent years the number of cloud providers has increased, and they have a growing need to differentiate themselves in the market with more advanced services. The most common services offered by cloud platforms provide storage on demand and ensure capacity and flexibility for customers through innovative models. As a result, the technologies for the management and storage of data are undergoing major changes. New methodologies such as Software-Defined Storage and Storage Orchestration are crucial to better address the design and development of cloud architectures in last-generation data centers. This paper analyzes the storage management methods and software tools that appear most promising in the short term, as they improve the technological aspects of storage systems.
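As a toy illustration of the policy-driven placement at the heart of Software-Defined Storage (not any specific tool discussed in the paper), the decision of where data lives can be made in software from declared requirements, decoupled from the hardware; tier names and thresholds are invented.

```python
# Toy Software-Defined Storage placement policy: choose a tier from the
# declared workload requirements. Tier names and thresholds are invented.
def choose_tier(iops_required: int, capacity_gb: int) -> str:
    if iops_required > 10_000:
        return "nvme-flash"      # hot, performance-oriented tier
    if capacity_gb > 10_000:
        return "object-archive"  # cold, capacity-oriented tier
    return "hdd-pool"            # default general-purpose tier

print(choose_tier(iops_required=25_000, capacity_gb=500))  # -> nvme-flash
```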