Luisa Massari
University of Pavia
Publications
Featured research published by Luisa Massari.
Performance Evaluation | 2000
Maria Carla Calzarossa; Luisa Massari; Daniele Tessera
The performance of any type of system cannot be determined without knowing the workload, that is, the requests being processed. Workload characterization consists of a description of the workload by means of quantitative parameters and functions; the objective is to derive a model able to show, capture, and reproduce the behavior of the workload and its most important features.
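The quantitative parameters the abstract refers to can be illustrated with a minimal sketch (a synthetic service-time trace and illustrative statistics chosen here for the example, not the authors' tooling):

```python
import statistics

def characterize(service_times):
    """Summarize a workload trace with a few quantitative parameters."""
    mean = statistics.mean(service_times)
    stdev = statistics.pstdev(service_times)
    cv = stdev / mean  # coefficient of variation: a burstiness indicator
    p95 = sorted(service_times)[int(0.95 * len(service_times)) - 1]
    return {"mean": mean, "cv": cv, "p95": p95}

# Hypothetical trace of per-request service demands (seconds)
trace = [0.10, 0.12, 0.11, 0.09, 0.50, 0.13, 0.10, 0.11, 0.48, 0.12]
params = characterize(trace)
```

Parameters like these feed a workload model: requests with similar statistics can be grouped into classes whose behavior is then reproduced synthetically.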
IEEE Parallel & Distributed Technology: Systems & Applications | 1995
Maria Carla Calzarossa; Luisa Massari; Alessandro P. Merlo; Mario Pantano; Daniele Tessera
The Medea (MEasurements Description, Evaluation and Analysis) software tool provides a user-friendly environment for systematically applying workload characterization techniques to raw data produced by monitoring parallel programs. Medea's models are especially useful for program tuning and performance debugging, for testing alternative system configurations, and for supporting benchmarking studies.
ACM Computing Surveys | 2016
Maria Carla Calzarossa; Luisa Massari; Daniele Tessera
Workload characterization is a well-established discipline that plays a key role in many performance engineering studies. The large-scale social behavior inherent in the applications and services being deployed nowadays leads to rapid changes in workload intensity and characteristics and opens new challenging management and performance issues. A deep understanding of user behavior and workload properties and patterns is therefore compelling. This article presents a comprehensive survey of the state of the art of workload characterization by addressing its exploitation in some popular application domains. In particular, we focus on conventional web workloads as well as on the workloads associated with online social networks, video services, mobile apps, and cloud computing infrastructures. We discuss the peculiarities of these workloads and present the methodological approaches and modeling techniques applied for their characterization. The role of workload models in various scenarios (e.g., performance evaluation, capacity planning, content distribution, resource provisioning) is also analyzed.
Parallel Computing | 2004
Maria Carla Calzarossa; Luisa Massari; Daniele Tessera
Tuning and debugging the performance of parallel applications is an iterative process consisting of several steps dealing with identification and localization of inefficiencies, repair, and verification of the achieved performance. In this paper, we address the analysis of the performance of parallel applications from a methodological viewpoint with the aim of identifying and localizing inefficiencies. Our methodology is based on performance metrics and criteria that highlight the properties of the applications and the load imbalance and dissimilarities in the behavior of the processors. A few case studies illustrate the application of the methodology.
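One simple metric in the spirit of the load-imbalance criteria described above (an illustration under assumed per-processor timings, not the paper's exact metrics):

```python
def load_imbalance(busy_times):
    # Ratio of the busiest processor's time to the mean busy time,
    # minus one: 0.0 means perfectly balanced work, larger values
    # mean one processor dominates the execution.
    mean = sum(busy_times) / len(busy_times)
    return max(busy_times) / mean - 1.0

# Hypothetical per-processor busy times (seconds) on 4 processors
imbalance = load_imbalance([9.0, 10.0, 11.0, 14.0])
```

A value around 0.27 here signals that the slowest processor works roughly 27% longer than the average, pointing the tuning effort at that processor's workload.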
Archive | 2016
Maria Carla Calzarossa; Marco Luigi Della Vedova; Luisa Massari; Dana Petcu; Momin I. M. Tabash; Daniele Tessera
Despite the fast evolution of cloud computing, up to now the characterization of cloud workloads has received little attention. Nevertheless, a deep understanding of their properties and behavior is essential for an effective deployment of cloud technologies and for achieving the desired service levels. While the general principles applied to parallel and distributed systems are still valid, several peculiarities require the attention of both researchers and practitioners. The aim of this chapter is to highlight the most relevant characteristics of cloud workloads as well as identify and discuss the main issues related to their deployment and the gaps that need to be filled.
PERFORM'10 Proceedings of the 2010 IFIP WG 6.3/7.3 international conference on Performance Evaluation of Computer and Communication Systems: milestones and future challenges | 2010
Maria Carla Calzarossa; Luisa Massari
Web logs are an important source of information to describe and understand the traffic of the servers and its characteristics. The analysis of these logs is rather challenging because of the large volume of data and the complex relationships hidden in these data. Our investigation focuses on the analysis of the logs of two Web servers and identifies the main characteristics of their workload and the navigation profiles of crawlers and human users visiting the sites. The classification of these visitors has shown some interesting similarities and differences in terms of traffic intensity and its temporal distribution. In general, crawlers tend to re-visit the sites rather often, even though they seldom send bursts of requests, to reduce their impact on the servers' resources. The other clients are also characterized by periodic patterns that can be effectively represented by a few principal components.
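The crawler-versus-human distinction drawn from temporal behavior can be sketched with a toy heuristic (this is not the paper's classification method; the regularity threshold and gap values are assumptions for illustration):

```python
def classify_visitor(gaps):
    # Toy heuristic: crawlers pace requests at regular intervals
    # (low coefficient of variation of inter-request gaps), while
    # human sessions are bursty. The 0.5 threshold is arbitrary.
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    cv = var ** 0.5 / mean
    return "crawler" if cv < 0.5 else "human"

# A visitor requesting pages exactly once a minute
steady = classify_visitor([60.0] * 10)
# A visitor with short bursts separated by long idle periods
bursty = classify_visitor([1.0, 1.0, 1.0, 600.0, 1.0, 1.0, 1.0, 600.0])
```

A real study would replace this single statistic with richer features, such as the principal components of per-client traffic profiles mentioned in the abstract.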
IEEE International Conference on High Performance Computing, Data and Analytics | 1998
Maria Carla Calzarossa; Luisa Massari; Alessandro P. Merlo; Mario Pantano; Daniele Tessera
The performance of HPF codes is influenced by the characteristics of the parallel system and by the efficiency of the compilation system. Performance analysis has to take into account all these aspects. We present the integration of a compilation system with a performance analysis tool aimed at the evaluation of HPF+ codes. The analysis is carried out at the source level. The “costs” of the parallelization strategies applied by the compiler are also captured such that a comprehensive view of the performance is provided.
IEEE International Conference on High Performance Computing, Data and Analytics | 1996
Maria Carla Calzarossa; Luisa Massari; Alessandro P. Merlo; Daniele Tessera
The performance of parallel programs is influenced by the multiplicity of hardware and software components involved in their executions. Experimental approaches, where trace files collected at run-time by monitors are the basis of the analyses, allow a detailed evaluation of the performance. Quantitative as well as qualitative information related to the behavior of the programs is required. Medea is a parallel performance evaluation tool which provides various types of statistical and numerical techniques integrated with visualization facilities, such that both quantitative and qualitative descriptions of the programs are obtained. A large variety of studies dealing with tuning, performance debugging, and code optimization profitably benefit from Medea.
Advances in Social Networks Analysis and Mining | 2015
Derek Doran; Samir Yelne; Luisa Massari; Maria Carla Calzarossa; LaTrelle D. Jackson; Glen Moriarty
Internet and online-based social systems are rising as the dominant mode of communication in society. However, the public or semi-private environments in which most online communications operate do not make them suitable channels for speaking with others about personal or emotional problems. This has led to the emergence of online platforms for emotional support offering free, anonymous, and confidential conversations with live listeners. Yet very little is known about the way these platforms are utilized, and whether their features and design foster strong user engagement. This paper explores the utilization and the interaction features of hundreds of thousands of users on 7 Cups of Tea, a leading online platform offering online emotional support. It dissects the users' activity levels and the patterns by which they engage in conversation with each other, and uses machine learning methods to find factors promoting engagement. The study may be the first to measure activities and interactions in a large-scale online social system that fosters peer-to-peer emotional support.
Information Integration and Web-based Applications & Services | 2013
Maria Carla Calzarossa; Luisa Massari; Daniele Tessera
The traffic produced by the periodic crawling activities of Web robots often represents a good fraction of the overall website traffic, thus causing non-negligible effects on performance. Our study focuses on the traffic generated on the SPEC website by many different Web robots, including, among others, the robots employed by some popular search engines. This extensive investigation shows that the behavior and crawling patterns of the robots vary significantly in terms of requests, resources, and clients involved in their crawling activities. Some robots tend to concentrate their requests in short periods of time and follow deterministic patterns characterized by multiple peaks. The requests of other robots exhibit a time-dependent behavior and repeated patterns with some periodicity. We represent the traffic as a time series modelled in the frequency domain. The identified models, consisting of trigonometric polynomials and Auto Regressive Moving Average components, accurately summarize the behavior of the overall traffic as well as the traffic of individual robots. These models can be easily used as a basis for forecasting.
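The trigonometric-polynomial part of such a model can be sketched by projecting a traffic series onto one harmonic of a known period (synthetic data and a single harmonic for illustration; the paper's full models also include ARMA components, omitted here):

```python
import math

def fit_harmonic(y, period):
    # Least-squares fit of one cosine/sine pair at a known period:
    # over an integer number of periods, projecting the de-meaned
    # series onto cos and sin recovers the harmonic's coefficients.
    n = len(y)
    mean = sum(y) / n
    a = 2 / n * sum((y[t] - mean) * math.cos(2 * math.pi * t / period)
                    for t in range(n))
    b = 2 / n * sum((y[t] - mean) * math.sin(2 * math.pi * t / period)
                    for t in range(n))
    return mean, a, b

# Synthetic hourly request counts with a 24-hour daily cycle
y = [100.0 + 30.0 * math.cos(2 * math.pi * t / 24) for t in range(240)]
level, a, b = fit_harmonic(y, 24)
```

The recovered level and coefficients reproduce the underlying cycle, and extrapolating the fitted polynomial beyond the observed window is exactly the forecasting use mentioned in the abstract.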