
Publications


Featured research published by M Mezzanzanica.


Journal of Human Hypertension | 2009

Do socioeconomic disparities affect accessing and keeping antihypertensive drug therapy? Evidence from an Italian population-based study.

Giovanni Corrao; Antonella Zambon; Andrea Parodi; M Mezzanzanica; L Merlino; Giancarlo Cesana; Giuseppe Mancia

We conducted this population-based cohort study by linking several databases to explore the role of socioeconomic position in accessing and keeping antihypertensive drug therapy. A total of 71 469 patients, residents of the city of Milan (Italy) aged 40–80 years, who received an antihypertensive drug during 1999–2002 were followed for 1 year starting from the first dispensation. Socioeconomic position and drug prescriptions were obtained from the tax registry and the outpatient prescription database, respectively. The effect of socioeconomic characteristics on the standardized incidence rate (SIR) of new users of antihypertensive agents, the odds ratio (OR) of using combined antihypertensive agents and non-antihypertensive drugs, and the hazard ratio (HR) of discontinuing antihypertensive therapy were estimated after adjustment for potential confounders. SIRs were 3.7 and 4.2 per 1000 person-months among persons at the lowest and intermediate income, respectively, and 2.4 and 3.0 among immigrants and Italians, respectively. Compared to persons at the highest income, those at the lowest income had increased chances of starting with combined antihypertensive drugs (OR: 1.1; 95% confidence intervals (CIs): 1.0, 1.2), and of using drugs for heart failure (OR: 1.5; CIs: 1.3, 1.6) and diabetes (OR: 1.7; CIs: 1.6, 1.9). Compared with Italians, non-western immigrants had increased chances of starting with combined antihypertensive agents (OR: 1.2; CIs: 1.0, 1.3), of using drugs for heart failure (OR: 1.2; CIs: 1.0, 1.4) and for diabetes (OR: 1.8; CIs: 1.6, 2.1), and of interrupting antihypertensive therapy (HR: 1.1; 95% CIs: 1.0, 1.2). Despite the universal health coverage of the Italian National Health Service (NHS), social disparities affect accessing and keeping antihypertensive therapy.
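
The odds ratios reported above follow the standard epidemiological computation. As a quick illustration, the sketch below derives an odds ratio and its Wald 95% confidence interval from a hypothetical 2×2 table; the counts are invented for illustration and are not the study's data.

```python
import math

# Hypothetical 2x2 table (NOT the study's data): counts of patients
# starting with combined antihypertensive therapy vs. monotherapy,
# split by income group.
a, b = 1_100, 9_000     # lowest income: combined, monotherapy
c, d = 1_000, 10_000    # highest income (reference): combined, monotherapy

# Odds ratio of starting with combined therapy, lowest vs. highest income.
odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval, computed on the log-odds scale.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```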


Information Processing and Management | 2015

A model-based evaluation of data quality activities in KDD

M Mezzanzanica; Roberto Boselli; Mirko Cesarini; Fabio Mercorio

We live in the Information Age, where most personal, business, and administrative data are collected and managed electronically. However, poor data quality may affect the effectiveness of knowledge discovery processes, making the development of data improvement steps a significant concern. In this paper we propose the Multidimensional Robust Data Quality Analysis, a domain-independent technique aimed at improving data quality by evaluating the effectiveness of a black-box cleansing function. The proposed approach has been realized through model checking techniques and then applied to a weakly structured dataset describing the working careers of millions of people. Our experimental outcomes show the effectiveness of our model-based approach to data quality: they provide a fine-grained analysis of both the source dataset and the cleansing procedures, enabling domain experts to identify the most relevant quality issues as well as the action points for improving the cleansing activities. Finally, an anonymized version of the dataset and the analysis results have been made publicly available to the community.
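
To give a flavour of what checking a working career against a consistency model involves, here is a minimal toy sketch. The paper builds its models with model checking techniques; the states, events, and transitions below are illustrative assumptions, not the paper's actual model.

```python
# Toy finite-state consistency model for a working career: a transition
# table maps (current state, event) to the next state; any pair missing
# from the table is an inconsistency.
TRANSITIONS = {
    ("unemployed", "start"):   "employed",
    ("employed",   "end"):     "unemployed",
    ("employed",   "convert"): "employed",   # e.g. contract type change
}

def check_career(events, state="unemployed"):
    """Replay a career's event sequence; report the first inconsistency."""
    for i, event in enumerate(events):
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            return f"inconsistent at event {i}: '{event}' in state '{state}'"
        state = nxt
    return "consistent"

print(check_career(["start", "convert", "end"]))   # consistent
print(check_career(["start", "start"]))            # two starts: inconsistent
```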


International Conference on Data Technologies and Applications | 2013

Automatic Synthesis of Data Cleansing Activities

M Mezzanzanica; Roberto Boselli; Mirko Cesarini; Fabio Mercorio

Data cleansing is growing in importance among both public and private organisations, mainly due to the large amount of data exploited to support decision-making processes. This paper aims to show how model-based verification algorithms (namely, model checking) can contribute to addressing data cleansing issues; furthermore, a new benchmark problem focusing on labour market dynamics is introduced. The consistent evolution of the data is checked using a model defined on the basis of domain knowledge. We then formally introduce the concept of a universal cleanser, i.e., an object which summarises the set of all cleansing actions for each feasible data inconsistency (according to a given consistency model), and provide an algorithm which synthesises it. The universal cleanser can be seen as a repository of corrective interventions useful for developing cleansing routines. We applied our approach to a dataset derived from Italian labour market data, making the whole dataset and outcomes publicly available to the community, so that the results we present can be shared and compared with other techniques.
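
As a rough illustration of the universal-cleanser idea, the sketch below enumerates every (state, event) pair that a toy consistency model forbids and attaches candidate corrective actions to each. The paper synthesises this object automatically via model checking; the model and the repairs here are hand-written assumptions.

```python
# Sketch of a "universal cleanser": a repository mapping each feasible
# inconsistency (state, unexpected event) to candidate corrective actions.
from itertools import product

STATES = ["unemployed", "employed"]
EVENTS = ["start", "end"]
TRANSITIONS = {("unemployed", "start"): "employed",
               ("employed", "end"): "unemployed"}

def synthesise_cleanser():
    """Enumerate every (state, event) pair the model forbids and attach
    corrective actions that restore a consistent sequence."""
    cleanser = {}
    for state, event in product(STATES, EVENTS):
        if (state, event) not in TRANSITIONS:
            # Candidate repairs: drop the event, or insert the missing
            # complementary event before it (domain experts pick one).
            missing = "end" if event == "start" else "start"
            cleanser[(state, event)] = [
                f"drop '{event}'",
                f"insert '{missing}' before '{event}'",
            ]
    return cleanser

for inconsistency, actions in synthesise_cleanser().items():
    print(inconsistency, "->", actions)
```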


International Conference on Data Technologies and Applications | 2012

Data Quality Sensitivity Analysis on Aggregate Indicators

M Mezzanzanica; Roberto Boselli; Mirko Cesarini; Fabio Mercorio

Decision-making activities impose stringent data and information quality requirements. The quality of data sources is frequently very poor, so a cleansing process is required before such data can be used for decision making. When alternative (and more trusted) data sources are not available, data can be cleansed only by using business rules derived from domain knowledge. Business rules focus on fixing inconsistencies, but an inconsistency can be cleansed in different ways (i.e., the correction may not be deterministic); therefore the choice of how to cleanse the data can (even strongly) affect the aggregate values computed for decision-making purposes. The paper proposes a methodology exploiting Finite State Systems to quantitatively estimate how computed variables and indicators might be affected by the uncertainty related to low data quality, independently of the data cleansing methodology used. The methodology has been implemented and tested on a real-case scenario, providing effective results.
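
A minimal sketch of the sensitivity idea, with invented records: when an inconsistent record admits several feasible repairs, the same aggregate indicator is computed under each repair, and the spread quantifies how much the cleansing choice alone can move the result.

```python
# Illustrative sensitivity analysis: compute an aggregate indicator under
# each feasible repair of an inconsistent record and report the spread.
# Records, repairs, and the indicator are invented for illustration.
import statistics

raw_durations = [12, 7, None, 30]   # months employed; None = inconsistent record

# Two feasible cleansing choices for the inconsistent record.
repairs = {
    "drop_record":   [d for d in raw_durations if d is not None],
    "impute_median": [d if d is not None else 12 for d in raw_durations],
}

for name, cleansed in repairs.items():
    indicator = statistics.mean(cleansed)   # e.g. average employment duration
    print(f"{name}: mean duration = {indicator:.2f} months")

# The gap between the two values bounds how much the cleansing choice
# alone can shift the indicator.
```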


Knowledge Discovery and Data Mining | 2013

Inconsistency Knowledge Discovery for Longitudinal Data Management: A Model-Based Approach

Roberto Boselli; Mirko Cesarini; Fabio Mercorio; M Mezzanzanica

In recent years, the growing diffusion of IT-based services has given rise to the use of huge masses of data. However, using data for analytical and decision-making purposes requires performing several tasks, e.g. data cleansing, data filtering, data aggregation and synthesis, etc. Tools and methodologies that empower people to appropriately manage the (high) complexity of large datasets are required.


Knowledge Discovery and Data Mining | 2014

A Policy-Based Cleansing and Integration Framework for Labour and Healthcare Data

Roberto Boselli; Mirko Cesarini; Fabio Mercorio; M Mezzanzanica

Large amounts of data are collected by public administrations and healthcare organizations; integrating the data scattered across several information systems can facilitate the comprehension of complex scenarios and support the activities of decision makers.


Intelligent Data Analysis | 2011

Data quality through model checking techniques

M Mezzanzanica; Roberto Boselli; Mirko Cesarini; Fabio Mercorio

The paper introduces the Robust Data Quality Analysis, which exploits formal methods to support data quality improvement processes. The proposed methodology can be applied to data sources containing sequences of events that can be modelled by Finite State Systems. Consistency rules (derived from domain business rules) can be expressed by formal methods and automatically verified on the data, both before and after the execution of cleansing activities. The assessment results can provide useful information for improving the data quality processes. The paper outlines the preliminary results of applying the methodology to a real-case scenario: the cleansing of a very low-quality database containing the work careers of the inhabitants of an Italian province. The methodology has proved successful, giving insights into the data quality levels and providing suggestions on how to improve the overall data quality process.
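
The sketch below illustrates the before/after verification step with a deliberately simple consistency rule (alternating start/end events). The actual rules in the paper are formally specified; both the rule and the data here are invented.

```python
# Sketch of verifying consistency rules before and after cleansing, to
# measure what a cleansing step actually fixed.
def is_consistent(career):
    """Rule: career events must strictly alternate start/end, beginning
    with 'start' (a stand-in for the paper's formally specified rules)."""
    expected = "start"
    for event in career:
        if event != expected:
            return False
        expected = "end" if expected == "start" else "start"
    return True

dirty = [["start", "end"], ["start", "start"], ["end"]]
cleansed = [["start", "end"], ["start", "end"], ["start", "end"]]

for label, dataset in (("before", dirty), ("after", cleansed)):
    rate = sum(map(is_consistent, dataset)) / len(dataset)
    print(f"{label} cleansing: {rate:.0%} of careers consistent")
```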


IEEE International Conference on Semantic Computing | 2015

Challenge: Processing web texts for classifying job offers

Flora Amato; Roberto Boselli; Mirko Cesarini; Fabio Mercorio; M Mezzanzanica; Vincenzo Moscato; Fabio Persia; Antonio Picariello

Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit rules, machine learning, and LDA-based algorithms, to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.
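
As a minimal sketch of the machine-learning strand of such a comparison (not the paper's actual pipeline), a TF-IDF representation feeding a linear classifier can map job-offer texts to occupation classes; the texts and labels below are invented examples.

```python
# Toy job-offer classifier: TF-IDF features + linear SVM (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "senior java developer for banking platform",
    "registered nurse for night shifts",
    "java backend engineer, spring experience",
    "nurse needed in pediatric ward",
]
train_labels = ["ICT professional", "health professional",
                "ICT professional", "health professional"]

# Word unigrams and bigrams give the classifier some phrase context.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)

print(model.predict(["python developer with web experience"]))
```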


Journal of Data and Information Quality | 2015

A Model-Based Approach for Developing Data Cleansing Solutions

M Mezzanzanica; Roberto Boselli; Mirko Cesarini; Fabio Mercorio

The data extracted from electronic archives is a valuable asset; however, the issue of (poor) data quality should be addressed before performing data analysis and decision-making activities. Poor-quality data is frequently cleansed using business rules derived from domain knowledge. Unfortunately, the process of designing and implementing cleansing activities based on business rules requires considerable effort. In this article, we illustrate a model-based approach to performing inconsistency identification and corrective interventions, thus simplifying the process of developing cleansing activities. The article shows how the cleansing activities required to perform a sensitivity analysis can be easily developed using the proposed model-based approach. The sensitivity analysis provides insights into how the cleansing activities can affect the results of indicator computation. The approach has been successfully used on a database describing the working histories of the population of an Italian area. A model formalising how data should evolve over time in this domain (i.e., a data consistency model) was created by means of formal methods and used to perform the cleansing and sensitivity analysis activities.


Engineering | 2016

Big Data Research in Italy: A Perspective

Sonia Bergamaschi; Emanuele Carlini; Michelangelo Ceci; Barbara Furletti; Fosca Giannotti; Donato Malerba; M Mezzanzanica; Anna Monreale; Gabriella Pasi; Dino Pedreschi; Raffaele Perego; Salvatore Ruggieri

The aim of this article is to synthetically describe the research projects that a selection of Italian universities is undertaking in the context of big data. Far from being exhaustive, the article offers a sample of distinct applications that address the issue of managing huge amounts of data in Italy, collected across diverse domains.

Collaboration


Dive into M Mezzanzanica's collaborations.

Top Co-Authors

Roberto Boselli
University of Milano-Bicocca

Paolo Mariani
University of Milano-Bicocca

Antonio Picariello
University of Naples Federico II

Gloria Ronzoni
University of Milano-Bicocca

Vincenzo Moscato
University of Naples Federico II