Michele Scagliarini
University of Bologna
Publications
Featured research published by Michele Scagliarini.
Quality and Reliability Engineering International | 2006
Silvano Bordignon; Michele Scagliarini
In this paper we study the properties of the estimator of Cpm when the observations are affected by measurement errors. We compare the performance of the estimator in the error case with that of the estimator in the error-free case. The results indicate that the presence of measurement errors in the data leads to different behavior of the estimator depending on the magnitude of the error variability. We finally show how to use our results in practice.
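The effect is straightforward to reproduce numerically. Below is a minimal Monte Carlo sketch, not the paper's exact setup: the specification limits, target, process parameters and error variance are illustrative assumptions, and the estimator is one common form of the Cpm estimator.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (not taken from the paper)
LSL, USL, T = 9.0, 11.0, 10.0     # specification limits and target
mu, sigma = 10.1, 0.25            # true process mean and std dev
sigma_e = 0.10                    # measurement-error std dev
n, n_rep = 50, 10_000             # sample size and replications

def cpm_hat(x, lsl, usl, target):
    """One common estimator of Cpm: (USL-LSL) / (6*sqrt(s^2 + (xbar-T)^2))."""
    s2 = x.var(ddof=1)
    return (usl - lsl) / (6.0 * np.sqrt(s2 + (x.mean() - target) ** 2))

est_free, est_err = [], []
for _ in range(n_rep):
    y = rng.normal(mu, sigma, n)           # error-free observations
    x = y + rng.normal(0.0, sigma_e, n)    # observations contaminated by gauge error
    est_free.append(cpm_hat(y, LSL, USL, T))
    est_err.append(cpm_hat(x, LSL, USL, T))

print(f"mean Cpm-hat, error-free : {np.mean(est_free):.3f}")
print(f"mean Cpm-hat, with error : {np.mean(est_err):.3f}")  # systematically lower
```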
Communications in Statistics-theory and Methods | 2002
Michele Scagliarini
The properties of the estimator of Cp for autocorrelated data in the presence of measurement errors are discussed. This work is motivated by the fact that, while some effort has been dedicated in the literature to the statistical properties of the capability index estimator when the data are autocorrelated, little attention has been given to the evaluation of these properties when sample data are affected by measurement errors. In this paper, for a first-order stationary autoregressive process, the performance of the estimator of Cp in the case of measurement errors is derived and compared with that obtained in the error-free case.
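The two population-level ingredients are easy to write down: the marginal variance of a stationary AR(1) process and the variance inflation caused by additive white measurement error. A minimal sketch of the quantities the estimator targets, with all parameter values assumed for illustration:

```python
import numpy as np

# Illustrative assumptions (not the paper's values)
LSL, USL = 9.0, 11.0
phi, sigma_eps = 0.5, 0.2    # AR(1) coefficient and innovation std dev
sigma_e = 0.1                # measurement-error std dev

# Marginal variance of the AR(1) process X_t = phi*X_{t-1} + eps_t
sigma2_x = sigma_eps**2 / (1 - phi**2)

# Additive white error inflates the variance of the measured values Y = X + E
cp_true = (USL - LSL) / (6 * np.sqrt(sigma2_x))
cp_obs = (USL - LSL) / (6 * np.sqrt(sigma2_x + sigma_e**2))
print(f"Cp of the true process     : {cp_true:.3f}")
print(f"Cp of the measured process : {cp_obs:.3f}")   # attenuated by the gauge error
```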
Environmetrics | 2000
Silvano Bordignon; Michele Scagliarini
The quality of data collected by air pollution monitoring networks is often affected by inaccuracies and missing-data problems, mainly due to breakdowns and/or biases of the measurement instruments. In this paper we propose a statistical method to detect, as soon as possible, biases in the measurement devices, in order to improve the quality of the collected data online. The technique is based on the joint use of stochastic modelling and statistical process control algorithms. This methodology is applied to the mean hourly ozone concentrations recorded at one monitoring site of the Bologna urban area network. We set up the monitoring algorithm through Monte Carlo simulations in such a way as to detect anomalies in the data within a reasonable delay. The results show several out-of-control signals that may be caused by problems in the measurement device.
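The general scheme, fit a time-series model and chart its one-step-ahead residuals, can be sketched in a few lines. The fragment below uses a synthetic AR(1) stand-in for hourly ozone and a two-sided tabular CUSUM; the model, parameters and injected bias are all illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for an hourly ozone series (all values illustrative)
n, phi, sigma = 300, 0.6, 5.0
x = np.empty(n)
x[0] = rng.normal(0, sigma / np.sqrt(1 - phi**2))    # stationary start
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0, sigma)
x[100:] += 15.0                      # inject a sensor bias at t = 100

# One-step-ahead residuals from the (here, known) AR(1) model, standardised
resid = x[1:] - phi * x[:-1]
z = resid / sigma

# Two-sided tabular CUSUM on the residuals; signals when either side exceeds h
k, h = 0.5, 5.0                      # reference value and decision interval
cpos = cneg = 0.0
for t, zt in enumerate(z, start=1):
    cpos = max(0.0, cpos + zt - k)
    cneg = max(0.0, cneg - zt - k)
    if cpos > h or cneg > h:
        print(f"first out-of-control signal at t = {t}")
        break
```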
The American Journal of Gastroenterology | 2016
Nicola de Bortoli; Leonardo Frazzoni; Edoardo Savarino; Marzio Frazzoni; Irene Martinucci; Aleksandra Jania; Salvatore Tolone; Michele Scagliarini; M. Bellini; Elisa Marabotto; Manuele Furnari; Giorgia Bodini; Salvatore Russo; Lorenzo Bertani; Veronica Natali; Lorenzo Fuccio; Vincenzo Savarino; Corrado Blandizzi; Santino Marchi
Objectives: We aimed to evaluate the prevalence of irritable bowel syndrome (IBS) in patients with typical reflux symptoms as distinguished into gastroesophageal reflux disease (GERD), hypersensitive esophagus (HE), and functional heartburn (FH) by means of endoscopy and multichannel intraluminal impedance (MII)-pH monitoring. The secondary aim was to detect pathophysiological and clinical differences between different sub-groups of patients with heartburn. Methods: Patients underwent a structured interview based on questionnaires for GERD, IBS, anxiety, and depression. Off-therapy upper-gastrointestinal (GI) endoscopy and 24 h MII-pH monitoring were performed in all cases. In patients with IBS, fecal calprotectin was measured and colonoscopy was scheduled for values >100 mg/kg to exclude organic disease. Multivariate logistic regression analysis was performed to identify independent risk factors for FH. Results: Of the 697 consecutive heartburn patients who entered the study, 454 (65%) had reflux-related heartburn (GERD+HE), whereas 243 (35%) had FH. IBS was found in 147/454 (33%) GERD/HE patients but in 187/243 (77%) FH patients (P<0.001). At multivariate analysis, IBS and anxiety were independent risk factors for FH in comparison with reflux-related heartburn (GERD+HE). Conclusions: IBS overlaps more frequently with FH than with GERD and HE, suggesting common pathways and treatment. HE showed intermediate characteristics between GERD and FH.
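The multivariate step is a standard logistic regression. The sketch below reproduces only the shape of that analysis on entirely synthetic data: the prevalences and effect sizes are arbitrary assumptions with no relation to the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Entirely synthetic data, for illustration only
n = 697
ibs = rng.binomial(1, 0.48, n)                  # IBS indicator
anxiety = rng.binomial(1, 0.40, n)              # anxiety indicator
logit_p = -1.5 + 1.8 * ibs + 0.9 * anxiety      # assumed effects, chosen arbitrarily
fh = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # FH indicator

# Multivariate logistic regression: FH ~ IBS + anxiety
X = sm.add_constant(np.column_stack([ibs, anxiety]))
fit = sm.Logit(fh, X).fit(disp=False)
print(fit.summary(xname=["const", "IBS", "anxiety"]))
print("odds ratios:", np.exp(fit.params[1:]))
```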
European Journal of Gastroenterology & Hepatology | 2015
Eleonora Scaioli; Michele Scagliarini; Carla Cardamone; Elisa Liverani; Giampaolo Ugolini; Davide Festi; Franco Bazzoli; Andrea Belluzzi
Objective Faecal calprotectin (FC) is the most relevant noninvasive biomarker for monitoring inflammatory status, response to treatment and for predicting clinical relapse in ulcerative colitis (UC). The aim of this study was to evaluate the role of FC in predicting both clinical/endoscopic activity and clinical relapse in a large UC patient cohort. Patients and methods A two-phase prospective study was carried out. In the first phase, the relationship between FC and clinical/endoscopic activity was evaluated. In the second phase, a cohort of asymptomatic patients with endoscopic mucosal healing was followed up using clinical and FC level determinations. Results One hundred and twenty-one UC patients were enrolled. The FC concentrations were directly correlated with both clinical and endoscopic activity (r=0.76 and 0.87, respectively, P<0.05) and were capable of differentiating between different degrees of endoscopic severity (P<0.01). An FC cut-off value of 110 μg/g was highly predictive (95%) of endoscopic activity. Seventy-four patients in clinical remission with mucosal healing were followed up for a year or until relapse and 27% developed a clinical relapse. The FC concentration of nonrelapsed patients (48 μg/g) versus relapsed patients (218 μg/g) was significantly different (P<0.01). An FC cut-off value of 193 μg/g had an accuracy of 89% in predicting clinical relapse. High FC levels were associated with clinical relapse using survival analysis and multivariate analysis. Conclusion Our data strongly support the use of FC for staging the activity of disease, predicting relapse and guiding decision-making in a UC setting.
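Cut-off values like those reported here are typically derived from a ROC analysis. A minimal sketch on synthetic FC values; the distributions and the resulting cut-off are purely illustrative and have no relation to the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)

# Synthetic FC values (log-normal), for illustration only
inactive = rng.lognormal(mean=np.log(50), sigma=0.6, size=60)    # endoscopically inactive
active = rng.lognormal(mean=np.log(300), sigma=0.6, size=60)     # endoscopically active
fc = np.concatenate([inactive, active])
truth = np.concatenate([np.zeros(60), np.ones(60)])

fpr, tpr, thresholds = roc_curve(truth, fc)
youden = tpr - fpr                    # Youden's J at each candidate cut-off
best = np.argmax(youden)
print(f"AUC = {roc_auc_score(truth, fc):.2f}")
print(f"optimal FC cut-off ~ {thresholds[best]:.0f} ug/g "
      f"(sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f})")
```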
Statistical Methods and Applications | 2001
Silvano Bordignon; Michele Scagliarini
Process capability indices (PCIs) have been widely used in manufacturing industries to provide a quantitative measure of process potential and performance. While some effort has been dedicated in the literature to the statistical properties of PCI estimators, little attention has been given to the evaluation of these properties when sample data are affected by measurement errors. In this work we deal with the problem of the effects of measurement errors on the performance of PCIs. The analysis is illustrated with reference to Cp, i.e. the simplest and most common measure suggested to evaluate process capability.
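For Cp the distortion has a simple closed form: if the measured value is Y = X + E with the error independent of the process, the observed index is the true one deflated by sqrt(1 + (sigma_e/sigma_x)^2). A few illustrative values (all assumed):

```python
import numpy as np

# If Y = X + E with E ~ N(0, sigma_e^2) independent of X, then
#   Cp_observed = Cp_true / sqrt(1 + (sigma_e / sigma_x)**2)
sigma_x, cp_true = 1.0, 1.5
for ratio in (0.1, 0.3, 0.5):        # gauge-to-process std-dev ratios
    cp_obs = cp_true / np.sqrt(1 + ratio**2)
    print(f"sigma_e/sigma_x = {ratio:.1f}  ->  observed Cp = {cp_obs:.3f}")
```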
Quality and Reliability Engineering International | 2015
Michele Scagliarini
Important features of multivariate process capability indices are comparability, interpretability and ease of implementation. When poor process capability is indicated by an index, the user should determine why the process is incapable (e.g. excessive variability or an off-target process mean). One of the most widely used multivariate process capability indices is MCpm, because it provides assessments of both process precision and accuracy. In this work we study and discuss a peculiarity of MCpm: processes that are equivalent in terms of precision, accuracy and the MCpm index can, after the occurrence of the same increase in process variability, have different values of the index. Because MCpm is often used for comparing processes, this behaviour may cause comparability difficulties. We therefore suggest how to take this specific behaviour into account in order to avoid erroneous conclusions.
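For reference, one common definition of MCpm (the Taam, Subbaiah and Liddy form) can be computed as a ratio of ellipsoid volumes deflated by an off-target penalty. The sketch below implements that definition; the tolerance region, covariance matrix and target are illustrative assumptions, and the paper's peculiarity is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2
from scipy.special import gamma as gamma_fn

def mcpm_taam(mu, Sigma, target, semi_axes, p=0.9973):
    """MCpm in the Taam-Subbaiah-Liddy form (one common definition).

    Tolerance region: ellipsoid centred at `target` with given semi-axes.
    Process region:   ellipsoid containing 100p% of a normal process.
    """
    v = len(mu)
    k = chi2.ppf(p, df=v)
    vol_tol = np.pi ** (v / 2) * np.prod(semi_axes) / gamma_fn(v / 2 + 1)
    vol_proc = (np.pi * k) ** (v / 2) * np.sqrt(np.linalg.det(Sigma)) / gamma_fn(v / 2 + 1)
    d = np.asarray(mu) - np.asarray(target)
    D = np.sqrt(1 + d @ np.linalg.solve(Sigma, d))   # off-target penalty
    return (vol_tol / vol_proc) / D

# Illustrative bivariate example (values are arbitrary)
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
print(f"MCpm = {mcpm_taam([10.05, 5.0], Sigma, [10.0, 5.0], [1.0, 1.5]):.3f}")
```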
Journal of Applied Statistics | 2010
Michele Scagliarini
The present paper examines the properties of the Cpk estimator when observations are autocorrelated and affected by measurement errors. The underlying reason for this choice of subject matter is that in industrial applications process data are often autocorrelated, especially when the sampling frequency is not particularly low, and even with the most advanced measuring instruments gauge imprecision needs to be taken into consideration. In the case of a first-order stationary autoregressive process, we compare the statistical properties of the estimator in the error case with those of the estimator in the error-free case. Results indicate that the presence of gauge measurement errors leads the estimator to behave differently depending on the magnitude of the error variability.
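A minimal simulation of this setting, an AR(1) process observed through a noisy gauge, using the usual Cpk estimator; every parameter value is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative parameters (not taken from the paper)
LSL, USL = 9.0, 11.0
mu, phi, sigma_eps, sigma_e = 10.2, 0.6, 0.2, 0.1
n, n_rep = 100, 5_000

sigma2_x = sigma_eps**2 / (1 - phi**2)           # marginal AR(1) variance
cpk_true = min(USL - mu, mu - LSL) / (3 * np.sqrt(sigma2_x))

est = []
for _ in range(n_rep):
    x = np.empty(n)
    x[0] = mu + rng.normal(0, np.sqrt(sigma2_x))           # stationary start
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0, sigma_eps)
    y = x + rng.normal(0, sigma_e, n)                      # gauge error added
    s, ybar = y.std(ddof=1), y.mean()
    est.append(min(USL - ybar, ybar - LSL) / (3 * s))      # Cpk estimator

print(f"true Cpk = {cpk_true:.3f},  mean Cpk-hat = {np.mean(est):.3f}")
```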
Quality and Reliability Engineering International | 2015
Michele Scagliarini
Multivariate measurement systems analysis (MSA) is usually performed by designing suitable gauge repeatability and reproducibility (R&R) experiments, ignoring the data generated by the measurement system while it is used for inspection or process control. This article proposes an approach that, by using the data routinely available from the regular operation of the instrument, allows the measurement instrument's current precision to be compared against a benchmark. The proposed method may be used in an integrated and coordinated manner with the usual multivariate gauge study, in the sense that it can be used to assess the stability of, or a possible deterioration in, the precision of the measurement instrument while operational. The complementary use of the proposed approach and traditional multivariate gauge R&R studies can therefore be a useful strategy for improving the overall quality of multivariate measurement systems. Furthermore, because it can be implemented at almost no additional cost, it may be effective in reducing the costs of a multivariate MSA performed with a certain frequency.
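One plausible reading of the idea, not necessarily the paper's exact procedure, is to compare the generalized variance of routinely collected repeated measurements against limits derived under a benchmark covariance. A sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(5)

# Benchmark gauge covariance from a past R&R study (illustrative values)
Sigma0 = np.array([[0.010, 0.002],
                   [0.002, 0.008]])
m = 25                                    # repeated measurements per batch

# Monte Carlo reference distribution of |S| under the benchmark precision
ref = []
for _ in range(20_000):
    z = rng.multivariate_normal(np.zeros(2), Sigma0, size=m)
    ref.append(np.linalg.det(np.cov(z, rowvar=False)))
ucl = np.quantile(ref, 0.995)             # upper limit on the generalized variance

# Routine data from the instrument in operation (here simulated with
# inflated variability, as if the gauge had deteriorated)
routine = rng.multivariate_normal(np.zeros(2), 1.8 * Sigma0, size=m)
gv = np.linalg.det(np.cov(routine, rowvar=False))
print(f"|S| = {gv:.2e}, UCL = {ucl:.2e}, degraded: {gv > ucl}")
```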
Quality and Reliability Engineering International | 2018
Michele Scagliarini
In this study we propose a sequential procedure for hypothesis testing on the Cpk process capability index. We compare the properties of the sequential test with the performance of non-sequential tests by means of an extensive simulation study. The results indicate that the proposed sequential procedure yields substantial savings in sample size, which translate into reduced costs, time and resources.
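The paper's exact procedure is not reproduced here, but the flavour of a Wald-type sequential test is easy to convey in the simplified case of a known, on-target mean, where hypotheses on Cpk reduce to hypotheses on the process standard deviation. All numerical values are assumed.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simplified setting: mean known and on target, so Cpk = (USL-LSL)/(6*sigma)
# and hypotheses on Cpk map to hypotheses on sigma. Illustrative values:
LSL, USL, mu = 9.0, 11.0, 10.0
cpk0, cpk1 = 1.33, 1.00                    # H0: capable vs H1: not capable
s0 = (USL - LSL) / (6 * cpk0)
s1 = (USL - LSL) / (6 * cpk1)
alpha, beta = 0.05, 0.10
a, b = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)  # Wald boundaries

llr, n = 0.0, 0
while a < llr < b:
    x = rng.normal(mu, s1)                 # data drawn from the H1 process here
    n += 1
    # log-likelihood ratio of N(mu, s1^2) against N(mu, s0^2)
    llr += np.log(s0 / s1) + (x - mu) ** 2 * (1 / s0**2 - 1 / s1**2) / 2
print(f"decision after n = {n} observations:",
      "reject H0" if llr >= b else "accept H0")
```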