Ruggero Bellio
University of Udine
Publications
Featured research published by Ruggero Bellio.
Journal of Scheduling | 2012
Ruggero Bellio; Luca Di Gaspero; Andrea Schaerf
We propose a hybrid local search algorithm for the solution of the Curriculum-Based Course Timetabling Problem and we undertake a systematic statistical study of the relative influence of the relevant features on the performance of the algorithm. In particular, we apply modern statistical techniques for the design and analysis of experiments, such as nearly orthogonal space-filling Latin hypercubes and response surface methods. As a result of this analysis, our technique, properly tuned, compares favorably with the best known ones for this problem.
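A basic Latin hypercube sampler conveys the space-filling idea behind such experimental designs; this is a minimal sketch, not the nearly orthogonal construction used in the paper:

```python
import numpy as np

def latin_hypercube(n_points, n_dims, rng=None):
    """Draw a Latin hypercube sample in [0, 1)^n_dims.

    Along every dimension, each of the n_points rows falls in a distinct
    one of n_points equal-width strata, which spreads the design points
    evenly across the parameter space for tuning experiments.
    """
    rng = np.random.default_rng(rng)
    # One uniform point inside each of the n_points equal-width strata.
    u = (rng.random((n_points, n_dims)) + np.arange(n_points)[:, None]) / n_points
    # Independently shuffle the strata in every dimension.
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_points), d]
    return u

sample = latin_hypercube(10, 3, rng=0)
```

Each column of `sample` contains exactly one point per tenth of the unit interval, unlike a plain uniform sample.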
Computers & Operations Research | 2016
Ruggero Bellio; Sara Ceschia; Luca Di Gaspero; Andrea Schaerf; Tommaso Urli
We consider the university course timetabling problem, which is one of the most studied problems in educational timetabling. In particular, we focus our attention on the formulation known as the curriculum-based course timetabling problem, which has been tackled by many researchers and for which there are many available benchmarks. The contribution of this paper is twofold. First, we propose an effective and robust single-stage simulated annealing method for solving the problem. Second, we design and apply an extensive and statistically principled methodology for the parameter tuning procedure. The outcome of this analysis is a methodology for modeling the relationship between search method parameters and instance features that allows us to set the parameters for unseen instances on the basis of a simple inspection of the instance itself. Using this methodology, our algorithm, despite its apparent simplicity, has been able to achieve high quality results on a set of popular benchmarks. A final contribution of the paper is a novel set of real-world instances, which could be used as a benchmark for future comparison.
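The flavour of a single-stage simulated annealing search can be sketched on a toy conflict-minimisation problem (graph colouring, a simplified analogue of timetabling, with events as nodes and conflicts as edges); the parameter values here are illustrative, not the tuned ones from the paper:

```python
import math
import random

def simulated_annealing(edges, n_nodes, n_colors, t0=2.0, cooling=0.995,
                        steps=20000, seed=0):
    """Single-stage simulated annealing for graph colouring.

    Returns a colouring and its number of conflicting edges.
    """
    rng = random.Random(seed)
    colors = [rng.randrange(n_colors) for _ in range(n_nodes)]

    def cost(cols):
        return sum(cols[a] == cols[b] for a, b in edges)

    current = cost(colors)
    t = t0
    for _ in range(steps):
        node = rng.randrange(n_nodes)
        old = colors[node]
        colors[node] = rng.randrange(n_colors)  # random neighbour move
        new = cost(colors)
        delta = new - current
        # Metropolis criterion: always accept improvements, accept
        # worsening moves with probability exp(-delta / t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = new
        else:
            colors[node] = old
        t *= cooling  # geometric cooling schedule
    return colors, current

# A 6-cycle is 2-colourable, so the search should reach zero conflicts.
cycle = [(i, (i + 1) % 6) for i in range(6)]
coloring, conflicts = simulated_annealing(cycle, n_nodes=6, n_colors=2)
```

The single-stage aspect is that one annealing run with one cooling schedule handles the whole search, with no separate construction or intensification phases.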
Journal of Educational and Behavioral Statistics | 2011
Michela Battauz; Ruggero Bellio; Enrico Gori
This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error. Ignoring covariate measurement error leads to biased parameter estimates. To address this problem, a likelihood-based measurement error adjustment for multilevel models is proposed. In particular, the method deals with a covariate measured with error that has a random coefficient. An application to educational data from the Italian region of Lombardy illustrates the method. (Manuscript received January 21, 2009; revision received November 24, 2009; accepted January 17, 2010.)
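The attenuation bias caused by ignoring covariate measurement error is easy to reproduce in a small simulation (a generic single-level illustration, not the article's multilevel IRT model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 1.0                     # true effect of the latent covariate
sigma_x, sigma_e = 1.0, 1.0    # latent-score and measurement-error SDs

x = rng.normal(0.0, sigma_x, n)          # true (latent) covariate
w = x + rng.normal(0.0, sigma_e, n)      # error-prone observed score
y = beta * x + rng.normal(0.0, 0.5, n)   # response

def ols_slope(a, b):
    """Least-squares slope of b regressed on a."""
    a = a - a.mean()
    return float(a @ (b - b.mean()) / (a @ a))

slope_true = ols_slope(x, y)   # close to beta
slope_noisy = ols_slope(w, y)  # attenuated toward zero
# Classical result: E[slope_noisy] ~ beta * reliability, where
reliability = sigma_x**2 / (sigma_x**2 + sigma_e**2)  # = 0.5 here
```

With equal signal and noise variances the naive slope is roughly halved, which is the kind of bias the likelihood-based adjustment in the article is designed to remove.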
Experimental Physiology | 2014
Maria Pia Francescato; Valentina Cettolo; Ruggero Bellio
• What is the central question of this study? Is the accuracy and/or precision of the O2 uptake kinetics parameters, estimated by non-linear regression, affected by different data treatments applied to reduce the noise of breath fluctuations?
• What is the main finding and its importance? Simulations showed that, even after the averaging of more repetitions, the asymptotic confidence intervals were narrower than the real ones, in particular when the O2 uptake responses were resampled at time intervals shorter than the average breath duration (e.g. 1 s). The reasons for this discrepancy were investigated, allowing us to identify simple methods to obtain the correct confidence intervals of the O2 uptake kinetics parameters.
Journal of Computational and Graphical Statistics | 2003
Ruggero Bellio; Alessandra Rosalba Brazzale
One of the main criticisms against the use of higher order asymptotics is that the algebraic expressions involved are far too complex to be derived by hand in a reasonable amount of time. A further drawback is that the results are so closely tuned to the specific problem at hand that they almost always exclude the possibility of transferring available computer code to a different though similar problem. The aim of this article is to show that higher order asymptotics can be implemented in a general and flexible way so as to provide easy-to-use and self-contained software useful in routine data analysis. The programming strategy we develop easily applies to many parametric models. We illustrate it by describing the design of the core routines of the nlreg section of the S-Plus library HOA which implements higher order solutions for nonlinear heteroscedastic regression models.
Econometric Reviews | 2016
Francesco Bartolucci; Ruggero Bellio; Alessandra Salvan; Nicola Sartori
We show how modified profile likelihood methods, developed in the statistical literature, may be effectively applied to estimate the structural parameters of econometric models for panel data, with a remarkable reduction of bias with respect to ordinary likelihood methods. Initially, the implementation of these methods is illustrated for general models for panel data including individual-specific fixed effects and then, in more detail, for the truncated linear regression model and for dynamic regression models for binary data under different specifications. Simulation studies show the good behavior of the inference based on the modified profile likelihood, even when compared to an ideal, although infeasible, procedure (in which the fixed effects are known) and also to alternative estimators existing in the econometric literature. The proposed estimation methods are implemented in an R package that we make available to the reader.
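The incidental-parameters bias that modified profile likelihood corrects can be seen in the classic Neyman-Scott setting, sketched below; this is a minimal illustration of the bias and the simplest degrees-of-freedom correction, not the paper's estimator or its R package:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5_000, 2        # many individuals, few periods each
sigma2 = 1.0           # true error variance (the structural parameter)
mu = rng.normal(0.0, 3.0, N)   # individual-specific fixed effects
y = mu[:, None] + rng.normal(0.0, np.sqrt(sigma2), (N, T))

# Profiling out the fixed effects with per-individual means makes the
# plain MLE of sigma^2 inconsistent as N grows with T fixed:
resid = y - y.mean(axis=1, keepdims=True)
sigma2_mle = float((resid**2).sum() / (N * T))        # converges to sigma2*(T-1)/T
# A degrees-of-freedom correction, in the spirit of modified profile
# likelihood adjustments, restores consistency:
sigma2_adj = float((resid**2).sum() / (N * (T - 1)))  # converges to sigma2
```

With T = 2 the uncorrected estimate settles near half the true variance no matter how large N is, which is the bias pattern the paper's methods address in far more general models.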
Scandinavian Journal of Statistics | 2003
Ruggero Bellio
This paper presents the use of likelihood-based methods for controlled calibration. Recent results on higher-order asymptotics are exploited to obtain confidence regions for the output of the calibration process. A general likelihood-based approach is presented, and several types of calibration problems are tackled within this framework. The methods provide simple and accurate solutions that may prove useful in applications. The results are illustrated with reference to widely used models.
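The basic inverse-prediction step of controlled calibration can be sketched as follows (a point-estimate toy example with an assumed straight-line response; the paper's higher-order likelihood confidence regions go well beyond this):

```python
import numpy as np

# Toy controlled-calibration setup: an instrument response y = a + b*x is
# fitted on standards with known x, then inverted to estimate the unknown
# input x0 that produced a new reading y0.
rng = np.random.default_rng(0)
a_true, b_true = 2.0, 3.0
x_std = np.linspace(0.0, 10.0, 21)                       # known standards
y_std = a_true + b_true * x_std + rng.normal(0, 0.1, x_std.size)

b_hat, a_hat = np.polyfit(x_std, y_std, 1)               # calibration fit
y0 = a_true + b_true * 4.0                               # reading from unknown x0 = 4
x0_hat = (y0 - a_hat) / b_hat                            # inverse prediction
```

The statistical difficulty, which the likelihood-based confidence regions handle, is that `x0_hat` is a ratio of estimates, so naive interval methods can be badly calibrated.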
Statistics and Computing | 2001
Ruggero Bellio; Alessandra Rosalba Brazzale
This paper presents a set of REDUCE procedures that make a number of existing higher-order asymptotic results available for both theoretical and practical research. Attention has been restricted to the context of exact and approximate inference for a parameter of interest conditionally either on an ancillary statistic or on a statistic partially sufficient for the nuisance parameter. In particular, the procedures apply to regression-scale models and multiparameter exponential families. Most of them support algebraic computation as well as numerical calculation for a given data set. Examples illustrate the code.
Computational Statistics & Data Analysis | 2007
Ruggero Bellio
Bounded-influence estimation is a well-developed and useful theory. It provides fairly efficient estimators which are robust to outliers and local model departures. However, its use has been limited thus far, mainly because of computational difficulties. A careful implementation in modern statistical software can effectively overcome the numerical problems of bounded-influence estimators. The proposed approach is based on general methods for solving estimating equations, together with suitable methods developed in the statistical literature, such as the delta algorithm and nested iterations. The focus is on Mallows estimation in generalized linear models and on optimal bias-robust estimation in models for independent data, such as regression models with asymmetrically distributed errors.
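A minimal instance of bounded-influence estimation is the Huber M-estimate of location computed by iteratively reweighted least squares; this is a generic sketch of the idea, not the Mallows implementation discussed in the paper:

```python
import numpy as np

def huber_location(y, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least
    squares: observations further than k*scale from the current centre
    are down-weighted, so a single outlier cannot dominate the fit."""
    y = np.asarray(y, dtype=float)
    mu = np.median(y)
    scale = np.median(np.abs(y - mu)) / 0.6745   # MAD-based scale estimate
    for _ in range(max_iter):
        r = (y - mu) / scale
        w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))  # Huber weights
        mu_new = float(np.sum(w * y) / np.sum(w))
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

clean = np.linspace(-1.0, 1.0, 99)           # symmetric sample centred at 0
contaminated = np.append(clean, 1000.0)      # one gross outlier
robust_centre = huber_location(contaminated)
naive_mean = float(contaminated.mean())      # dragged to about 10 by the outlier
```

The outlier shifts the sample mean by ten units but barely moves the bounded-influence estimate, illustrating the robustness the paper's numerical methods make practical in richer models.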
Experimental Physiology | 2015
Maria Pia Francescato; Valentina Cettolo; Ruggero Bellio
Keir et al. (2014) have recently compared different techniques used to assemble breath-by-breath pulmonary O2 uptake data from repeated step transitions in the moderate-intensity exercise domain. In particular, the authors evaluated the quality of the assembling methods by comparing what they called the 'model confidence CI95', where 'CI95 is equal to the SE (derived from the sum of squared residuals from the model parameter estimates) multiplied by the t-distribution value for the 2.5% two-tailed dimensions'. The authors conclude that the data-processing technique had no effect on parameter estimates. However, 'the narrowest interval for CI95 occurred when individual trials were linearly interpolated on a second-by-second basis and ensemble averaged' before running the non-linear regression procedure. It can easily be demonstrated, and Keir et al. (2014) can do it using their experimental data, that a linear interpolation on a 0.5 s-by-0.5 s basis or a 0.1 s-by-0.1 s basis results in a decrease of the SE, better called the asymptotic standard error or ASE, which is calculated from the variance–covariance matrix of parameter estimates on the basis of the number of data points used by the non-linear regression procedure. As a consequence, the corresponding CI95 will also be narrower, in comparison to the values obtained by the linear interpolation on a second-by-second basis, roughly by the factor √(1 s/0.5 s) = √2 or √(1 s/0.1 s) = √10, respectively (Francescato et al. 2014b). Following this reasoning, the narrowest CI95 would be obtained by linearly interpolating the data on an infinitesimal-by-infinitesimal basis. As discussed in detail by Francescato et al. (2014b), this phenomenon is due to the fact that the linear interpolation increases the number of data points used by the non-linear regression procedure without adding new information ('cloning' effect).
Indeed, it can easily be shown that the same phenomenon takes place even for the average (and corresponding standard error) of a simple series of data, analysed as raw data or taking them twice. In the above conditions, the CI95 no longer meets the definition of a confidence interval, i.e. the range of values that, over a set of notional repeated samples, would contain the true parameter of interest with a prespecified probability, usually set at 95%. The simulation performed by Francescato et al. (2014a) highlighted that when the individual trials were processed to obtain regular 1 s bins and then ensemble averaged before running the non-linear regression procedure, the 'true' (known) phase II time constant was included within the estimated CI95 in only 70% of cases. This means that, for this data-processing technique, the estimated CI95 does not satisfy the statistical definition of a confidence interval at the 95% level. The same simulation showed that when the individual trials are simply combined into a single data set, the 'true' (known) phase II time constant was included within the estimated CI95 in more than 94% of cases, thus coming very close to the definition of a confidence interval at the 95% level.

In conclusion, the non-linear regression procedure applied following interpolation of individual trials on a second-by-second basis and ensemble averaging results in a misleading CI95. As a consequence, in contrast to Keir et al. (2014), we believe that the CI95 estimated in this manner cannot be used as a figure of merit to evaluate the quality of different data-processing techniques. Narrower confidence intervals should instead be obtained by increasing the information used (e.g. increasing the number of repeats).
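The 'cloning' effect described above is easy to reproduce: duplicating a sample shrinks the naively computed standard error of the mean by about √2 while adding no information whatsoever (a minimal sketch of the phenomenon, not the authors' simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(10.0, 2.0, 50)           # one series of breath-like data
cloned = np.concatenate([sample, sample])    # the same data taken twice

def se_mean(x):
    """Naive standard error of the mean, as regression software reports it."""
    return float(np.std(x, ddof=1) / np.sqrt(len(x)))

se_raw = se_mean(sample)
se_cloned = se_mean(cloned)
# Cloning leaves the mean unchanged but shrinks the reported SE by
# roughly sqrt(2), even though no new information was added.
ratio = se_raw / se_cloned
```

The same mechanism operates when interpolation multiplies the number of data points fed to a non-linear regression: the reported ASE shrinks while the real uncertainty stays put.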