Christoph Bergmeir
University of Granada
Publications
Featured research published by Christoph Bergmeir.
International Journal of Sports Medicine | 2013
Alejandro Santos-Lozano; F. Santín-Medeiros; G. Cardon; Gema Torres-Luque; R. Bailón; Christoph Bergmeir; Jonatan R. Ruiz; Alejandro Lucia; Nuria Garatachea
The aims of this study were: to compare energy expenditure (EE) estimated from the existing GT3X accelerometer equations and EE measured with indirect calorimetry; to define new equations for EE estimation with the GT3X in youth, adults and older people; and to define GT3X vector magnitude (VM) cut points allowing physical activity (PA) intensity to be classified in the aforementioned age-groups. The study comprised 31 youth, 31 adults and 35 older people. Participants wore the GT3X (setup: 1-s epoch) over their right hip during 6 conditions of 10-min duration each: resting, treadmill walking/running at 3, 5, 7, and 9 km · h⁻¹, and repeated sit-stands (30 times · min⁻¹). The GT3X proved to be a good tool to predict EE in youth and adults (able to discriminate between the aforementioned conditions), but not in the elderly. We defined the following equations: for all age-groups combined, EE (METs)=2.7406+0.00056 · VM activity counts (counts · min⁻¹)-0.008542 · age (years)-0.01380 · body mass (kg); for youth, METs=1.546618+0.000658 · VM activity counts (counts · min⁻¹); for adults, METs=2.8323+0.00054 · VM activity counts (counts · min⁻¹)-0.059123 · body mass (kg)+1.4410 · gender (women=1, men=2); and for the elderly, METs=2.5878+0.00047 · VM activity counts (counts · min⁻¹)-0.6453 · gender (women=1, men=2). Activity counts derived from the VM yielded a more accurate EE estimation than those derived from the Y-axis. The GT3X represents a step forward in triaxial technology for estimating EE. However, age-specific equations must be used to ensure the correct use of this device.
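The reported equations can be applied directly once VM activity counts and the relevant covariates are available. Below is a minimal R sketch that encodes the equations exactly as given in the abstract; the function name and interface are illustrative, and gender is coded as in the paper (women = 1, men = 2).

```r
# Minimal R sketch encoding the MET equations reported in the abstract.
# Inputs: vm = GT3X vector-magnitude activity counts (counts/min),
# age in years, mass in kg, gender coded as women = 1, men = 2.
gt3x_mets <- function(vm, age = NULL, mass = NULL, gender = NULL,
                      group = c("all", "youth", "adults", "elderly")) {
  group <- match.arg(group)
  switch(group,
    all     = 2.7406 + 0.00056 * vm - 0.008542 * age - 0.01380 * mass,
    youth   = 1.546618 + 0.000658 * vm,
    adults  = 2.8323 + 0.00054 * vm - 0.059123 * mass + 1.4410 * gender,
    elderly = 2.5878 + 0.00047 * vm - 0.6453 * gender
  )
}

# Example: an adult woman (gender = 1) weighing 60 kg at 3000 counts/min.
gt3x_mets(vm = 3000, mass = 60, gender = 1, group = "adults")
```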
Information Sciences | 2014
Lala Septem Riza; Andrzej Janusz; Christoph Bergmeir; Chris Cornelis; Francisco Herrera; Dominik Ślęzak; José Manuel Benítez
The package RoughSets, written mainly in the R language, provides implementations of methods from rough set theory (RST) and fuzzy rough set theory (FRST) for data modeling and analysis. It considers not only fundamental concepts (e.g., indiscernibility relations and lower/upper approximations), but also their applications in many tasks: discretization, feature selection, instance selection, rule induction, and nearest neighbor-based classifiers. The package architecture and examples are presented in order to introduce it to researchers and practitioners. Researchers can build new models by defining custom functions as parameters, and practitioners are able to perform analysis and prediction of their data using the available algorithms. Additionally, we provide a review and comparison of well-known software packages. Overall, our package should be considered as an alternative software library for analyzing data based on RST and FRST.
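For readers unfamiliar with the underlying notions, the following base-R snippet is a from-scratch illustration (deliberately not the RoughSets API) of two of the fundamental concepts the package implements: indiscernibility classes and lower/upper approximations of a decision class. The toy decision table and attribute names are invented for the example.

```r
# Toy decision table: two condition attributes (a, b) and a decision d.
dt <- data.frame(a = c(1, 1, 2, 2, 1),
                 b = c(0, 0, 1, 1, 0),
                 d = c("yes", "no", "yes", "yes", "yes"))

# Indiscernibility classes w.r.t. the condition attributes {a, b}:
# objects with identical attribute values fall into the same class.
ind <- split(seq_len(nrow(dt)), interaction(dt$a, dt$b, drop = TRUE))

# Lower/upper approximation of the concept X = {objects with d == "yes"}.
X <- which(dt$d == "yes")
lower <- unlist(Filter(function(cls) all(cls %in% X), ind))
upper <- unlist(Filter(function(cls) any(cls %in% X), ind))
lower  # objects certainly in X given {a, b}
upper  # objects possibly in X given {a, b}
```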
Computer Methods and Programs in Biomedicine | 2012
Christoph Bergmeir; Miguel García Silvente; José Manuel Benítez
In order to automate cervical cancer screening tests, one of the most important and longstanding challenges is the segmentation of cell nuclei in the stained specimens. Though nuclei of isolated cells in high-quality acquisitions are often easy to segment, the problem lies in segmenting large numbers of nuclei with various characteristics under differing acquisition conditions in high-resolution scans of complete microscope slides. We implemented a system that enables processing of full-resolution images and propose a new algorithm for segmenting the nuclei under adequate control of the expert user. The system can work automatically or with interactive guidance, to allow for segmentation across the whole range of slide and image characteristics. It facilitates data storage and the interaction of technical and medical experts, especially through its web-based architecture. The proposed algorithm localizes cell nuclei using a voting scheme and prior knowledge, before determining the exact shape of the nuclei by means of an elastic segmentation algorithm. After noise removal with mean-shift and median filtering, edges are extracted with a Canny edge detection algorithm. Motivated by the observation that cell nuclei are surrounded by cytoplasm and that their shape is roughly elliptical, edges adjacent to the background are removed. A randomized Hough transform for ellipses finds candidate nuclei, which are then processed by a level set algorithm. The algorithm was tested and compared to other algorithms on a database containing 207 images acquired from two different microscope slides, with promising results.
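As an illustration of the noise-removal stage only, here is a hedged base-R sketch of a square-window median filter applied to a grayscale image stored as a numeric matrix; it is not the authors' implementation, and the window size is arbitrary.

```r
# Median filtering of a grayscale image held as a numeric matrix.
# Border pixels (within half the window size of the edge) are left unchanged.
median_filter <- function(img, k = 3) {
  stopifnot(k %% 2 == 1)
  r <- (k - 1) / 2
  out <- img
  for (i in (1 + r):(nrow(img) - r)) {
    for (j in (1 + r):(ncol(img) - r)) {
      out[i, j] <- median(img[(i - r):(i + r), (j - r):(j + r)])
    }
  }
  out
}

# Example on a small synthetic noisy image.
set.seed(1)
img <- matrix(runif(100 * 100), 100, 100)
smoothed <- median_filter(img, k = 5)
```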
Information Sciences | 2016
Mabel González; Christoph Bergmeir; Isaac Triguero; Yanet Rodriguez; José Manuel Benítez
Positive unlabeled time series classification has become an important area during the last decade, as vast amounts of unlabeled time series data are often available while obtaining the corresponding labels is difficult. In this situation, positive unlabeled learning is a suitable option to mitigate the lack of labeled examples. In particular, self-training is a widely used technique due to its simplicity and adaptability. Within this technique, the stopping criterion, i.e., the decision of when to stop labeling, is a critical part, especially in the positive unlabeled context. We propose a self-training method that follows the positive unlabeled approach for time series classification, together with a family of parameter-free stopping criteria for this method. Our proposal uses a graphical analysis, applied to the minimum distances obtained with a k-nearest neighbor base learner, to estimate the class boundary. The proposed method is evaluated in an experimental study involving various time series classification datasets. The results show that our method outperforms previous models in the transductive setting.
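A rough base-R sketch of the general idea (not the paper's exact method or stopping criteria) is shown below: unlabeled series are absorbed one at a time into the positive set by a 1-nearest-neighbor rule with Euclidean distance, and the sequence of minimum distances is recorded, since it is on such a curve that a graphical stopping criterion would operate. All names and the synthetic data are illustrative.

```r
# Self-training for positive unlabeled time series data with a 1-NN
# (Euclidean) base learner. Rows of the matrices are series.
self_train_pu <- function(positive, unlabeled, steps = nrow(unlabeled)) {
  min_dists <- numeric(0)
  labeled <- positive
  pool <- unlabeled
  for (s in seq_len(min(steps, nrow(pool)))) {
    # distance of every pooled series to its nearest labeled series
    d <- apply(pool, 1, function(x)
      min(sqrt(colSums((t(labeled) - x)^2))))
    i <- which.min(d)
    min_dists <- c(min_dists, d[i])
    labeled <- rbind(labeled, pool[i, , drop = FALSE])
    pool <- pool[-i, , drop = FALSE]
  }
  min_dists  # curve on which a graphical stopping criterion would operate
}

# Example with synthetic series (rows = series, columns = time points).
set.seed(42)
pos <- matrix(rnorm(5 * 50), 5, 50)
unl <- matrix(rnorm(20 * 50), 20, 50)
plot(self_train_pu(pos, unl), type = "b")
```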
Computational Statistics & Data Analysis | 2014
Christoph Bergmeir; Mauro Costantini; José Manuel Benítez
The usefulness of a predictor evaluation framework which combines a blocked cross-validation scheme with directional accuracy measures is investigated. The advantage of using a blocked cross-validation scheme over the standard out-of-sample procedure is that cross-validation yields more precise estimates of the prediction error, since it makes full use of the data. In order to quantify the gain in precision when directional accuracy measures are considered, a Monte Carlo analysis using univariate and multivariate models is provided. The experiments indicate that more precise estimates are obtained with the blocked cross-validation procedure. For illustration purposes, an application to forecasting the UK interest rate is carried out. The results show that, in such a small-sample situation, the cross-validation scheme may have considerable advantages over the standard out-of-sample evaluation procedure, as it may help to overcome problems induced by the limited information that directional accuracy measures contain due to their binary nature.
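The following base-R sketch illustrates, under simplified assumptions, what a blocked cross-validation estimate of directional accuracy can look like for an AR(1) model fitted by least squares; the blocking scheme, model, and accuracy measure are illustrative rather than the paper's exact setup.

```r
# Blocked CV with a directional accuracy measure for a least-squares AR(1).
set.seed(123)
y <- arima.sim(list(ar = 0.6), n = 300)

# Embed the series: columns are y_t and y_{t-1}.
emb <- embed(as.numeric(y), 2)
colnames(emb) <- c("y", "lag1")
n <- nrow(emb)

K <- 5
blocks <- split(seq_len(n), cut(seq_len(n), K, labels = FALSE))  # contiguous blocks

dir_acc <- sapply(blocks, function(test) {
  train <- setdiff(seq_len(n), test)
  fit <- lm(y ~ lag1, data = as.data.frame(emb[train, ]))
  pred <- predict(fit, newdata = as.data.frame(emb[test, , drop = FALSE]))
  # directional accuracy: was the sign of the change predicted correctly?
  mean(sign(pred - emb[test, "lag1"]) == sign(emb[test, "y"] - emb[test, "lag1"]))
})
dir_acc        # per-block directional accuracy
mean(dir_acc)  # blocked-CV estimate
```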
Proceedings of SPIE | 2010
Christoph Bergmeir; M. García Silvente; J. Esquivias López-Cuervo; José Manuel Benítez
Screening plays an important role in the fight against cervical cancer. One of the most challenging parts of automating the screening process is the segmentation of nuclei in the cervical cell images, as the difficulty of performing this segmentation accurately varies widely among nuclei. We present an algorithm to perform this task. After background determination in an overview image and interactive identification of regions of interest (ROIs) at lower magnification levels, ROIs are extracted and processed at the full magnification level of 40x. Subsequent to initial background removal, the image regions are smoothed by mean-shift and median filtering. Then, segmentations are generated by an adaptive threshold. The connected components in the resulting segmentations are filtered with morphological operators by characteristics such as shape, size and roundness. The algorithm was tested on a set of 50 images and was found to outperform other methods.
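As an illustration of the adaptive-threshold step only, the base-R sketch below marks a pixel as foreground when it is darker than its local mean by a fixed offset, assuming dark nuclei on a brighter background; the window size and offset are arbitrary and not taken from the paper.

```r
# Adaptive (local-mean) thresholding of a grayscale image held as a numeric
# matrix with intensities in [0, 1]. Border pixels are left as background.
adaptive_threshold <- function(img, k = 15, offset = 0.05) {
  stopifnot(k %% 2 == 1)
  r <- (k - 1) / 2
  fg <- matrix(FALSE, nrow(img), ncol(img))
  for (i in (1 + r):(nrow(img) - r)) {
    for (j in (1 + r):(ncol(img) - r)) {
      local_mean <- mean(img[(i - r):(i + r), (j - r):(j + r)])
      fg[i, j] <- img[i, j] < local_mean - offset
    }
  }
  fg  # logical mask of candidate nucleus pixels
}

# Example on a synthetic image with a darker blob in the centre.
img <- matrix(0.8, 100, 100)
img[40:60, 40:60] <- 0.3
mask <- adaptive_threshold(img)
```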
Computational Statistics & Data Analysis | 2018
Christoph Bergmeir; Rob J. Hyndman; Bonsoo Koo
One of the most widely used standard procedures for model evaluation in classification and regression is K-fold cross-validation (CV). However, when it comes to time series forecasting, because of the inherent serial correlation and potential non-stationarity of the data, its application is not straightforward, and practitioners often replace it with an out-of-sample (OOS) evaluation. It is shown that for purely autoregressive models, the use of standard K-fold CV is possible provided the models considered have uncorrelated errors. Such a setup occurs, for example, when the models nest a more appropriate model. This is very common when machine learning methods are used for prediction, where CV can also control for overfitting the data. Theoretical insights supporting these arguments are presented, along with a simulation study and a real-world example. It is shown empirically that K-fold CV performs favourably compared to both OOS evaluation and other time-series-specific techniques such as non-dependent cross-validation.
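The practical recipe suggested by this result can be sketched in a few lines of R: embed the series into a lagged design matrix for a purely autoregressive model and then apply ordinary randomized K-fold CV to its rows. The AR order, error measure, and simulated data below are illustrative assumptions, not the paper's experimental setup.

```r
# Standard K-fold CV on a purely autoregressive model via lagged embedding.
set.seed(1)
y <- arima.sim(list(ar = c(0.5, -0.2)), n = 500)

p <- 2                                     # assumed AR order
emb <- as.data.frame(embed(as.numeric(y), p + 1))
names(emb) <- c("y", paste0("lag", 1:p))

K <- 5
folds <- sample(rep(1:K, length.out = nrow(emb)))   # random fold assignment

cv_mse <- sapply(1:K, function(k) {
  fit <- lm(y ~ ., data = emb[folds != k, ])
  pred <- predict(fit, newdata = emb[folds == k, ])
  mean((emb$y[folds == k] - pred)^2)
})
mean(cv_mse)   # K-fold CV estimate of the one-step-ahead MSE
```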
IEEE Transactions on Neural Networks | 2012
Christoph Bergmeir; Isaac Triguero; Daniel Molina; José Luis Aznarte; José Manuel Benítez
In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive with models commonly used in the field.
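To make the idea of a memetic fitting procedure concrete, here is a generic base-R sketch (not the paper's NCSTAR algorithm) that combines an evolutionary population search with Nelder-Mead local refinement of the best individual, applied to an illustrative least-squares AR(2) fitting objective.

```r
# Generic memetic minimizer: evolutionary search plus local refinement.
memetic_minimize <- function(loss, dim, pop_size = 20, generations = 30) {
  pop <- matrix(runif(pop_size * dim, -1, 1), pop_size, dim)
  for (g in seq_len(generations)) {
    fit <- apply(pop, 1, loss)
    # local search: refine the current best individual with Nelder-Mead
    best <- which.min(fit)
    pop[best, ] <- optim(pop[best, ], loss)$par
    fit[best] <- loss(pop[best, ])
    # selection + blend crossover + Gaussian mutation
    parents <- pop[order(fit)[1:(pop_size / 2)], , drop = FALSE]
    kids <- (parents + parents[sample(nrow(parents)), , drop = FALSE]) / 2
    kids <- kids + matrix(rnorm(length(kids), sd = 0.1), nrow(kids))
    pop <- rbind(parents, kids)
  }
  pop[which.min(apply(pop, 1, loss)), ]
}

# Example: fit AR(2) coefficients to a simulated series by minimizing the
# in-sample squared error (purely illustrative target function).
set.seed(7)
y <- as.numeric(arima.sim(list(ar = c(0.6, -0.3)), n = 200))
sse <- function(phi) sum((y[3:200] - phi[1] * y[2:199] - phi[2] * y[1:198])^2)
memetic_minimize(sse, dim = 2)
```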
IEEE International Conference on Fuzzy Systems | 2014
Lala Septem Riza; Christoph Bergmeir; Francisco Herrera; José Manuel Benítez
Learning from data is the process of constructing a model from available training data so that it can be used to make predictions for new data. Nowadays, several software libraries are available to carry out this task. frbs is an R package aimed at constructing models from data based on fuzzy rule-based systems (FRBSs), employing learning procedures from computational intelligence (e.g., neural networks and genetic algorithms) to tackle classification and regression problems. For the learning process, frbs considers well-known methods, such as Wang and Mendel's technique, ANFIS, HyFIS, DENFIS, subtractive clustering, SLAVE, and several others. Many options are available for conjunction, disjunction, and implication operators, defuzzification methods, and membership functions (e.g., triangular, trapezoidal, Gaussian). It has been developed in the R language, an open-source environment for scientific computing. In this paper, we also provide some examples of the usage of the package and a comparison with other software libraries implementing FRBSs. We conclude that frbs should be considered as an alternative software library for learning from data.
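To give a flavour of the FRBS building blocks listed above without relying on the exact frbs API, the following base-R snippet implements a tiny two-rule Mamdani-style system with triangular membership functions, conjunction via the minimum, and centroid defuzzification; the rule definitions and all numbers are invented for illustration.

```r
# Triangular membership function with support [a, c] and peak at b.
trimf <- function(x, a, b, c) pmax(pmin((x - a) / (b - a), (c - x) / (c - b)), 0)

mamdani_predict <- function(x1, x2) {
  # rule 1: IF x1 is LOW  AND x2 is LOW  THEN y is LOW
  # rule 2: IF x1 is HIGH AND x2 is HIGH THEN y is HIGH
  low  <- function(x) trimf(x, -5, 0, 5)     # membership peaking at 0
  high <- function(x) trimf(x, 5, 10, 15)    # membership peaking at 10
  w1 <- min(low(x1), low(x2))                # conjunction via min
  w2 <- min(high(x1), high(x2))
  ygrid <- seq(0, 10, by = 0.1)
  # clip each rule's output set by its activation, aggregate with max
  agg <- pmax(pmin(w1, low(ygrid)), pmin(w2, high(ygrid)))
  sum(ygrid * agg) / sum(agg)                # centroid defuzzification
}

mamdani_predict(2, 3)   # mostly LOW inputs  -> output near the low end
mamdani_predict(8, 9)   # mostly HIGH inputs -> output near the high end
```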
IEEE Transactions on Cloud Computing | 2016
Francisco Javier Baldan; Sergio Ramírez-Gallego; Christoph Bergmeir; Jose M. Benitez-Sanchez; Francisco Herrera
Cloud Computing is an essential paradigm of computing services based on the “elasticity” property, where available resources are adapted efficiently to different workloads over time. In elastic platforms, the forecasting component can be considered by far the most important element and the differentiating factor when comparing such systems; workload forecasting is one of the problems that must be solved to achieve a truly elastic system. When properly addressed, the cloud workload forecasting problem becomes a very interesting case study. As there is no general methodology in the literature that addresses this problem analytically and from a time series forecasting perspective (even less so in the cloud field), we propose a combination of these tools based on a state-of-the-art forecasting methodology, which we have enhanced with elements such as a specific cost function, statistical tests, and visual analysis. The insights obtained from this analysis are used to detect the asymmetrical nature of the forecasting problem and to find the best forecasting model from the viewpoint of the current state of the art in time series forecasting. From an operational point of view, the most interesting forecasts are those over a short time horizon, so we focus on these. To show the feasibility of this methodology, we apply it to several realistic workload datasets from different datacenters. The results indicate that the analyzed series are non-linear in nature and that no seasonal patterns can be found. Moreover, on the analyzed datasets, the penalty cost typically included in the SLA can be reduced by 30 percent on average.
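A hedged R sketch of the kind of asymmetric penalty cost such an analysis might use is given below: under-forecasting (provisioning too few resources and risking SLA violations) is weighted more heavily than over-forecasting. The weights, the synthetic workload, and the two simple forecasters are illustrative assumptions, not taken from the paper.

```r
# Asymmetric penalty: under-forecasting costs more per unit than over-forecasting.
asymmetric_cost <- function(actual, forecast, under_w = 3, over_w = 1) {
  err <- actual - forecast
  mean(ifelse(err > 0, under_w * err, over_w * (-err)))
}

# Compare two simple one-step-ahead forecasters on a synthetic workload.
set.seed(3)
load <- pmax(0, 100 + as.numeric(arima.sim(list(ar = 0.8), n = 200, sd = 10)))
naive_fc <- head(load, -1)            # forecast = last observed value
buffered_fc <- naive_fc * 1.10        # naive forecast plus a 10% safety margin
actual <- tail(load, -1)

asymmetric_cost(actual, naive_fc)
asymmetric_cost(actual, buffered_fc)  # over-provisioning is cheaper under this cost
```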